In particle physics, the electroweak interaction or electroweak force is the unified description of two of the fundamental interactions of nature: electromagnetism (electromagnetic interaction) and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single force. Thus, if the temperature is high enough – approximately 10^15 K – then the electromagnetic force and weak force merge into a combined electroweak force. During the quark epoch (shortly after the Big Bang), the electroweak force split into the electromagnetic and weak force. It is thought that the required temperature of 10^15 K has not been seen widely throughout the universe since before the quark epoch, and currently the highest human-made temperature in thermal equilibrium is around 5.5×10^12 K (from the Large Hadron Collider). Sheldon Glashow, Abdus Salam, and Steven Weinberg were awarded the 1979 Nobel Prize in Physics for their contributions to the unification of the weak and electromagnetic interaction between elementary particles, known as the Weinberg–Salam theory. The existence of the electroweak interactions was experimentally established in two stages, the first being the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and the second in 1983 by the UA1 and the UA2 collaborations that involved the discovery of the W and Z gauge bosons in proton–antiproton collisions at the converted Super Proton Synchrotron. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel Prize for showing that the electroweak theory is renormalizable.

== History ==

After the Wu experiment in 1956 discovered parity violation in the weak interaction, a search began for a way to relate the weak and electromagnetic interactions. Extending his doctoral advisor Julian Schwinger's work, Sheldon Glashow first experimented with introducing two different symmetries, one chiral and one achiral, and combined them such that their overall symmetry was unbroken. This did not yield a renormalizable theory, and its gauge symmetry had to be broken by hand as no spontaneous mechanism was known, but it predicted a new particle, the Z boson. This received little notice, as it matched no experimental finding. In 1964, Salam and John Clive Ward had the same idea, but predicted a massless photon and three massive gauge bosons with a manually broken symmetry. Later, around 1967, while investigating spontaneous symmetry breaking, Weinberg found a set of symmetries predicting a massless, neutral gauge boson. Initially rejecting such a particle as useless, he later realized his symmetries produced the electroweak force, and he proceeded to predict rough masses for the W and Z bosons. Significantly, he suggested this new theory was renormalizable. In 1971, Gerardus 't Hooft proved that spontaneously broken gauge symmetries are renormalizable even with massive gauge bosons.

== Formulation ==

Mathematically, electromagnetism is unified with the weak interactions as a Yang–Mills field with an SU(2) × U(1) gauge group, which describes the formal operations that can be applied to the electroweak gauge fields without changing the dynamics of the system. These fields are the weak isospin fields W1, W2, and W3, and the weak hypercharge field B. This invariance is known as electroweak symmetry.
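For readers who want to see the SU(2) structure concretely, here is a minimal numerical sketch (Python with NumPy; the snippet and its variable names are illustrative additions, not part of the original article). It takes the standard doublet-representation generators T_a = σ_a/2 and checks one instance of the su(2) commutation relation [T_a, T_b] = i ε_abc T_c that underlies the three weak-isospin fields W1, W2, W3.

```python
import numpy as np

# Pauli matrices; in the doublet representation the weak-isospin
# generators of SU(2) are T_a = sigma_a / 2.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
T = [s / 2 for s in sigma]

# su(2) algebra: [T_a, T_b] = i * epsilon_abc * T_c (check the (1,2,3) case).
commutator = T[0] @ T[1] - T[1] @ T[0]
assert np.allclose(commutator, 1j * T[2])
print("[T1, T2] = i T3 holds in the doublet representation")
```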
The generators of SU(2) and U(1) are given the name weak isospin (labeled T) and weak hypercharge (labeled Y) respectively. These then give rise to the gauge bosons that mediate the electroweak interactions – the three W bosons of weak isospin (W1, W2, and W3), and the B boson of weak hypercharge, respectively, all of which are "initially" massless. These are not physical fields yet, before spontaneous symmetry breaking and the associated Higgs mechanism.

In the Standard Model, the observed physical particles, the W± and Z0 bosons, and the photon, are produced through the spontaneous symmetry breaking of the electroweak symmetry SU(2) × U(1)Y to U(1)em, effected by the Higgs mechanism (see also Higgs boson), an elaborate quantum-field-theoretic phenomenon that "spontaneously" alters the realization of the symmetry and rearranges degrees of freedom. The electric charge arises as the particular nontrivial linear combination of YW (weak hypercharge) and the T3 component of weak isospin ({\displaystyle Q=T_{3}+{\tfrac {1}{2}}\,Y_{\mathrm {W} }}) that does not couple to the Higgs boson. That is to say: the Higgs and the electromagnetic field have no effect on each other, at the level of the fundamental forces ("tree level"), while any other combination of the hypercharge and the weak isospin must interact with the Higgs. This causes an apparent separation between the weak force, which interacts with the Higgs, and electromagnetism, which does not. Mathematically, the electric charge is the specific combination of the hypercharge and T3 given above. U(1)em (the symmetry group of electromagnetism only) is defined to be the group generated by this special linear combination, and the symmetry described by the U(1)em group is unbroken, since it does not directly interact with the Higgs.

The above spontaneous symmetry breaking makes the W3 and B bosons coalesce into two different physical bosons with different masses – the Z0 boson, and the photon (γ),

{\displaystyle {\begin{pmatrix}\gamma \\Z^{0}\end{pmatrix}}={\begin{pmatrix}\cos \theta _{\text{W}}&\sin \theta _{\text{W}}\\-\sin \theta _{\text{W}}&\cos \theta _{\text{W}}\end{pmatrix}}{\begin{pmatrix}B\\W_{3}\end{pmatrix}},}

where θW is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the (W3, B) plane, by the angle θW. This also introduces a mismatch between the mass of the Z0 and the mass of the W± particles (denoted as mZ and mW, respectively),

{\displaystyle m_{\text{Z}}={\frac {m_{\text{W}}}{\,\cos \theta _{\text{W}}\,}}~.}

The W1 and W2 bosons, in turn, combine to produce the charged massive bosons W±:

{\displaystyle W^{\pm }={\frac {1}{\sqrt {2\,}}}\,{\bigl (}\,W_{1}\mp iW_{2}\,{\bigr )}~.}
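As a quick numerical illustration of the tree-level relation mZ = mW / cos θW and of the (B, W3) rotation above, the following Python sketch uses approximate measured values (mW ≈ 80.4 GeV, sin²θW ≈ 0.231) that are assumed here for illustration and are not quoted from this article.

```python
import numpy as np

# Assumed illustrative inputs (approximate measured values, not from the text above):
m_W = 80.4             # W boson mass in GeV
sin2_theta_W = 0.231   # sin^2 of the weak mixing angle

theta_W = np.arcsin(np.sqrt(sin2_theta_W))

# Tree-level mass relation quoted above: m_Z = m_W / cos(theta_W)
m_Z = m_W / np.cos(theta_W)
print(f"m_Z ~ {m_Z:.1f} GeV")   # roughly 91-92 GeV, close to the measured Z mass

# The (B, W3) -> (photon, Z) change of basis is a rotation by theta_W,
# so the mixing matrix must be orthogonal.
R = np.array([[ np.cos(theta_W), np.sin(theta_W)],
              [-np.sin(theta_W), np.cos(theta_W)]])
assert np.allclose(R @ R.T, np.eye(2))
```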
== Lagrangian ==

=== Before electroweak symmetry breaking ===

The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking manifests,

{\displaystyle {\mathcal {L}}_{\mathrm {EW} }={\mathcal {L}}_{g}+{\mathcal {L}}_{f}+{\mathcal {L}}_{h}+{\mathcal {L}}_{y}~.}

The {\displaystyle {\mathcal {L}}_{g}} term describes the interaction between the three W vector bosons and the B vector boson,

{\displaystyle {\mathcal {L}}_{g}=-{\tfrac {1}{4}}W_{a}^{\mu \nu }W_{\mu \nu }^{a}-{\tfrac {1}{4}}B^{\mu \nu }B_{\mu \nu },}

where {\displaystyle W_{a}^{\mu \nu }} ({\displaystyle a=1,2,3}) and {\displaystyle B^{\mu \nu }} are the field strength tensors for the weak isospin and weak hypercharge gauge fields.

{\displaystyle {\mathcal {L}}_{f}} is the kinetic term for the Standard Model fermions. The interaction of the gauge bosons and the fermions is through the gauge covariant derivative,

{\displaystyle {\mathcal {L}}_{f}={\overline {Q}}_{j}iD\!\!\!\!/\;Q_{j}+{\overline {u}}_{j}iD\!\!\!\!/\;u_{j}+{\overline {d}}_{j}iD\!\!\!\!/\;d_{j}+{\overline {L}}_{j}iD\!\!\!\!/\;L_{j}+{\overline {e}}_{j}iD\!\!\!\!/\;e_{j},}

where the subscript j sums over the three generations of fermions; Q, u, and d are the left-handed doublet, right-handed singlet up, and right-handed singlet down quark fields; and L and e are the left-handed doublet and right-handed singlet electron fields. The Feynman slash {\displaystyle D\!\!\!\!/} means the contraction of the 4-gradient with the Dirac matrices, defined as

{\displaystyle D\!\!\!\!/\equiv \gamma ^{\mu }D_{\mu },}

and the covariant derivative (excluding the gluon gauge field for the strong interaction) is defined as

{\displaystyle D_{\mu }\equiv \partial _{\mu }-i{\frac {g'}{2}}\,Y\,B_{\mu }-i{\frac {g}{2}}\,T_{j}\,W_{\mu }^{j}.}

Here {\displaystyle Y} is the weak hypercharge and the {\displaystyle T_{j}} are the components of the weak isospin.

The {\displaystyle {\mathcal {L}}_{h}} term describes the Higgs field {\displaystyle h} and its interactions with itself and the gauge bosons,

{\displaystyle {\mathcal {L}}_{h}=|D_{\mu }h|^{2}-\lambda \left(|h|^{2}-{\frac {v^{2}}{2}}\right)^{2},}

where {\displaystyle v} is the vacuum expectation value.

The {\displaystyle {\mathcal {L}}_{y}} term describes the Yukawa interaction with the fermions,

{\displaystyle {\mathcal {L}}_{y}=-y_{u}^{ij}\epsilon ^{ab}\,h_{b}^{\dagger }\,{\overline {Q}}_{ia}u_{j}^{c}-y_{d}^{ij}\,h\,{\overline {Q}}_{i}d_{j}^{c}-y_{e}^{ij}\,h\,{\overline {L}}_{i}e_{j}^{c}+\mathrm {h.c.}~,}

and generates their masses, manifest when the Higgs field acquires a nonzero vacuum expectation value, discussed next. The {\displaystyle y_{k}^{ij}}, for {\displaystyle k\in \{\mathrm {u,d,e} \}}, are matrices of Yukawa couplings.

=== After electroweak symmetry breaking ===

The Lagrangian reorganizes itself as the Higgs field acquires a non-vanishing vacuum expectation value dictated by the potential of the previous section. As a result of this rewriting, the symmetry breaking becomes manifest. In the history of the universe, this is believed to have happened shortly after the hot big bang, when the universe was at a temperature of 159.5±1.5 GeV (assuming the Standard Model of particle physics).
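To relate the crossover scale just quoted (given in natural units) to the roughly 10^15 K figure mentioned in the introduction, one can use the rough correspondence E ≈ k_B T. The sketch below is only an order-of-magnitude conversion, not a precise cosmological statement.

```python
# Rough conversion of the electroweak crossover scale to a temperature,
# assuming E ~ k_B * T (order-of-magnitude only).
k_B_eV_per_K = 8.617e-5   # Boltzmann constant in eV/K
E_eV = 159.5e9            # 159.5 GeV expressed in eV

T_kelvin = E_eV / k_B_eV_per_K
print(f"T ~ {T_kelvin:.1e} K")   # about 1.9e15 K, i.e. of order 10^15 K
```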
Due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows,

{\displaystyle {\mathcal {L}}_{\mathrm {EW} }={\mathcal {L}}_{\mathrm {K} }+{\mathcal {L}}_{\mathrm {N} }+{\mathcal {L}}_{\mathrm {C} }+{\mathcal {L}}_{\mathrm {H} }+{\mathcal {L}}_{\mathrm {HV} }+{\mathcal {L}}_{\mathrm {WWV} }+{\mathcal {L}}_{\mathrm {WWVV} }+{\mathcal {L}}_{\mathrm {Y} }~.}

The kinetic term {\displaystyle {\mathcal {L}}_{\mathrm {K} }} contains all the quadratic terms of the Lagrangian, which include the dynamic terms (the partial derivatives) and the mass terms (conspicuously absent from the Lagrangian before symmetry breaking),

{\displaystyle {\begin{aligned}{\mathcal {L}}_{\mathrm {K} }=\sum _{f}{\overline {f}}(i\partial \!\!\!/\!\;-m_{f})\ f-{\frac {1}{4}}\ A_{\mu \nu }\ A^{\mu \nu }-{\frac {1}{2}}\ W_{\mu \nu }^{+}\ W^{-\mu \nu }+m_{W}^{2}\ W_{\mu }^{+}\ W^{-\mu }\\\qquad -{\frac {1}{4}}\ Z_{\mu \nu }Z^{\mu \nu }+{\frac {1}{2}}\ m_{Z}^{2}\ Z_{\mu }\ Z^{\mu }+{\frac {1}{2}}\ (\partial ^{\mu }\ H)(\partial _{\mu }\ H)-{\frac {1}{2}}\ m_{H}^{2}\ H^{2}~,\end{aligned}}}

where the sum runs over all the fermions of the theory (quarks and leptons), and the fields {\displaystyle A_{\mu \nu }}, {\displaystyle Z_{\mu \nu }}, {\displaystyle W_{\mu \nu }^{-}}, and {\displaystyle W_{\mu \nu }^{+}\equiv (W_{\mu \nu }^{-})^{\dagger }} are given as

{\displaystyle X_{\mu \nu }^{a}=\partial _{\mu }X_{\nu }^{a}-\partial _{\nu }X_{\mu }^{a}+gf^{abc}X_{\mu }^{b}X_{\nu }^{c}~,}

with {\displaystyle X} to be replaced by the relevant field ({\displaystyle A}, {\displaystyle Z}, {\displaystyle W^{\pm }}) and f^abc by the structure constants of the appropriate gauge group.

The neutral current {\displaystyle {\mathcal {L}}_{\mathrm {N} }} and charged current {\displaystyle {\mathcal {L}}_{\mathrm {C} }} components of the Lagrangian contain the interactions between the fermions and gauge bosons,

{\displaystyle {\mathcal {L}}_{\mathrm {N} }=e\,J_{\mu }^{\mathrm {em} }\,A^{\mu }+{\frac {g}{\cos \theta _{W}}}\,(J_{\mu }^{3}-\sin ^{2}\theta _{W}\,J_{\mu }^{\mathrm {em} })\,Z^{\mu }~,}

where {\displaystyle e=g\,\sin \theta _{\mathrm {W} }=g'\,\cos \theta _{\mathrm {W} }~.} The electromagnetic current {\displaystyle J_{\mu }^{\mathrm {em} }} is

{\displaystyle J_{\mu }^{\mathrm {em} }=\sum _{f}q_{f}\,{\overline {f}}\,\gamma _{\mu }\,f~,}

where {\displaystyle q_{f}} is the fermions' electric charge. The neutral weak current {\displaystyle J_{\mu }^{3}} is

{\displaystyle J_{\mu }^{3}=\sum _{f}T_{f}^{3}\,{\overline {f}}\,\gamma _{\mu }\,{\frac {1-\gamma ^{5}}{2}}\,f~,}

where {\displaystyle T_{f}^{3}} is the fermions' weak isospin.
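The relation e = g sin θW = g′ cos θW fixes both gauge couplings once the electromagnetic coupling and the mixing angle are known. The short sketch below uses assumed illustrative inputs (e ≈ 0.31, roughly its value near the Z scale, and sin²θW ≈ 0.231); these numbers are not stated in the article itself.

```python
import math

# Assumed illustrative inputs (not quoted from the article):
e = 0.31               # electromagnetic coupling, roughly sqrt(4*pi*alpha) near the Z scale
sin2_theta_W = 0.231   # sin^2 of the weak mixing angle

sin_tw = math.sqrt(sin2_theta_W)
cos_tw = math.sqrt(1.0 - sin2_theta_W)

# e = g * sin(theta_W) = g' * cos(theta_W), so:
g = e / sin_tw         # SU(2) coupling, roughly 0.65
g_prime = e / cos_tw   # U(1)_Y coupling, roughly 0.35

print(f"g  ~ {g:.2f}")
print(f"g' ~ {g_prime:.2f}")
```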
The charged current part of the Lagrangian is given by

{\displaystyle {\mathcal {L}}_{\mathrm {C} }=-{\frac {g}{\sqrt {2\;}}}\left[\,{\overline {u}}_{i}\,\gamma ^{\mu }\,{\frac {1-\gamma ^{5}}{2}}\,M_{ij}^{\mathrm {CKM} }\,d_{j}+{\overline {\nu }}_{i}\,\gamma ^{\mu }\,{\frac {1-\gamma ^{5}}{2}}\,e_{i}\,\right]W_{\mu }^{+}+\mathrm {h.c.}~,}

where {\displaystyle \nu } is the right-handed singlet neutrino field, and the CKM matrix {\displaystyle M_{ij}^{\mathrm {CKM} }} determines the mixing between mass and weak eigenstates of the quarks.

{\displaystyle {\mathcal {L}}_{\mathrm {H} }} contains the Higgs three-point and four-point self-interaction terms,

{\displaystyle {\mathcal {L}}_{\mathrm {H} }=-{\frac {g\,m_{\mathrm {H} }^{2}}{4\,m_{\mathrm {W} }}}\;H^{3}-{\frac {g^{2}\,m_{\mathrm {H} }^{2}}{32\,m_{\mathrm {W} }^{2}}}\;H^{4}~.}

{\displaystyle {\mathcal {L}}_{\mathrm {HV} }} contains the Higgs interactions with the gauge vector bosons,

{\displaystyle {\mathcal {L}}_{\mathrm {HV} }=\left(g\,m_{\mathrm {W} }\,H+{\frac {g^{2}}{4}}\;H^{2}\right)\left(W_{\mu }^{+}\,W^{-\mu }+{\frac {1}{2\,\cos ^{2}\theta _{\mathrm {W} }}}\;Z_{\mu }\,Z^{\mu }\right)~.}

{\displaystyle {\mathcal {L}}_{\mathrm {WWV} }} contains the gauge three-point self-interactions,

{\displaystyle {\mathcal {L}}_{\mathrm {WWV} }=-i\,g\left[\left(W_{\mu \nu }^{+}\,W^{-\mu }-W^{+\mu }\,W_{\mu \nu }^{-}\right)\left(A^{\nu }\,\sin \theta _{\mathrm {W} }-Z^{\nu }\,\cos \theta _{\mathrm {W} }\right)+W_{\nu }^{-}\,W_{\mu }^{+}\left(A^{\mu \nu }\,\sin \theta _{\mathrm {W} }-Z^{\mu \nu }\,\cos \theta _{\mathrm {W} }\right)\right]~.}

{\displaystyle {\mathcal {L}}_{\mathrm {WWVV} }} contains the gauge four-point self-interactions,

{\displaystyle {\begin{aligned}{\mathcal {L}}_{\mathrm {WWVV} }=-{\frac {g^{2}}{4}}{\Biggl \{}\ &{\Bigl [}\,2\,W_{\mu }^{+}\,W^{-\mu }+(A_{\mu }\,\sin \theta _{\mathrm {W} }-Z_{\mu }\,\cos \theta _{\mathrm {W} })^{2}\,{\Bigr ]}^{2}\\&-{\Bigl [}\,W_{\mu }^{+}\,W_{\nu }^{-}+W_{\nu }^{+}\,W_{\mu }^{-}+\left(A_{\mu }\,\sin \theta _{\mathrm {W} }-Z_{\mu }\,\cos \theta _{\mathrm {W} }\right)\left(A_{\nu }\,\sin \theta _{\mathrm {W} }-Z_{\nu }\,\cos \theta _{\mathrm {W} }\right)\,{\Bigr ]}^{2}\,{\Biggr \}}~.\end{aligned}}}

{\displaystyle {\mathcal {L}}_{\mathrm {Y} }} contains the Yukawa interactions between the fermions and the Higgs field,

{\displaystyle {\mathcal {L}}_{\mathrm {Y} }=-\sum _{f}{\frac {g\,m_{f}}{2\,m_{\mathrm {W} }}}\;{\overline {f}}\,f\,H~.}

== See also ==

Electroweak star
Fundamental forces
History of quantum field theory
Standard Model (mathematical formulation)
Unitarity gauge
Weinberg angle
Yang–Mills theory

== Notes ==

== References ==

== Further reading ==

=== General readers ===

B. A. Schumm (2004). Deep Down Things: The Breathtaking Beauty of Particle Physics. Johns Hopkins University Press. ISBN 0-8018-7971-X. Conveys much of the Standard Model with no formal mathematics. Very thorough on the weak interaction.
=== Texts === D. J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 0-471-60386-4. W. Greiner; B. Müller (2000). Gauge Theory of Weak Interactions. Springer. ISBN 3-540-67672-4. E. A. Paschos (2023). Electroweak Theory. Cambridge University Press. ISBN 9781009402378. === Articles === E. S. Abers; B. W. Lee (1973). "Gauge theories". Physics Reports. 9 (1): 1–141. Bibcode:1973PhR.....9....1A. doi:10.1016/0370-1573(73)90027-6. Y. Hayato; et al. (1999). "Search for Proton Decay through p → νK+ in a Large Water Cherenkov Detector". Physical Review Letters. 83 (8): 1529–1533. arXiv:hep-ex/9904020. Bibcode:1999PhRvL..83.1529H. doi:10.1103/PhysRevLett.83.1529. S2CID 118326409. J. Hucks (1991). "Global structure of the standard model, anomalies, and charge quantization". Physical Review D. 43 (8): 2709–2717. Bibcode:1991PhRvD..43.2709H. doi:10.1103/PhysRevD.43.2709. PMID 10013661. S. F. Novaes (2000). "Standard Model: An Introduction". arXiv:hep-ph/0001283. D. P. Roy (1999). "Basic Constituents of Matter and their Interactions – A Progress Report". arXiv:hep-ph/9912523.
Wikipedia/Electroweak_force
The history of science covers the development of science from ancient times to the present. It encompasses all three major branches of science: natural, social, and formal. Protoscience, early sciences, and natural philosophies such as alchemy and astrology that existed during the Bronze Age, Iron Age, classical antiquity and the Middle Ages, declined during the early modern period after the establishment of formal disciplines of science in the Age of Enlightenment. The earliest roots of scientific thinking and practice can be traced to Ancient Egypt and Mesopotamia during the 3rd and 2nd millennia BCE. These civilizations' contributions to mathematics, astronomy, and medicine influenced later Greek natural philosophy of classical antiquity, wherein formal attempts were made to provide explanations of events in the physical world based on natural causes. After the fall of the Western Roman Empire, knowledge of Greek conceptions of the world deteriorated in Latin-speaking Western Europe during the early centuries (400 to 1000 CE) of the Middle Ages, but continued to thrive in the Greek-speaking Byzantine Empire. Aided by translations of Greek texts, the Hellenistic worldview was preserved and absorbed into the Arabic-speaking Muslim world during the Islamic Golden Age. The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th century revived the learning of natural philosophy in the West. Traditions of early science were also developed in ancient India and separately in ancient China, the Chinese model having influenced Vietnam, Korea and Japan before Western exploration. Among the Pre-Columbian peoples of Mesoamerica, the Zapotec civilization established their first known traditions of astronomy and mathematics for producing calendars, followed by other civilizations such as the Maya. Natural philosophy was transformed by the Scientific Revolution that transpired during the 16th and 17th centuries in Europe, as new ideas and discoveries departed from previous Greek conceptions and traditions. The New Science that emerged was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method. More "revolutions" in subsequent centuries soon followed. The chemical revolution of the 18th century, for instance, introduced new quantitative methods and measurements for chemistry. In the 19th century, new perspectives regarding the conservation of energy, age of Earth, and evolution came into focus. And in the 20th century, new discoveries in genetics and physics laid the foundations for new sub disciplines such as molecular biology and particle physics. Moreover, industrial and military concerns as well as the increasing complexity of new research endeavors ushered in the era of "big science," particularly after World War II. == Approaches to history of science == The nature of the history of science is a topic of debate (as is, by implication, the definition of science itself). The history of science is often seen as a linear story of progress, but historians have come to see the story as more complex. Alfred Edward Taylor has characterised lean periods in the advance of scientific discovery as "periodical bankruptcies of science". Science is a human activity, and scientific contributions have come from people from a wide range of different backgrounds and cultures. 
Historians of science increasingly see their field as part of a global history of exchange, conflict and collaboration. The relationship between science and religion has been variously characterized in terms of "conflict", "harmony", "complexity", and "mutual independence", among others. Events in Europe such as the Galileo affair of the early 17th century – associated with the scientific revolution and the Age of Enlightenment – led scholars such as John William Draper to postulate (c. 1874) a conflict thesis, suggesting that religion and science have been in conflict methodologically, factually and politically throughout history. The "conflict thesis" has since lost favor among the majority of contemporary scientists and historians of science. However, some contemporary philosophers and scientists, such as Richard Dawkins, still subscribe to this thesis. Historians have emphasized that trust is necessary for agreement on claims about nature. In this light, the 1660 establishment of the Royal Society and its code of experiment – trustworthy because witnessed by its members – has become an important chapter in the historiography of science. Many people in modern history (typically women and persons of color) were excluded from elite scientific communities and characterized by the science establishment as inferior. Historians in the 1980s and 1990s described the structural barriers to participation and began to recover the contributions of overlooked individuals. Historians have also investigated the mundane practices of science such as fieldwork and specimen collection, correspondence, drawing, record-keeping, and the use of laboratory and field equipment. == Prehistory == In prehistoric times, knowledge and technique were passed from generation to generation in an oral tradition. For instance, the domestication of maize for agriculture has been dated to about 9,000 years ago in southern Mexico, before the development of writing systems. Similarly, archaeological evidence indicates the development of astronomical knowledge in preliterate societies. The oral tradition of preliterate societies had several features, the first of which was its fluidity. New information was constantly absorbed and adjusted to new circumstances or community needs. There were no archives or reports. This fluidity was closely related to the practical need to explain and justify a present state of affairs. Another feature was the tendency to describe the universe as just sky and earth, with a potential underworld. They were also prone to identify causes with beginnings, thereby providing a historical origin with an explanation. There was also a reliance on a "medicine man" or "wise woman" for healing, knowledge of divine or demonic causes of diseases, and in more extreme cases, for rituals such as exorcism, divination, songs, and incantations. Finally, there was an inclination to unquestioningly accept explanations that might be deemed implausible in more modern times while at the same time not being aware that such credulous behaviors could have posed problems. The development of writing enabled humans to store and communicate knowledge across generations with much greater accuracy. Its invention was a prerequisite for the development of philosophy and later science in ancient times. Moreover, the extent to which philosophy and science would flourish in ancient times depended on the efficiency of a writing system (e.g., use of alphabets). 
== Ancient Near East == The earliest roots of science can be traced to the Ancient Near East c. 3000–1200 BCE – in particular to Ancient Egypt and Mesopotamia. === Ancient Egypt === ==== Number system and geometry ==== Starting c. 3000 BCE, the ancient Egyptians developed a numbering system that was decimal in character and had oriented their knowledge of geometry to solving practical problems such as those of surveyors and builders. Their development of geometry was itself a necessary development of surveying to preserve the layout and ownership of farmland, which was flooded annually by the Nile. The 3-4-5 right triangle and other rules of geometry were used to build rectilinear structures, and the post and lintel architecture of Egypt. ==== Disease and healing ==== Egypt was also a center of alchemy research for much of the Mediterranean. According to the medical papyri (written c. 2500–1200 BCE), the ancient Egyptians believed that disease was mainly caused by the invasion of bodies by evil forces or spirits. Thus, in addition to medicine, therapies included prayer, incantation, and ritual. The Ebers Papyrus, written c. 1600 BCE, contains medical recipes for treating diseases related to the eyes, mouth, skin, internal organs, and extremities, as well as abscesses, wounds, burns, ulcers, swollen glands, tumors, headaches, and bad breath. The Edwin Smith Papyrus, written at about the same time, contains a surgical manual for treating wounds, fractures, and dislocations. The Egyptians believed that the effectiveness of their medicines depended on the preparation and administration under appropriate rituals. Medical historians believe that ancient Egyptian pharmacology, for example, was largely ineffective. Both the Ebers and Edwin Smith papyri applied the following components to the treatment of disease: examination, diagnosis, treatment, and prognosis, which display strong parallels to the basic empirical method of science and, according to G. E. R. Lloyd, played a significant role in the development of this methodology. ==== Calendar ==== The ancient Egyptians even developed an official calendar that contained twelve months, thirty days each, and five days at the end of the year. Unlike the Babylonian calendar or the ones used in Greek city-states at the time, the official Egyptian calendar was much simpler as it was fixed and did not take lunar and solar cycles into consideration. === Mesopotamia === The ancient Mesopotamians had extensive knowledge about the chemical properties of clay, sand, metal ore, bitumen, stone, and other natural materials, and applied this knowledge to practical use in manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. Metallurgy required knowledge about the properties of metals. Nonetheless, the Mesopotamians seem to have had little interest in gathering information about the natural world for the mere sake of gathering information and were far more interested in studying the manner in which the gods had ordered the universe. Biology of non-human organisms was generally only written about in the context of mainstream academic disciplines. Animal physiology was studied extensively for the purpose of divination; the anatomy of the liver, which was seen as an important organ in haruspicy, was studied in particularly intensive detail. Animal behavior was also studied for divinatory purposes. 
Most information about the training and domestication of animals was probably transmitted orally without being written down, but one text dealing with the training of horses has survived. ==== Mesopotamian medicine ==== The ancient Mesopotamians had no distinction between "rational science" and magic. When a person became ill, doctors prescribed magical formulas to be recited as well as medicinal treatments. The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur (c. 2112 BCE – c. 2004 BCE). The most extensive Babylonian medical text, however, is the Diagnostic Handbook written by the ummânū, or chief scholar, Esagil-kin-apli of Borsippa, during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE). In East Semitic cultures, the main medicinal authority was a kind of exorcist-healer known as an āšipu. The profession was generally passed down from father to son and was held in extremely high regard. Of less frequent recourse was another kind of healer known as an asu, who corresponds more closely to a modern physician and treated physical symptoms using primarily folk remedies composed of various herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease. ==== Astronomy and celestial divination ==== In Babylonian astronomy, records of the motions of the stars, planets, and the moon are left on thousands of clay tablets created by scribes. Even today, astronomical periods identified by Mesopotamian proto-scientists are still widely used in Western calendars such as the solar year and the lunar month. Using this data, they developed mathematical methods to compute the changing length of daylight in the course of the year, predict the appearances and disappearances of the Moon and planets, and eclipses of the Sun and Moon. Only a few astronomers' names are known, such as that of Kidinnu, a Chaldean astronomer and mathematician. Kiddinu's value for the solar year is in use for today's calendars. Babylonian astronomy was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian A. Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in Islam, and in the West—if not indeed all subsequent endeavour in the exact sciences—depend upon Babylonian astronomy in decisive and fundamental ways." To the Babylonians and other Near Eastern cultures, messages from the gods or omens were concealed in all natural phenomena that could be deciphered and interpreted by those who are adept. Hence, it was believed that the gods could speak through all terrestrial objects (e.g., animal entrails, dreams, malformed births, or even the color of a dog urinating on a person) and celestial phenomena. Moreover, Babylonian astrology was inseparable from Babylonian astronomy. ==== Mathematics ==== The Mesopotamian cuneiform tablet Plimpton 322, dating to the 18th century BCE, records a number of Pythagorean triplets (3, 4, 5) and (5, 12, 13) ..., hinting that the ancient Mesopotamians might have been aware of the Pythagorean theorem over a millennium before Pythagoras. 
== Ancient and medieval South Asia and East Asia ==

Mathematical achievements from Mesopotamia had some influence on the development of mathematics in India, and there were confirmed transmissions of mathematical ideas between India and China, which were bidirectional. Nevertheless, the mathematical and scientific achievements in India and particularly in China occurred largely independently from those of Europe, and the confirmed early influences that these two civilizations had on the development of science in Europe in the pre-modern era were indirect, with Mesopotamia and later the Islamic World acting as intermediaries. The arrival of modern science, which grew out of the Scientific Revolution, in India and China and the greater Asian region in general can be traced to the scientific activities of Jesuit missionaries who were interested in studying the region's flora and fauna during the 16th to 17th century.

=== India ===

==== Mathematics ====

The earliest traces of mathematical knowledge in the Indian subcontinent appear with the Indus Valley Civilisation (c. 3300 – c. 1300 BCE). The people of this civilization made bricks whose dimensions were in the proportion 4:2:1, which is favorable for the stability of a brick structure. They also tried to standardize measurement of length to a high degree of accuracy. They designed a ruler—the Mohenjo-daro ruler—whose length of approximately 1.32 in (34 mm) was divided into ten equal parts. Bricks manufactured in ancient Mohenjo-daro often had dimensions that were integral multiples of this unit of length. The Bakhshali manuscript contains problems involving arithmetic, algebra and geometry, including mensuration. The topics covered include fractions, square roots, arithmetic and geometric progressions, solutions of simple equations, simultaneous linear equations, quadratic equations and indeterminate equations of the second degree. In the 3rd century BCE, Pingala presents the Pingala-sutras, the earliest known treatise on Sanskrit prosody. He also presents a numerical system by adding one to the sum of place values. Pingala's work also includes material related to the Fibonacci numbers, called mātrāmeru. Indian astronomer and mathematician Aryabhata (476–550), in his Aryabhatiya (499), introduced the sine function in trigonometry and the number 0. In 628, Brahmagupta suggested that gravity was a force of attraction. He also lucidly explained the use of zero as both a placeholder and a decimal digit, along with the Hindu–Arabic numeral system now used universally throughout the world. Arabic translations of the two astronomers' texts were soon available in the Islamic world, introducing what would become Arabic numerals to the Islamic world by the 9th century. Narayana Pandita (1340–1400) was an Indian mathematician. Plofker writes that his texts were the most significant Sanskrit mathematics treatises after those of Bhaskara II, other than the Kerala school. He wrote the Ganita Kaumudi (lit. "Moonlight of mathematics") in 1356 about mathematical operations. The work anticipated many developments in combinatorics. Between the 14th and 16th centuries, the Kerala school of astronomy and mathematics made significant advances in astronomy and especially mathematics, including fields such as trigonometry and analysis. In particular, Madhava of Sangamagrama led advances in analysis by providing infinite series and Taylor series expansions of some trigonometric functions and approximations of pi.
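One concrete instance of those infinite series is the arctangent-based series for pi now commonly called the Madhava–Leibniz series, π/4 = 1 − 1/3 + 1/5 − 1/7 + …. The short Python sketch below is purely illustrative (the function name is ours) and shows how slowly the partial sums converge.

```python
# Partial sums of the Madhava-Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
def pi_partial_sum(n_terms: int) -> float:
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1_000, 100_000):
    print(n, pi_partial_sum(n))   # approaches 3.14159... only slowly as n grows
```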
Parameshvara (1380–1460) presents a case of the Mean Value theorem in his commentaries on Govindasvāmi and Bhāskara II. The Yuktibhāṣā was written by Jyeshtadeva in 1530.

==== Astronomy ====

The first textual mention of astronomical concepts comes from the Vedas, religious literature of India. According to Sarma (2008): "One finds in the Rigveda intelligent speculations about the genesis of the universe from nonexistence, the configuration of the universe, the spherical self-supporting earth, and the year of 360 days divided into 12 equal parts of 30 days each with a periodical intercalary month." The first 12 chapters of the Siddhanta Shiromani, written by Bhāskara in the 12th century, cover topics such as: mean longitudes of the planets; true longitudes of the planets; the three problems of diurnal rotation; syzygies; lunar eclipses; solar eclipses; latitudes of the planets; risings and settings; the moon's crescent; conjunctions of the planets with each other; conjunctions of the planets with the fixed stars; and the patas of the sun and moon. The 13 chapters of the second part cover the nature of the sphere, as well as significant astronomical and trigonometric calculations based on it. In the Tantrasangraha treatise, Nilakantha Somayaji updated the Aryabhatan model for the interior planets, Mercury and Venus, and the equation that he specified for the center of these planets was more accurate than the ones in European or Islamic astronomy until the time of Johannes Kepler in the 17th century. Jai Singh II of Jaipur constructed five observatories called Jantar Mantars in total, in New Delhi, Jaipur, Ujjain, Mathura and Varanasi; they were completed between 1724 and 1735.

==== Grammar ====

Some of the earliest linguistic activities can be found in Iron Age India (1st millennium BCE) with the analysis of Sanskrit for the purpose of the correct recitation and interpretation of Vedic texts. The most notable grammarian of Sanskrit was Pāṇini (c. 520–460 BCE), whose grammar formulates close to 4,000 rules for Sanskrit. Inherent in his analytic approach are the concepts of the phoneme, the morpheme and the root. The Tolkāppiyam text, composed in the early centuries of the common era, is a comprehensive text on Tamil grammar, which includes sutras on orthography, phonology, etymology, morphology, semantics, prosody, sentence structure and the significance of context in language.

==== Medicine ====

Findings from Neolithic graveyards in what is now Pakistan show evidence of proto-dentistry among an early farming culture. The ancient text Suśrutasamhitā of Suśruta describes procedures on various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and other surgical procedures. The Charaka Samhita of Charaka describes ancient theories on the human body, etiology, symptomology and therapeutics for a wide range of diseases. It also includes sections on the importance of diet, hygiene, prevention, medical education, and the teamwork of a physician, nurse and patient necessary for recovery to health.

==== Politics and state ====

The Arthaśāstra, an ancient Indian treatise on statecraft, economic policy and military strategy, is attributed to Kautilya and Viṣhṇugupta, who are traditionally identified with Chāṇakya (c. 350–283 BCE). In this treatise, the behaviors and relationships of the people, the King, the State, the Government Superintendents, Courtiers, Enemies, Invaders, and Corporations are analyzed and documented.
Roger Boesche describes the Arthaśāstra as "a book of political realism, a book analyzing how the political world does work and not very often stating how it ought to work, a book that frequently discloses to a king what calculating and sometimes brutal measures he must carry out to preserve the state and the common good."

==== Logic ====

The development of Indian logic dates back to the Chandahsutra of Pingala and anviksiki of Medhatithi Gautama (c. 6th century BCE); the Sanskrit grammar rules of Pāṇini (c. 5th century BCE); the Vaisheshika school's analysis of atomism (c. 6th century BCE to 2nd century BCE); the analysis of inference by Gotama (c. 6th century BCE to 2nd century CE), founder of the Nyaya school of Hindu philosophy; and the tetralemma of Nagarjuna (c. 2nd century CE). Indian logic stands as one of the three original traditions of logic, alongside the Greek and the Chinese logic. The Indian tradition continued to develop through early to modern times, in the form of the Navya-Nyāya school of logic. In the 2nd century, the Buddhist philosopher Nagarjuna refined the Catuskoti form of logic. The Catuskoti is also often glossed Tetralemma (Greek), which is the name for a largely comparable, but not equatable, 'four corner argument' within the tradition of Classical logic. Navya-Nyāya developed a sophisticated language and conceptual scheme that allowed it to raise, analyse, and solve problems in logic and epistemology. It systematised all the Nyāya concepts into four main categories: sense or perception (pratyakşa), inference (anumāna), comparison or similarity (upamāna), and testimony (sound or word; śabda).

=== China ===

==== Chinese mathematics ====

From the earliest times, the Chinese used a positional decimal system on counting boards in order to calculate. To express 10, a single rod is placed in the second box from the right. The spoken language uses a similar system to English: e.g. four thousand two hundred and seven. No symbol was used for zero. By the 1st century BCE, negative numbers and decimal fractions were in use, and The Nine Chapters on the Mathematical Art included methods for extracting higher-order roots by Horner's method, for solving systems of linear equations, and applications of Pythagoras' theorem. Cubic equations were solved in the Tang dynasty, and solutions of equations of order higher than 3 appeared in print in 1245 CE by Ch'in Chiu-shao. Pascal's triangle for binomial coefficients was described around 1100 by Jia Xian. Although the first attempts at an axiomatization of geometry appear in the Mohist canon in 330 BCE, Liu Hui developed algebraic methods in geometry in the 3rd century CE and also calculated pi to 5 significant figures. In 480, Zu Chongzhi improved this by discovering the ratio {\displaystyle {\tfrac {355}{113}}}, which remained the most accurate value for 1200 years.

==== Astronomical observations ====

Astronomical observations from China constitute the longest continuous sequence from any civilization and include records of sunspots (112 records from 364 BCE), supernovas (1054), and lunar and solar eclipses. By the 12th century, they could make reasonably accurate predictions of eclipses, but the knowledge of this was lost during the Ming dynasty, so that the Jesuit Matteo Ricci gained much favor in 1601 by his predictions. By 635 Chinese astronomers had observed that the tails of comets always point away from the sun.
From antiquity, the Chinese used an equatorial system for describing the skies and a star map from 940 was drawn using a cylindrical (Mercator) projection. The use of an armillary sphere is recorded from the 4th century BCE and a sphere permanently mounted in equatorial axis from 52 BCE. In 125 CE Zhang Heng used water power to rotate the sphere in real time. This included rings for the meridian and ecliptic. By 1270 they had incorporated the principles of the Arab torquetum. In the Song Empire (960–1279) of Imperial China, Chinese scholar-officials unearthed, studied, and cataloged ancient artifacts. ==== Inventions ==== To better prepare for calamities, Zhang Heng invented a seismometer in 132 CE which provided instant alert to authorities in the capital Luoyang that an earthquake had occurred in a location indicated by a specific cardinal or ordinal direction. Although no tremors could be felt in the capital when Zhang told the court that an earthquake had just occurred in the northwest, a message came soon afterwards that an earthquake had indeed struck 400 to 500 km (250 to 310 mi) northwest of Luoyang (in what is now modern Gansu). Zhang called his device the 'instrument for measuring the seasonal winds and the movements of the Earth' (Houfeng didong yi 候风地动仪), so-named because he and others thought that earthquakes were most likely caused by the enormous compression of trapped air. There are many notable contributors to early Chinese disciplines, inventions, and practices throughout the ages. One of the best examples would be the medieval Song Chinese Shen Kuo (1031–1095), a polymath and statesman who was the first to describe the magnetic-needle compass used for navigation, discovered the concept of true north, improved the design of the astronomical gnomon, armillary sphere, sight tube, and clepsydra, and described the use of drydocks to repair boats. After observing the natural process of the inundation of silt and the find of marine fossils in the Taihang Mountains (hundreds of miles from the Pacific Ocean), Shen Kuo devised a theory of land formation, or geomorphology. He also adopted a theory of gradual climate change in regions over time, after observing petrified bamboo found underground at Yan'an, Shaanxi. If not for Shen Kuo's writing, the architectural works of Yu Hao would be little known, along with the inventor of movable type printing, Bi Sheng (990–1051). Shen's contemporary Su Song (1020–1101) was also a brilliant polymath, an astronomer who created a celestial atlas of star maps, wrote a treatise related to botany, zoology, mineralogy, and metallurgy, and had erected a large astronomical clocktower in Kaifeng city in 1088. To operate the crowning armillary sphere, his clocktower featured an escapement mechanism and the world's oldest known use of an endless power-transmitting chain drive. The Jesuit China missions of the 16th and 17th centuries "learned to appreciate the scientific achievements of this ancient culture and made them known in Europe. Through their correspondence European scientists first learned about the Chinese science and culture." Western academic thought on the history of Chinese technology and science was galvanized by the work of Joseph Needham and the Needham Research Institute. 
Among the technological accomplishments of China were, according to the British scholar Needham, the water-powered celestial globe (Zhang Heng), dry docks, sliding calipers, the double-action piston pump, the blast furnace, the multi-tube seed drill, the wheelbarrow, the suspension bridge, the winnowing machine, gunpowder, the raised-relief map, toilet paper, the efficient harness, along with contributions in logic, astronomy, medicine, and other fields. However, cultural factors prevented these Chinese achievements from developing into "modern science". According to Needham, it may have been the religious and philosophical framework of Chinese intellectuals which made them unable to accept the ideas of laws of nature: It was not that there was no order in nature for the Chinese, but rather that it was not an order ordained by a rational personal being, and hence there was no conviction that rational personal beings would be able to spell out in their lesser earthly languages the divine code of laws which he had decreed aforetime. The Taoists, indeed, would have scorned such an idea as being too naïve for the subtlety and complexity of the universe as they intuited it. == Pre-Columbian Mesoamerica == During the Middle Formative Period (c. 900 BCE – c. 300 BCE) of Pre-Columbian Mesoamerica, the Zapotec civilization, heavily influenced by the Olmec civilization, established the first known full writing system of the region (possibly predated by the Olmec Cascajal Block), as well as the first known astronomical calendar in Mesoamerica. Following a period of initial urban development in the Preclassical period, the Classic Maya civilization (c. 250 CE – c. 900 CE) built on the shared heritage of the Olmecs by developing the most sophisticated systems of writing, astronomy, calendrical science, and mathematics among Mesoamerican peoples. The Maya developed a positional numeral system with a base of 20 that included the use of zero for constructing their calendars. Maya writing, which was developed by 200 BCE, widespread by 100 BCE, and rooted in Olmec and Zapotec scripts, contains easily discernible calendar dates in the form of logographs representing numbers, coefficients, and calendar periods amounting to 20 days and even 20 years for tracking social, religious, political, and economic events in 360-day years. == Classical antiquity and Greco-Roman science == The contributions of the Ancient Egyptians and Mesopotamians in the areas of astronomy, mathematics, and medicine had entered and shaped Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes. Inquiries were also aimed at such practical goals such as establishing a reliable calendar or determining how to cure a variety of illnesses. The ancient people who were considered the first scientists may have thought of themselves as natural philosophers, as practitioners of a skilled profession (for example, physicians), or as followers of a religious tradition (for example, temple healers). === Pre-socratics === The earliest Greek philosophers, known as the pre-Socratics, provided competing answers to the question found in the myths of their neighbors: "How did the ordered cosmos in which we live come to be?" The pre-Socratic philosopher Thales (640–546 BCE) of Miletus, identified by later authors such as Aristotle as the first of the Ionian philosophers, postulated non-supernatural explanations for natural phenomena. 
For example, he proposed that land floats on water and that earthquakes are caused by the agitation of the water upon which the land floats, rather than by the god Poseidon. Thales' student Pythagoras of Samos founded the Pythagorean school, which investigated mathematics for its own sake, and was the first to postulate that the Earth is spherical in shape. Leucippus (5th century BCE) introduced atomism, the theory that all matter is made of indivisible, imperishable units called atoms. This was greatly expanded on by his pupil Democritus and later Epicurus.

=== Natural philosophy ===

Plato and Aristotle produced the first systematic discussions of natural philosophy, which did much to shape later investigations of nature. Their development of deductive reasoning was of particular importance and usefulness to later scientific inquiry. Plato founded the Platonic Academy in 387 BCE, whose motto was "Let none unversed in geometry enter here," and also turned out many notable philosophers. Plato's student Aristotle introduced empiricism and the notion that universal truths can be arrived at via observation and induction, thereby laying the foundations of the scientific method. Aristotle also produced many biological writings that were empirical in nature, focusing on biological causation and the diversity of life. He made countless observations of nature, especially the habits and attributes of plants and animals on Lesbos, classified more than 540 animal species, and dissected at least 50. Aristotle's writings profoundly influenced subsequent Islamic and European scholarship, though they were eventually superseded in the Scientific Revolution. Aristotle also contributed to theories of the elements and the cosmos. He believed that the celestial bodies (such as the planets and the Sun) had something called an unmoved mover that put the celestial bodies in motion. Aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as God. Aristotle did not have the technological advancements that would have explained the motion of celestial bodies. In addition, Aristotle had many views on the elements. He believed that everything was derived from the elements earth, water, air, fire, and lastly the Aether. The Aether was a celestial element, and therefore made up the matter of the celestial bodies. The elements of earth, water, air and fire were derived from a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. The motion of these elements begins with earth being the closest to "the Earth," then water, air, fire, and finally Aether. In addition to the makeup of all things, Aristotle came up with theories as to why things did not return to their natural motion. He understood that water sits above earth, air above water, and fire above air in their natural state. He explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements, which prevents the elements that make them up from returning to their natural state.
The important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. In the Hellenistic age scholars frequently employed the principles developed in earlier Greek thought: the application of mathematics and deliberate empirical research, in their scientific investigations. Thus, clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day. Neither reason nor inquiry began with the Ancient Greeks, but the Socratic method did, along with the idea of Forms, give great advances in geometry, logic, and the natural sciences. According to Benjamin Farrington, former professor of Classics at Swansea University: "Men were weighing for thousands of years before Archimedes worked out the laws of equilibrium; they must have had practical and intuitional knowledge of the principals involved. What Archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system." and again: "With astonishment we find ourselves on the threshold of modern science. Nor should it be supposed that by some trick of translation the extracts have been given an air of modernity. Far from it. The vocabulary of these writings and their style are the source from which our own vocabulary and style have been derived." === Greek astronomy === The astronomer Aristarchus of Samos was the first known person to propose a heliocentric model of the Solar System, while the geographer Eratosthenes accurately calculated the circumference of the Earth. Hipparchus (c. 190 – c. 120 BCE) produced the first systematic star catalog. The level of achievement in Hellenistic astronomy and engineering is impressively shown by the Antikythera mechanism (150–100 BCE), an analog computer for calculating the position of planets. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe. === Hellenistic medicine === There was not a defined societal structure for healthcare during the age of Hippocrates. At that time, society was not organized and knowledgeable as people still relied on pure religious reasoning to explain illnesses. Hippocrates introduced the first healthcare system based on science and clinical protocols. Hippocrates' theories about physics and medicine helped pave the way in creating an organized medical structure for society. In medicine, Hippocrates (c. 460–370 BCE) and his followers were the first to describe many diseases and medical conditions and developed the Hippocratic Oath for physicians, still relevant and in use today. Hippocrates' ideas are expressed in The Hippocratic Corpus. The collection notes descriptions of medical philosophies and how disease and lifestyle choices reflect on the physical body. Hippocrates influenced a Westernized, professional relationship among physician and patient. Hippocrates is also known as "the Father of Medicine". 
Herophilos (335–280 BCE) was the first to base his conclusions on dissection of the human body and to describe the nervous system. Galen (129 – c. 200 CE) performed many audacious operations—including brain and eye surgeries— that were not tried again for almost two millennia. === Greek mathematics === In Hellenistic Egypt, the mathematician Euclid laid down the foundations of mathematical rigor and introduced the concepts of definition, axiom, theorem and proof still in use today in his Elements, considered the most influential textbook ever written. Archimedes, considered one of the greatest mathematicians of all time, is credited with using the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. He is also known in physics for laying the foundations of hydrostatics, statics, and the explanation of the principle of the lever. === Other developments === Theophrastus wrote some of the earliest descriptions of plants and animals, establishing the first taxonomy and looking at minerals in terms of their properties, such as hardness. Pliny the Elder produced one of the largest encyclopedias of the natural world in 77 CE, and was a successor to Theophrastus. For example, he accurately describes the octahedral shape of the diamond and noted that diamond dust is used by engravers to cut and polish other gems owing to its great hardness. His recognition of the importance of crystal shape is a precursor to modern crystallography, while notes on other minerals presages mineralogy. He recognizes other minerals have characteristic crystal shapes, but in one example, confuses the crystal habit with the work of lapidaries. Pliny was the first to show amber was a resin from pine trees, because of trapped insects within them. The development of archaeology has its roots in history and with those who were interested in the past, such as kings and queens who wanted to show past glories of their respective nations. The 5th-century-BCE Greek historian Herodotus was the first scholar to systematically study the past and perhaps the first to examine artifacts. === Greek scholarship under Roman rule === During the rule of Rome, famous historians such as Polybius, Livy and Plutarch documented the rise of the Roman Republic, and the organization and histories of other nations, while statesmen like Julius Caesar, Cicero, and others provided examples of the politics of the republic and Rome's empire and wars. The study of politics during this age was oriented toward understanding history, understanding methods of governing, and describing the operation of governments. The Roman conquest of Greece did not diminish learning and culture in the Greek provinces. On the contrary, the appreciation of Greek achievements in literature, philosophy, politics, and the arts by Rome's upper class coincided with the increased prosperity of the Roman Empire. Greek settlements had existed in Italy for centuries and the ability to read and speak Greek was not uncommon in Italian cities such as Rome. Moreover, the settlement of Greek scholars in Rome, whether voluntarily or as slaves, gave Romans access to teachers of Greek literature and philosophy. Conversely, young Roman scholars also studied abroad in Greece and upon their return to Rome, were able to convey Greek achievements to their Latin leadership. 
Despite the translation of a few Greek texts into Latin, Roman scholars who aspired to the highest level of learning did so in the Greek language. The Roman statesman and philosopher Cicero (106 – 43 BCE) was a prime example. He had studied under Greek teachers in Rome and then in Athens and Rhodes. He mastered considerable portions of Greek philosophy, wrote Latin treatises on several topics, and even wrote Greek commentaries on Plato's Timaeus as well as a Latin translation of it, which has not survived. In the beginning, scholarship in Greek knowledge was almost entirely funded by the Roman upper class. There were all sorts of arrangements, ranging from a talented scholar being attached to a wealthy household to owning educated Greek-speaking slaves. In exchange, scholars who succeeded at the highest level had an obligation to provide advice or intellectual companionship to their Roman benefactors, or even to take care of their libraries. The less fortunate or accomplished ones would teach their children or perform menial tasks. The level of detail and sophistication of Greek knowledge was adjusted to suit the interests of their Roman patrons. That meant popularizing Greek knowledge by presenting information that was of practical value, such as medicine or logic (for courts and politics), but excluding subtle details of Greek metaphysics and epistemology. Beyond the basics, the Romans did not value natural philosophy and considered it an amusement for leisure time. Commentaries and encyclopedias were the means by which Greek knowledge was popularized for Roman audiences. The Greek scholar Posidonius (c. 135 – c. 51 BCE), a native of Syria, wrote prolifically on history, geography, moral philosophy, and natural philosophy. He greatly influenced Latin writers such as Marcus Terentius Varro (116–27 BCE), who wrote the encyclopedia Nine Books of Disciplines, which covered nine arts: grammar, rhetoric, logic, arithmetic, geometry, astronomy, musical theory, medicine, and architecture. The Disciplines became a model for subsequent Roman encyclopedias, and Varro's nine liberal arts were considered suitable education for a Roman gentleman. The first seven of Varro's nine arts would later define the seven liberal arts of medieval schools. The pinnacle of the popularization movement was the Roman scholar Pliny the Elder (23/24–79 CE), a native of northern Italy, who wrote several books on the history of Rome and grammar. His most famous work was his voluminous Natural History. After the death of the Roman Emperor Marcus Aurelius in 180 CE, the favorable conditions for scholarship and learning in the Roman Empire were upended by political unrest, civil war, urban decay, and looming economic crisis. In around 250 CE, barbarians began attacking and invading the Roman frontiers. These combined events led to a general decline in political and economic conditions. The living standards of the Roman upper class were severely impacted, and their loss of leisure diminished scholarly pursuits. Moreover, during the 3rd and 4th centuries CE, the Roman Empire was administratively divided into two halves: Greek East and Latin West. These administrative divisions weakened the intellectual contact between the two regions. Eventually, both halves went their separate ways, with the Greek East becoming the Byzantine Empire. Christianity was also steadily expanding during this time and soon became a major patron of education in the Latin West.
Initially, the Christian church adopted some of the reasoning tools of Greek philosophy in the 2nd and 3rd centuries CE to defend its faith against sophisticated opponents. Nevertheless, Greek philosophy received a mixed reception from leaders and adherents of the Christian faith. Some, such as Tertullian (c. 155 – c. 230 CE), were vehemently opposed to philosophy, denouncing it as heretical. Others, such as Augustine of Hippo (354–430 CE), were more ambivalent, defending Greek philosophy and science as the best ways to understand the natural world and therefore treating them as a handmaiden (or servant) of religion. Education in the West began its gradual decline, along with the rest of the Western Roman Empire, due to invasions by Germanic tribes, civil unrest, and economic collapse. Contact with the classical tradition was lost in specific regions such as Roman Britain and northern Gaul but continued to exist in Rome, northern Italy, southern Gaul, Spain, and North Africa. == Middle Ages == In the Middle Ages, classical learning continued in three major linguistic cultures and civilizations: Greek (the Byzantine Empire), Arabic (the Islamic world), and Latin (Western Europe). === Byzantine Empire === ==== Preservation of Greek heritage ==== The fall of the Western Roman Empire led to a deterioration of the classical tradition in the western part (or Latin West) of Europe during the 5th century. In contrast, the Byzantine Empire resisted the barbarian attacks and preserved and improved upon this learning. While the Byzantine Empire still held learning centers such as Constantinople, Alexandria and Antioch, Western Europe's knowledge was concentrated in monasteries until the development of medieval universities in the 12th century. The curriculum of monastic schools included the study of the few available ancient texts and of new works on practical subjects like medicine and timekeeping. In the sixth century in the Byzantine Empire, Isidore of Miletus compiled Archimedes' mathematical works in the Archimedes Palimpsest, where all of Archimedes' mathematical contributions were collected and studied. John Philoponus, another Byzantine scholar, was the first to question Aristotle's teaching of physics, introducing the theory of impetus. The theory of impetus was an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics. The works of John Philoponus inspired Galileo Galilei ten centuries later. ==== Collapse ==== During the Fall of Constantinople in 1453, a number of Greek scholars fled to northern Italy, where they helped fuel the era later commonly known as the "Renaissance", bringing with them a great deal of classical learning, including an understanding of botany, medicine, and zoology. Byzantium also gave the West important inputs: John Philoponus' criticism of Aristotelian physics, and the works of Dioscorides. === Islamic world === This was the period (8th–14th century CE) of the Islamic Golden Age, when commerce thrived and new ideas and technologies emerged, such as the importation of papermaking from China, which made the copying of manuscripts inexpensive.
==== Translations and Hellenization ==== The eastward transmission of Greek heritage to Western Asia was a slow and gradual process that spanned over a thousand years, from the Asian conquests of Alexander the Great in 335 BCE to the founding of Islam in the 7th century CE. The birth and expansion of Islam during the 7th century was quickly followed by its Hellenization. Knowledge of Greek conceptions of the world was preserved and absorbed into Islamic theology, law, culture, and commerce, a process aided by the translation of traditional Greek texts and some Syriac intermediary sources into Arabic during the 8th–9th centuries. ==== Education and scholarly pursuits ==== Madrasas were centers for many different religious and scientific studies and were the culmination of different institutions such as mosques based around religious studies, housing for out-of-town visitors, and finally educational institutions focused on the natural sciences. Unlike at Western universities, students at a madrasa would learn from one specific teacher, who would issue a certificate, called an Ijazah, at the completion of their studies. An Ijazah differs from a Western university degree in many ways: one is that it is issued by a single person rather than an institution, and another is that it is not a degree declaring adequate knowledge of broad subjects, but rather a license to teach and pass on a very specific set of texts. Women were also allowed to attend madrasas, as both students and teachers, something not seen in higher Western education until the 1800s. Madrasas were more than just academic centers. The Suleymaniye Mosque, for example, which was built by Suleiman the Magnificent in the 16th century, was one of the earliest and most well-known madrasas. The Suleymaniye Mosque was home to a hospital and medical college, a kitchen, and a children's school, and it also served as a temporary home for travelers. Higher education at a madrasa (or college) was focused on Islamic law and religious science, and students had to engage in self-study for everything else. Despite the occasional theological backlash, many Islamic scholars of science were able to conduct their work in relatively tolerant urban centers (e.g., Baghdad and Cairo) and were protected by powerful patrons. They could also travel freely and exchange ideas as there were no political barriers within the unified Islamic state. Islamic science during this time was primarily focused on the correction, extension, articulation, and application of Greek ideas to new problems. ==== Advancements in mathematics ==== Most of the achievements by Islamic scholars during this period were in mathematics. Arabic mathematics was a direct descendant of Greek and Indian mathematics. For instance, what are now known as Arabic numerals originally came from India, but Muslim mathematicians made several key refinements to the number system, such as the introduction of decimal point notation. The mathematician Muhammad ibn Musa al-Khwarizmi (c. 780–850) gave his name to the concept of the algorithm, while the term algebra is derived from al-jabr, the beginning of the title of one of his publications. Islamic trigonometry continued from the works of Ptolemy's Almagest and the Indian Siddhanta, to which scholars added trigonometric functions, drew up tables, and applied trigonometry to spheres and planes. Many of their engineers, instrument makers, and surveyors contributed books in applied mathematics.
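The algebraic procedure that gave al-jabr its name, completing the square for equations of the form "squares and roots equal numbers", can be illustrated with a short sketch. The worked example x² + 10x = 39 is the classic one from al-Khwarizmi's treatise, while the Python rendering, the function name, and the step-by-step comments are modern illustrative choices rather than his own presentation.

```python
# A minimal sketch of al-Khwarizmi's "completing the square" procedure for
# equations of the form x^2 + b*x = c (his "squares and roots equal numbers" case).
# The function name and structure are modern illustrative choices.
import math

def complete_the_square(b: float, c: float) -> float:
    """Solve x^2 + b*x = c for the positive root, step by step."""
    half_b = b / 2           # take half the number of "roots"
    square = half_b ** 2     # square it ...
    total = c + square       # ... and add it to the "numbers"
    root = math.sqrt(total)  # take the square root of the sum
    return root - half_b     # subtract half the roots: the positive solution

print(complete_the_square(10, 39))  # -> 3.0, the classic worked example
```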
It was in astronomy that Islamic mathematicians made their greatest contributions. Al-Battani (c. 858–929) improved the measurements of Hipparchus, which were preserved in Ptolemy's Hè Megalè Syntaxis (The Great Treatise), known in translation as the Almagest. Al-Battani also improved the precision of the measurement of the precession of the Earth's axis. Corrections were made to Ptolemy's geocentric model by al-Battani, Ibn al-Haytham, Averroes and the Maragha astronomers such as Nasir al-Din al-Tusi, Mu'ayyad al-Din al-Urdi and Ibn al-Shatir. Scholars with geometric skills made significant improvements to the earlier classical texts on light and sight by Euclid, Aristotle, and Ptolemy. The earliest surviving Arabic treatises were written in the 9th century by Abū Ishāq al-Kindī, Qustā ibn Lūqā, and (in fragmentary form) Ahmad ibn Isā. Later, in the 11th century, Ibn al-Haytham (known as Alhazen in the West), a mathematician and astronomer, synthesized a new theory of vision based on the works of his predecessors. His new theory included a complete system of geometrical optics, which was set out in great detail in his Book of Optics. His book was translated into Latin and was relied upon as a principal source on the science of optics in Europe until the 17th century. ==== Institutionalization of medicine ==== The medical sciences were prominently cultivated in the Islamic world. The works of Greek medical theorists, especially Galen, were translated into Arabic, and there was an outpouring of medical texts by Islamic physicians aimed at organizing, elaborating, and disseminating classical medical knowledge. Medical specialties started to emerge, such as those involved in the treatment of eye diseases such as cataracts. Ibn Sina (known as Avicenna in the West, c. 980–1037) was a prolific Persian medical encyclopedist who wrote extensively on medicine; his two most notable medical works, the Kitāb al-shifāʾ ("Book of Healing") and The Canon of Medicine, were used as standard medical texts in both the Muslim world and Europe well into the 17th century. Amongst his many contributions are the discovery of the contagious nature of infectious diseases and the introduction of clinical pharmacology. Institutionalization of medicine was another important achievement in the Islamic world. Although hospitals as institutions for the sick emerged in the Byzantine Empire, the model of institutionalized medicine for all social classes was extensive in the Islamic empire, with hospitals scattered throughout its territory. In addition to treating patients, physicians could teach apprentice physicians, as well as write and do research. The discovery of the pulmonary transit of blood in the human body by Ibn al-Nafis occurred in a hospital setting. ==== Decline ==== Islamic science began its decline in the 12th–13th centuries, before the Renaissance in Europe, due in part to the Christian reconquest of Spain and the Mongol conquests in the East in the 11th–13th centuries. The Mongols sacked Baghdad, capital of the Abbasid Caliphate, in 1258, which ended the Abbasid empire. Nevertheless, many of the conquerors became patrons of the sciences. Hulagu Khan, for example, who led the siege of Baghdad, became a patron of the Maragheh observatory. Islamic astronomy continued to flourish into the 16th century.
=== Western Europe === By the eleventh century, most of Europe had become Christian; stronger monarchies emerged; borders were restored; technological developments and agricultural innovations were made, increasing the food supply and population. Classical Greek texts were translated from Arabic and Greek into Latin, stimulating scientific discussion in Western Europe. In classical antiquity, Greek and Roman taboos had meant that dissection was usually banned, but in the Middle Ages medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection. As a result of the Pax Mongolica, Europeans, such as Marco Polo, began to venture further and further east. The written accounts of Polo and his fellow travelers inspired other Western European maritime explorers to search for a direct sea route to Asia, ultimately leading to the Age of Discovery. Technological advances were also made, such as the early flight of Eilmer of Malmesbury (who had studied mathematics in 11th-century England), and the metallurgical achievements of the Cistercian blast furnace at Laskill. ==== Medieval universities ==== An intellectual revitalization of Western Europe started with the birth of medieval universities in the 12th century. These urban institutions grew from the informal scholarly activities of learned friars who visited monasteries, consulted libraries, and conversed with other fellow scholars. A friar who became well-known would attract a following of disciples, giving rise to a brotherhood of scholars (or collegium in Latin). A collegium might travel to a town or request a monastery to host them. However, if the number of scholars within a collegium grew too large, they would opt to settle in a town instead. As the number of collegia within a town grew, the collegia might request that their king grant them a charter that would convert them into a universitas. Many universities were chartered during this period, with the first in Bologna in 1088, followed by Paris in 1150, Oxford in 1167, and Cambridge in 1231. The granting of a charter meant that the medieval universities were partially sovereign and independent from local authorities. Their independence allowed them to conduct themselves and judge their own members based on their own rules. Furthermore, as initially religious institutions, their faculties and students were protected from capital punishment (e.g., gallows). Such independence was a matter of custom, which could, in principle, be revoked by their respective rulers if they felt threatened. Discussions of various subjects or claims at these medieval institutions, no matter how controversial, were done in a formalized way so as to declare such discussions as being within the bounds of a university and therefore protected by the privileges of that institution's sovereignty. A claim could be described as ex cathedra (literally "from the chair", used within the context of teaching) or ex hypothesi (by hypothesis). This meant that the discussions were presented as purely an intellectual exercise that did not require those involved to commit themselves to the truth of a claim or to proselytize. Modern academic concepts and practices such as academic freedom or freedom of inquiry are remnants of these medieval privileges that were tolerated in the past. 
The curriculum of these medieval institutions centered on the seven liberal arts, which were aimed at providing beginning students with the skills for reasoning and scholarly language. Students would begin their studies with the first three liberal arts, or Trivium (grammar, rhetoric, and logic), followed by the next four liberal arts, or Quadrivium (arithmetic, geometry, astronomy, and music). Those who completed these requirements and received their baccalaureate (or Bachelor of Arts) had the option to join the higher faculty (law, medicine, or theology), which would confer an LLD for a lawyer, an MD for a physician, or a ThD for a theologian. Students who chose to remain in the lower faculty (arts) could work towards a Magister (or Master's) degree and would study three philosophies: metaphysics, ethics, and natural philosophy. Latin translations of Aristotle's works such as De Anima (On the Soul) and the commentaries on them were required reading. As time passed, the lower faculty was allowed to confer its own doctoral degree, called the PhD. Many of the Masters were drawn to encyclopedias and used them as textbooks. But these scholars yearned for the complete original texts of the Ancient Greek philosophers, mathematicians, and physicians such as Aristotle, Euclid, and Galen, which were not available to them at the time. These Ancient Greek texts were to be found in the Byzantine Empire and the Islamic World. ==== Translations of Greek and Arabic sources ==== Contact with the Byzantine Empire, and with the Islamic world during the Reconquista and the Crusades, allowed Latin Europe access to scientific Greek and Arabic texts, including the works of Aristotle, Ptolemy, Isidore of Miletus, John Philoponus, Jābir ibn Hayyān, al-Khwarizmi, Alhazen, Avicenna, and Averroes. European scholars had access to the translation programs of Raymond of Toledo, who sponsored the 12th-century Toledo School of Translators from Arabic to Latin. Later translators like Michael Scotus would learn Arabic in order to study these texts directly. The European universities aided materially in the translation and propagation of these texts and started a new infrastructure which was needed for scientific communities. In fact, the European universities put many works about the natural world and the study of nature at the center of their curricula, with the result that the "medieval university laid far greater emphasis on science than does its modern counterpart and descendent." At the beginning of the 13th century, there were reasonably accurate Latin translations of the main works of almost all the intellectually crucial ancient authors, allowing a sound transfer of scientific ideas via both the universities and the monasteries. By then, the natural philosophy in these texts began to be extended by scholastics such as Robert Grosseteste, Roger Bacon, Albertus Magnus and Duns Scotus. Precursors of the modern scientific method, influenced by earlier contributions of the Islamic world, can be seen already in Grosseteste's emphasis on mathematics as a way to understand nature, and in the empirical approach admired by Bacon, particularly in his Opus Majus. Pierre Duhem's thesis that the Condemnation of 1277 issued by Stephen Tempier, the Bishop of Paris, gave rise to modern science led to the study of medieval science as a serious discipline, "but no one in the field any longer endorses his view that modern science started in 1277". However, many scholars agree with Duhem's view that the mid-late Middle Ages saw important scientific developments.
==== Medieval science ==== The first half of the 14th century saw much important scientific work, largely within the framework of scholastic commentaries on Aristotle's scientific writings. William of Ockham emphasized the principle of parsimony: natural philosophers should not postulate unnecessary entities, so that motion is not a distinct thing but is only the moving object, and an intermediary "sensible species" is not needed to transmit an image of an object to the eye. Scholars such as Jean Buridan and Nicole Oresme started to reinterpret elements of Aristotle's mechanics. In particular, Buridan developed the theory that impetus was the cause of the motion of projectiles, which was a first step towards the modern concept of inertia. The Oxford Calculators began to mathematically analyze the kinematics of motion, making this analysis without considering the causes of motion. In 1348, the Black Death and other disasters brought a sudden end to this period of philosophic and scientific development. Yet the rediscovery of ancient texts was stimulated by the Fall of Constantinople in 1453, when many Byzantine scholars sought refuge in the West. Meanwhile, the introduction of printing was to have a great effect on European society. The faster dissemination of the printed word democratized learning and allowed ideas such as algebra to propagate more rapidly. These developments paved the way for the Scientific Revolution, in which the scientific inquiry halted at the onset of the Black Death resumed. == Renaissance == === Revival of learning === The renewal of learning in Europe began with 12th-century Scholasticism. The Northern Renaissance showed a decisive shift in focus from Aristotelian natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine). Thus modern science in Europe resumed in a period of great upheaval: the Protestant Reformation and the Catholic Counter-Reformation, the discovery of the Americas by Christopher Columbus, the Fall of Constantinople, and the rediscovery of Aristotle during the Scholastic period all presaged large social and political changes. A suitable environment was thereby created in which it became possible to question scientific doctrine, in much the same way that Martin Luther and John Calvin questioned religious doctrine. The works of Ptolemy (astronomy) and Galen (medicine) were found not always to match everyday observations. Work by Vesalius on human cadavers found problems with the Galenic view of anatomy. The development of cristallo, a clear glass that appeared in Venice around 1450, also contributed to the advancement of science in the period. The new glass allowed for better spectacles and eventually led to the inventions of the telescope and microscope. Theophrastus' work on rocks, Peri lithōn, remained authoritative for millennia: its interpretation of fossils was not overturned until after the Scientific Revolution. During the Italian Renaissance, Niccolò Machiavelli established the emphasis of modern political science on direct empirical observation of political institutions and actors. Later, the expansion of the scientific paradigm during the Enlightenment further pushed the study of politics beyond normative determinations. In particular, statistics, originally developed to study the subjects of the state, came to be applied to polling and voting. In archaeology, the 15th and 16th centuries saw the rise of antiquarians in Renaissance Europe who were interested in the collection of artifacts.
=== Scientific Revolution and birth of New Science === The early modern period is seen as a flowering of the European Renaissance. There was a willingness to question previously held truths and search for new answers. This resulted in a period of major scientific advancements, now known as the Scientific Revolution, which led to the emergence of a New Science that was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method. The Scientific Revolution is a convenient boundary between ancient thought and classical physics, and is traditionally held to have begun in 1543, when the books De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius, and also De Revolutionibus, by the astronomer Nicolaus Copernicus, were first printed. The period culminated with the publication of the Philosophiæ Naturalis Principia Mathematica in 1687 by Isaac Newton, representative of the unprecedented growth of scientific publications throughout Europe. Other significant scientific advances were made during this time by Galileo Galilei, Johannes Kepler, Edmond Halley, William Harvey, Pierre de Fermat, Robert Hooke, Christiaan Huygens, Tycho Brahe, Marin Mersenne, Gottfried Leibniz, Isaac Newton, and Blaise Pascal. In philosophy, major contributions were made by Francis Bacon, Sir Thomas Browne, René Descartes, Baruch Spinoza, Pierre Gassendi, Robert Boyle, and Thomas Hobbes. Christiaan Huygens derived the centripetal and centrifugal forces and was the first to apply mathematical inquiry to the description of unobservable physical phenomena. William Gilbert did some of the earliest experiments with electricity and magnetism, establishing that the Earth itself is magnetic. ==== Heliocentrism ==== The heliocentric astronomical model of the universe was refined by Nicolaus Copernicus. Copernicus proposed the idea that the Earth and all heavenly spheres, containing the planets and other objects in the cosmos, revolved around the Sun. His heliocentric model also proposed that the stars were fixed, neither rotating on an axis nor moving at all. His theory proposed the yearly revolution of the Earth and the other heavenly spheres around the Sun and made it possible to calculate the distances of the planets using deferents and epicycles. Although these calculations were not completely accurate, Copernicus was able to establish the order of the heavenly spheres by distance. The Copernican heliocentric system was a revival of the hypotheses of Aristarchus of Samos and Seleucus of Seleucia. Aristarchus of Samos had proposed that the Earth revolved around the Sun but said nothing about the other heavenly spheres' order, motion, or rotation. Seleucus of Seleucia also proposed the revolution of the Earth around the Sun but said nothing about the other heavenly spheres. In addition, Seleucus of Seleucia understood that the Moon revolved around the Earth and that its motion could be used to explain the tides of the oceans, further demonstrating his understanding of the heliocentric idea. == Age of Enlightenment == === Continuation of Scientific Revolution === The Scientific Revolution continued into the Age of Enlightenment, which accelerated the development of modern science.
==== Planets and orbits ==== The heliocentric model revived by Nicolaus Copernicus was followed by the model of planetary motion given by Johannes Kepler in the early 17th century, which proposed that the planets follow elliptical orbits, with the Sun at one focus of the ellipse. In Astronomia Nova (A New Astronomy), Kepler demonstrated the first two of his laws of planetary motion through an analysis of the orbit of Mars. Kepler introduced the revolutionary concept of the planetary orbit. Because of his work, astronomical phenomena came to be seen as governed by physical laws. ==== Emergence of chemistry ==== A decisive moment came when "chemistry" was distinguished from alchemy by Robert Boyle in his work The Sceptical Chymist in 1661, although the alchemical tradition continued for some time after his work. Other important steps included the gravimetric experimental practices of medical chemists like William Cullen, Joseph Black, Torbern Bergman and Pierre Macquer, and the work of Antoine Lavoisier (the "father of modern chemistry") on oxygen and the law of conservation of mass, which refuted phlogiston theory. Modern chemistry emerged from the sixteenth through the eighteenth centuries through the material practices and theories promoted by alchemy, medicine, manufacturing and mining. ==== Calculus and Newtonian mechanics ==== In 1687, Isaac Newton published the Principia Mathematica, detailing two comprehensive and successful physical theories: Newton's laws of motion, which led to classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity. ==== Circulatory system ==== William Harvey published De Motu Cordis in 1628, which revealed his conclusions based on his extensive studies of vertebrate circulatory systems. He identified the central role of the heart, arteries, and veins in producing blood movement in a circuit, and failed to find any confirmation of Galen's pre-existing notions of heating and cooling functions. The history of early modern biology and medicine is often told through the search for the seat of the soul. Galen, in his descriptions of his foundational work in medicine, presents the distinctions between arteries, veins, and nerves using the vocabulary of the soul. ==== Scientific societies and journals ==== A critical innovation was the creation of permanent scientific societies and their scholarly journals, which dramatically sped the diffusion of new ideas. Typical was the founding of the Royal Society in London in 1660 and, in 1665, of its journal, the Philosophical Transactions of the Royal Society, the first scientific journal in English. 1665 also saw the first journal in French, the Journal des sçavans. Drawing on the works of Newton, Descartes, Pascal and Leibniz, science was on a path to modern mathematics, physics and technology by the time of the generation of Benjamin Franklin (1706–1790), Leonhard Euler (1707–1783), Mikhail Lomonosov (1711–1765) and Jean le Rond d'Alembert (1717–1783). Denis Diderot's Encyclopédie, published between 1751 and 1772, brought this new understanding to a wider audience. The impact of this process was not limited to science and technology, but affected philosophy (Immanuel Kant, David Hume), religion (the increasingly significant impact of science upon religion), and society and politics in general (Adam Smith, Voltaire).
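The planetary-motion and gravitation results described above under "Planets and orbits" and "Calculus and Newtonian mechanics" can be stated compactly in modern notation. The symbols used here (semi-latus rectum p, eccentricity e, gravitational constant G) are modern conventions rather than Kepler's or Newton's own, so the block below is an illustrative summary rather than a historical formulation.

```latex
% Kepler's first law: each planet moves on an ellipse with the Sun at one focus,
% written in polar coordinates with the focus at the origin.
r(\theta) = \frac{p}{1 + e\cos\theta}, \qquad 0 \le e < 1

% Newton's law of universal gravitation: the attractive force between two
% masses m_1 and m_2 separated by a distance r.
F = G\,\frac{m_1 m_2}{r^2}
```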
==== Developments in geology ==== Geology did not undergo systematic restructuring during the Scientific Revolution but instead existed as a cloud of isolated, disconnected ideas about rocks, minerals, and landforms long before it became a coherent science. Robert Hooke formulated a theory of earthquakes, and Nicholas Steno developed the theory of superposition and argued that fossils were the remains of once-living creatures. Beginning with Thomas Burnet's Sacred Theory of the Earth in 1681, natural philosophers began to explore the idea that the Earth had changed over time. Burnet and his contemporaries interpreted Earth's past in terms of events described in the Bible, but their work laid the intellectual foundations for secular interpretations of Earth history. === Post-Scientific Revolution === ==== Bioelectricity ==== During the late 18th century, researchers such as Hugh Williamson and John Walsh experimented on the effects of electricity on the human body. Further studies by Luigi Galvani and Alessandro Volta established the electrical nature of what Volta called galvanism. ==== Developments in geology ==== Modern geology, like modern chemistry, gradually evolved during the 18th and early 19th centuries. Benoît de Maillet and the Comte de Buffon saw the Earth as much older than the 6,000 years envisioned by biblical scholars. Jean-Étienne Guettard and Nicolas Desmarest hiked central France and recorded their observations on some of the first geological maps. Aided by chemical experimentation, naturalists such as Scotland's John Walker, Sweden's Torbern Bergman, and Germany's Abraham Werner created comprehensive classification systems for rocks and minerals, a collective achievement that transformed geology into a cutting-edge field by the end of the eighteenth century. These early geologists also proposed generalized interpretations of Earth history that led James Hutton, Georges Cuvier and Alexandre Brongniart, following in the footsteps of Steno, to argue that layers of rock could be dated by the fossils they contained, a principle first applied to the geology of the Paris Basin. The use of index fossils became a powerful tool for making geological maps, because it allowed geologists to correlate the rocks in one locality with those of similar age in other, distant localities. ==== Birth of modern economics ==== Adam Smith's An Inquiry into the Nature and Causes of the Wealth of Nations, published in 1776, forms the basis for classical economics. Smith criticized mercantilism, advocating a system of free trade with division of labour. He postulated an "invisible hand" that regulated economic systems made up of actors guided only by self-interest. The "invisible hand", mentioned in a lost page in the middle of a chapter in the middle of the Wealth of Nations, is often advanced as Smith's central message. ==== Social science ==== Anthropology can best be understood as an outgrowth of the Age of Enlightenment. It was during this period that Europeans attempted systematically to study human behavior. Traditions of jurisprudence, history, philology and sociology developed during this time and informed the development of the social sciences of which anthropology was a part. == 19th century == The 19th century saw the birth of science as a profession. William Whewell coined the term scientist in 1833, and it soon replaced the older term natural philosopher.
=== Developments in physics === In physics, the behavior of electricity and magnetism was studied by Giovanni Aldini, Alessandro Volta, Michael Faraday, Georg Ohm, and others. The experiments, theories and discoveries of Michael Faraday, Andre-Marie Ampere, James Clerk Maxwell, and their contemporaries led to the unification of the two phenomena into a single theory of electromagnetism as described by Maxwell's equations. Thermodynamics led to an understanding of heat, and the notion of energy was defined. === Discovery of Neptune === Advances in astronomy and in optical systems in the 19th century resulted in the first observation of an asteroid (1 Ceres) in 1801 and the discovery of the planet Neptune in 1846. === Developments in mathematics === In mathematics, the notion of complex numbers finally matured and led to a subsequent analytical theory; mathematicians also began the use of hypercomplex numbers. Karl Weierstrass and others carried out the arithmetization of analysis for functions of real and complex variables. The century also saw new progress in geometry beyond the classical theories of Euclid, after a period of nearly two thousand years. The mathematical science of logic likewise had revolutionary breakthroughs after a similarly long period of stagnation. But the most important step in science at this time was the set of ideas formulated by the creators of electrical science. Their work changed the face of physics and made it possible for new technologies to come about, such as electric power, electrical telegraphy, the telephone, and radio. === Developments in chemistry === In chemistry, Dmitri Mendeleev, following the atomic theory of John Dalton, created the first periodic table of elements. Other highlights include the discoveries unveiling the nature of atomic structure and matter, as well as of new kinds of radiation. The theory that all matter is made of atoms, which are the smallest constituents of matter that cannot be broken down without losing the basic chemical and physical properties of that matter, was provided by John Dalton in 1803, although the question took a hundred years to be settled as proven. Dalton also formulated the law of mass relationships. In 1869, Dmitri Mendeleev composed his periodic table of elements on the basis of Dalton's discoveries. The synthesis of urea by Friedrich Wöhler opened a new research field, organic chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The later part of the 19th century saw the exploitation of the Earth's petrochemicals, after the exhaustion of the oil supply from whaling. By the 20th century, systematic production of refined materials provided a ready supply of products which provided not only energy, but also synthetic materials for clothing, medicine, and everyday disposable resources. Application of the techniques of organic chemistry to living organisms resulted in physiological chemistry, the precursor to biochemistry. === Age of the Earth === Over the first half of the 19th century, geologists such as Charles Lyell, Adam Sedgwick, and Roderick Murchison applied the new technique to rocks throughout Europe and eastern North America, setting the stage for more detailed, government-funded mapping projects in later decades. Midway through the 19th century, the focus of geology shifted from description and classification to attempts to understand how the surface of the Earth had changed.
The first comprehensive theories of mountain building were proposed during this period, as were the first modern theories of earthquakes and volcanoes. Louis Agassiz and others established the reality of continent-covering ice ages, and "fluvialists" like Andrew Crombie Ramsay argued that river valleys were formed, over millions of years, by the rivers that flow through them. After the discovery of radioactivity, radiometric dating methods were developed, starting in the 20th century. Alfred Wegener's theory of "continental drift" was widely dismissed when he proposed it in the 1910s, but new data gathered in the 1950s and 1960s led to the theory of plate tectonics, which provided a plausible mechanism for it. Plate tectonics also provided a unified explanation for a wide range of seemingly unrelated geological phenomena. Since the 1960s it has served as the unifying principle in geology. === Evolution and inheritance === Perhaps the most prominent, controversial, and far-reaching theory in all of science has been the theory of evolution by natural selection, which was independently formulated by Charles Darwin and Alfred Wallace. It was described in detail in Darwin's book The Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes over long periods of time. The theory of evolution in its current form affects almost all areas of biology. The implications of evolution for fields outside of pure science have led to both opposition and support from different parts of society, and profoundly influenced the popular understanding of "man's place in the universe". Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics. === Germ theory === Another important landmark in medicine and biology was the successful effort to prove the germ theory of disease. Following this, Louis Pasteur made many discoveries in both medicine and chemistry, including the asymmetry of crystals. In 1847, Hungarian physician Ignác Fülöp Semmelweis dramatically reduced the occurrence of puerperal fever by simply requiring physicians to wash their hands before attending to women in childbirth. This discovery predated the germ theory of disease. However, Semmelweis' findings were not appreciated by his contemporaries and handwashing came into use only with discoveries by British surgeon Joseph Lister, who in 1865 proved the principles of antisepsis. Lister's work was based on the important findings by French biologist Louis Pasteur. Pasteur was able to link microorganisms with disease, revolutionizing medicine. He also devised one of the most important methods in preventive medicine when, in 1885, he produced a vaccine against rabies. Pasteur invented the process of pasteurization to help prevent the spread of disease through milk and other foods. === Schools of economics === Karl Marx developed an alternative economic theory, called Marxian economics. Marxian economics is based on the labor theory of value and assumes the value of a good to be based on the amount of labor required to produce it. Under this axiom, capitalism was based on employers not paying the full value of workers' labor, in order to create profit. The Austrian School responded to Marxian economics by viewing entrepreneurship as the driving force of economic development. This replaced the labor theory of value with a system of supply and demand.
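The radiometric dating methods mentioned in the geology passage above rest on the exponential law of radioactive decay. The sketch below is a minimal illustration that assumes a single decay channel and no daughter isotope present when the rock formed; the potassium-40 half-life figure and the function name are illustrative choices, not a description of any particular laboratory procedure.

```python
# A minimal sketch of the radiometric age equation, assuming no initial daughter
# isotope and a single decay channel: D = P * (exp(lambda * t) - 1), so
# t = ln(1 + D/P) / lambda, with lambda = ln(2) / half-life.
import math

K40_HALF_LIFE_YEARS = 1.25e9  # illustrative half-life for potassium-40

def radiometric_age(daughter_to_parent_ratio: float,
                    half_life_years: float = K40_HALF_LIFE_YEARS) -> float:
    """Return the age in years implied by a measured daughter/parent ratio."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent_ratio) / decay_constant

# A sample whose daughter/parent ratio is 1.0 is exactly one half-life old.
print(f"{radiometric_age(1.0):.3e} years")  # ~1.25e9
```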
=== Founding of psychology === Psychology as a scientific enterprise independent of philosophy began in 1879, when Wilhelm Wundt founded the first laboratory dedicated exclusively to psychological research (in Leipzig). Other important early contributors to the field include Hermann Ebbinghaus (a pioneer in memory studies), Ivan Pavlov (who discovered classical conditioning), William James, and Sigmund Freud. Freud's influence has been enormous, though more as a cultural icon than as a force in scientific psychology. === Modern sociology === Modern sociology emerged in the early 19th century as the academic response to the modernization of the world. For many early sociologists (e.g., Émile Durkheim), the aim of sociology lay in structuralism: understanding the cohesion of social groups and developing an "antidote" to social disintegration. Max Weber was concerned with the modernization of society through the concept of rationalization, which he believed would trap individuals in an "iron cage" of rational thought. Some sociologists, including Georg Simmel and W. E. B. Du Bois, used more microsociological, qualitative analyses. This microlevel approach played an important role in American sociology, with the theories of George Herbert Mead and his student Herbert Blumer resulting in the creation of the symbolic interactionism approach to sociology. Auguste Comte, in particular, illustrated with his work the transition from a theological to a metaphysical stage and, from this, to a positive stage. Comte also took up the classification of the sciences, as well as humanity's transition towards a state of progress attributable to a re-examination of nature and to the affirmation of 'sociality' as the basis of a scientifically interpreted society. === Romanticism === The Romantic Movement of the early 19th century reshaped science by opening up new pursuits unexpected in the classical approaches of the Enlightenment. The decline of Romanticism occurred because a new movement, Positivism, began to take hold of the ideals of the intellectuals after 1840 and lasted until about 1880. At the same time, the romantic reaction to the Enlightenment produced thinkers such as Johann Gottfried Herder and later Wilhelm Dilthey, whose work formed the basis for the culture concept that is central to the discipline of anthropology. Traditionally, much of the history of that subject was based on colonial encounters between Western Europe and the rest of the world, and much of 18th- and 19th-century anthropology is now classed as scientific racism. During the late 19th century, battles over the "study of man" took place between those of an "anthropological" persuasion (relying on anthropometrical techniques) and those of an "ethnological" persuasion (looking at cultures and traditions), and these distinctions became part of the later divide between physical anthropology and cultural anthropology, the latter ushered in by the students of Franz Boas. == 20th century == Science advanced dramatically during the 20th century. There were new and radical developments in the physical and life sciences, building on the progress from the 19th century. === Theory of relativity and quantum mechanics === The beginning of the 20th century brought the start of a revolution in physics. The long-held theories of Newton were shown not to be correct in all circumstances.
Beginning in 1900, Max Planck, Albert Einstein, Niels Bohr and others developed quantum theories to explain various anomalous experimental results by introducing discrete energy levels. Not only did quantum mechanics show that the laws of motion did not hold on small scales, but the theory of general relativity, proposed by Einstein in 1915, showed that the fixed background of spacetime, on which both Newtonian mechanics and special relativity depended, could not exist. In 1925, Werner Heisenberg and Erwin Schrödinger formulated quantum mechanics, which explained the preceding quantum theories. Currently, general relativity and quantum mechanics are inconsistent with each other, and efforts are underway to unify the two. === Big Bang === The observation by Edwin Hubble in 1929 that the speed at which galaxies recede positively correlates with their distance led to the understanding that the universe is expanding and to the formulation of the Big Bang theory by Georges Lemaître. George Gamow, Ralph Alpher, and Robert Herman had calculated that there should be evidence for a Big Bang in the background temperature of the universe. In 1964, Arno Penzias and Robert Wilson discovered a 3 kelvin background hiss with their Bell Labs radio telescope (the Holmdel Horn Antenna), which was evidence for this hypothesis and formed the basis for a number of results that helped determine the age of the universe. === Big science === In 1938 Otto Hahn and Fritz Strassmann discovered nuclear fission with radiochemical methods, and in 1939 Lise Meitner and Otto Robert Frisch wrote the first theoretical interpretation of the fission process, which was later improved by Niels Bohr and John A. Wheeler. Further developments took place during World War II, which led to the practical application of radar and the development and use of the atomic bomb. Around this time, Chien-Shiung Wu was recruited by the Manhattan Project to help develop a process for separating uranium metal into the U-235 and U-238 isotopes by gaseous diffusion. She was an expert experimentalist in beta decay and weak interaction physics. Wu designed an experiment (see Wu experiment) that enabled theoretical physicists Tsung-Dao Lee and Chen-Ning Yang to disprove the law of conservation of parity experimentally, winning them a Nobel Prize in 1957. Though the process had begun with the invention of the cyclotron by Ernest O. Lawrence in the 1930s, physics in the postwar period entered into a phase of what historians have called "Big Science", requiring massive machines, budgets, and laboratories in order to test theories and move into new frontiers. State governments became the primary patrons of physics, recognizing that the support of "basic" research could often lead to technologies useful for both military and industrial applications. === Advances in genetics === In the early 20th century, the study of heredity became a major investigation after the rediscovery in 1900 of the laws of inheritance developed by Mendel. The 20th century also saw the integration of physics and chemistry, with chemical properties explained as the result of the electronic structure of the atom. Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. Pauling's work culminated in the physical modelling of DNA, the secret of life (in the words of Francis Crick, 1953).
In the same year, the Miller–Urey experiment demonstrated, in a simulation of primordial processes, that basic constituents of proteins, simple amino acids, could themselves be built up from simpler molecules, kick-starting decades of research into the chemical origins of life. In 1953, James D. Watson and Francis Crick, building on the work of Maurice Wilkins and Rosalind Franklin, clarified the basic structure of DNA, the genetic material for expressing life in all its forms; in their famous paper "Molecular Structure of Nucleic Acids" they suggested that the structure of DNA was a double helix. In the late 20th century, the possibilities of genetic engineering became practical for the first time, and a massive international effort began in 1990 to map out an entire human genome (the Human Genome Project). The discipline of ecology typically traces its origin to the synthesis of Darwinian evolution and Humboldtian biogeography in the late 19th and early 20th centuries. Equally important in the rise of ecology, however, were microbiology and soil science, particularly the cycle-of-life concept prominent in the work of Louis Pasteur and Ferdinand Cohn. The word ecology was coined by Ernst Haeckel, whose particularly holistic view of nature in general (and Darwin's theory in particular) was important in the spread of ecological thinking. The field of ecosystem ecology emerged in the Atomic Age with the use of radioisotopes to visualize food webs, and by the 1970s ecosystem ecology deeply influenced global environmental management. === Space exploration === In 1925, Cecilia Payne-Gaposchkin determined that stars were composed mostly of hydrogen and helium. She was dissuaded by astronomer Henry Norris Russell from publishing this finding in her PhD thesis because of the widely held belief that stars had the same composition as the Earth. However, four years later, in 1929, Henry Norris Russell came to the same conclusion through different reasoning, and the discovery was eventually accepted. In 1987, supernova SN 1987A was observed by astronomers on Earth both visually and, in a triumph for neutrino astronomy, by the solar neutrino detectors at Kamiokande. The measured solar neutrino flux, however, was only a fraction of its theoretically expected value, and this discrepancy forced a change in some values in the standard model of particle physics. === Neuroscience as a distinct discipline === The understanding of neurons and the nervous system became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model, known as the Hodgkin–Huxley model, for the transmission of the electrical signals in neurons of the giant axon of a squid, which they called "action potentials", and for how these signals are initiated and propagated. In 1961–1962, Richard FitzHugh and J. Nagumo simplified the Hodgkin–Huxley model, in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981 Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation. Neuroscience began to be recognized as a distinct academic discipline in its own right. Eric Kandel and collaborators have cited David Rioch, Francis O.
Schmitt, and Stephen Kuffler as having played critical roles in establishing the field. === Plate tectonics === Geologists' embrace of plate tectonics became part of a broadening of the field from a study of rocks into a study of the Earth as a planet. Other elements of this transformation include geophysical studies of the interior of the Earth, the grouping of geology with meteorology and oceanography as one of the "earth sciences", and comparisons of Earth and the solar system's other rocky planets. === Applications === In terms of applications, a massive number of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first car was introduced by Karl Benz in 1885. The first airplane flight occurred in 1903, and by the end of the century airliners flew thousands of miles in a matter of hours. The development of the radio, television and computers caused massive changes in the dissemination of information. Advances in biology also led to large increases in food production, as well as the near-elimination of diseases such as polio through the vaccine developed by Dr. Jonas Salk. Gene mapping and gene sequencing, invented by Drs. Mark Skolnick and Walter Gilbert, respectively, are the two technologies that made the Human Genome Project feasible. Computer science, built upon a foundation of theoretical linguistics, discrete mathematics, and electrical engineering, studies the nature and limits of computation. Subfields include computability, computational complexity, database design, computer networking, artificial intelligence, and the design of computer hardware. One area in which advances in computing have contributed to more general scientific development is by facilitating large-scale archiving of scientific data. Contemporary computer science typically distinguishes itself by emphasizing mathematical 'theory' in contrast to the practical emphasis of software engineering. Einstein's paper "On the Quantum Theory of Radiation" outlined the principles of the stimulated emission of photons. This led to the invention of the laser (light amplification by stimulated emission of radiation) and the optical amplifier, which ushered in the Information Age. It is optical amplification that allows fiber optic networks to transmit the massive capacity of the Internet. Based on wireless transmission of electromagnetic radiation and global networks of cellular operation, the mobile phone became a primary means to access the internet. === Developments in political science and economics === In political science during the 20th century, the study of ideology, behaviouralism and international relations led to a multitude of 'pol-sci' subdisciplines including rational choice theory, voting theory, game theory (also used in economics), psephology, political geography/geopolitics, political anthropology/political psychology/political sociology, political economy, policy analysis, public administration, comparative political analysis and peace studies/conflict analysis. In economics, John Maynard Keynes prompted a division between microeconomics and macroeconomics in the 1920s. Under Keynesian economics, macroeconomic trends can overwhelm economic choices made by individuals, and governments should promote aggregate demand for goods as a means to encourage economic expansion. Following World War II, Milton Friedman created the concept of monetarism.
Monetarism focuses on using the supply and demand of money as a method for controlling economic activity. In the 1970s, monetarism was adapted into supply-side economics, which advocates reducing taxes as a means to increase the amount of money available for economic expansion. Other modern schools of economic thought are New Classical economics and New Keynesian economics. New Classical economics was developed in the 1970s, emphasizing solid microeconomics as the basis for macroeconomic growth. New Keynesian economics was created partially in response to New Classical economics. It shows how imperfect competition and market rigidities mean that monetary policy has real effects, and it enables the analysis of different policies. === Developments in psychology, sociology, and anthropology === Psychology in the 20th century saw a rejection of Freud's theories as being too unscientific, and a reaction against Edward Titchener's atomistic approach to the mind. This led to the formulation of behaviorism by John B. Watson, which was popularized by B.F. Skinner. Behaviorism proposed epistemologically limiting psychological study to overt behavior, since that could be reliably measured. Scientific knowledge of the "mind" was considered too metaphysical, hence impossible to achieve. The final decades of the 20th century saw the rise of cognitive science, which considers the mind once again a subject for investigation, using the tools of psychology, linguistics, computer science, philosophy, and neurobiology. New methods of visualizing the activity of the brain, such as PET scans and CAT scans, began to exert their influence as well, leading some researchers to investigate the mind by investigating the brain, rather than cognition. These new forms of investigation assume that a wide understanding of the human mind is possible, and that such an understanding may be applied to other research domains, such as artificial intelligence. Evolutionary theory was applied to behavior and introduced to anthropology and psychology through the works of cultural anthropologist Napoleon Chagnon. Physical anthropology would become biological anthropology, incorporating elements of evolutionary biology. American sociology in the 1940s and 1950s was dominated largely by Talcott Parsons, who argued that aspects of society that promoted structural integration were therefore "functional". This structural functionalism approach was questioned in the 1960s, when sociologists came to see this approach as merely a justification for inequalities present in the status quo. In reaction, conflict theory was developed, which was based in part on the philosophies of Karl Marx. Conflict theorists saw society as an arena in which different groups compete for control over resources. Symbolic interactionism also came to be regarded as central to sociological thinking. Erving Goffman saw social interactions as a stage performance, with individuals preparing "backstage" and attempting to control their audience through impression management. While these theories are currently prominent in sociological thought, other approaches exist, including feminist theory, post-structuralism, rational choice theory, and postmodernism. In the mid-20th century, many of the methodologies of earlier anthropological and ethnographical study were reevaluated with an eye towards research ethics, while at the same time the scope of investigation broadened far beyond the traditional study of "primitive cultures".
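As an illustration of the increasingly quantitative neuron models discussed in the neuroscience subsection above, the FitzHugh–Nagumo simplification of the Hodgkin–Huxley model can be integrated in a few lines. This is a minimal sketch: the parameter values, the simple Euler integration, and the function name are illustrative choices rather than those of the original 1961–1962 papers.

```python
# A minimal sketch of the FitzHugh-Nagumo model: a fast excitation variable v
# and a slow recovery variable w, driven by a constant external current I_ext.
# Parameter values and the Euler step are illustrative choices.
def fitzhugh_nagumo(I_ext=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    v, w = -1.0, 1.0                    # initial membrane-like and recovery values
    trajectory = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I_ext   # fast "excitation" dynamics
        dw = eps * (v + a - b * w)      # slow "recovery" dynamics
        v += dt * dv
        w += dt * dw
        trajectory.append((v, w))
    return trajectory

traj = fitzhugh_nagumo()
print(max(v for v, _ in traj), min(v for v, _ in traj))  # range of the oscillation
```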
== 21st century == In the early 21st century, some concepts that originated in 20th century physics were experimentally confirmed. On 4 July 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson, confirmed as such by the following March. Gravitational waves were first detected on 14 September 2015. The Human Genome Project was declared complete in 2003. The CRISPR gene editing technique developed in 2012 allowed scientists to precisely and easily modify DNA and led to the development of new medicines. In 2020, xenobots, a new class of living robots, were invented; reproductive capabilities were introduced the following year. Positive psychology is a branch of psychology founded in 1998 by Martin Seligman that is concerned with the study of happiness, mental well-being, and positive human functioning, and is a reaction to 20th century psychology's emphasis on mental illness and dysfunction. == See also == == References == == Further reading == == External links == 'What is the History of Science', British Academy British Society for the History of Science "Scientific Change". Internet Encyclopedia of Philosophy. The CNRS History of Science and Technology Research Center in Paris (France) (in French) Henry Smith Williams, History of Science, Vols 1–4, online text Digital Archives of the National Institute of Standards and Technology (NIST) Digital facsimiles of books from the History of Science Collection Archived 13 January 2020 at the Wayback Machine, Linda Hall Library Digital Collections Division of History of Science and Technology of the International Union of History and Philosophy of Science Giants of Science (website of the Institute of National Remembrance) History of Science Digital Collection: Utah State University – Contains primary sources by such major figures in the history of scientific inquiry as Otto Brunfels, Charles Darwin, Erasmus Darwin, Carolus Linnaeus, Antony van Leeuwenhoek, Jan Swammerdam, James Sowerby, Andreas Vesalius, and others. 
History of Science Society ("HSS") Archived 15 September 2020 at the Wayback Machine Inter-Divisional Teaching Commission (IDTC) of the International Union for the History and Philosophy of Science (IUHPS) Archived 13 January 2020 at the Wayback Machine International Academy of the History of Science International History, Philosophy and Science Teaching Group IsisCB Explore: History of Science Index An open access discovery tool Museo Galileo – Institute and Museum of the History of Science in Florence, Italy National Center for Atmospheric Research (NCAR) Archives The official site of the Nobel Foundation. Features biographies and info on Nobel laureates The Royal Society, trailblazing science from 1650 to date Archived 18 August 2015 at the Wayback Machine The Vega Science Trust Free to view videos of scientists including Feynman, Perutz, Rotblat, Born and many Nobel Laureates. A Century of Science in America: with special reference to the American Journal of Science, 1818-1918
Wikipedia/Medieval_science
In economics, comparative statics is the comparison of two different economic outcomes, before and after a change in some underlying exogenous parameter. As a type of static analysis it compares two different equilibrium states, after the process of adjustment (if any). It does not study the motion towards equilibrium, nor the process of the change itself. Comparative statics is commonly used to study changes in supply and demand when analyzing a single market, and to study changes in monetary or fiscal policy when analyzing the whole economy. Comparative statics is a tool of analysis in microeconomics (including general equilibrium analysis) and macroeconomics. Comparative statics was formalized by John R. Hicks (1939) and Paul A. Samuelson (1947) (Kehoe, 1987, p. 517) but was presented graphically from at least the 1870s. For models of stable equilibrium rates of change, such as the neoclassical growth model, comparative dynamics is the counterpart of comparative statics (Eatwell, 1987). == Linear approximation == Comparative statics results are usually derived by using the implicit function theorem to calculate a linear approximation to the system of equations that defines the equilibrium, under the assumption that the equilibrium is stable. That is, if we consider a sufficiently small change in some exogenous parameter, we can calculate how each endogenous variable changes using only the first derivatives of the terms that appear in the equilibrium equations. For example, suppose the equilibrium value of some endogenous variable x {\displaystyle x} is determined by the following equation: f ( x , a ) = 0 {\displaystyle f(x,a)=0\,} where a {\displaystyle a} is an exogenous parameter. Then, to a first-order approximation, the change in x {\displaystyle x} caused by a small change in a {\displaystyle a} must satisfy: B d x + C d a = 0. {\displaystyle B{\text{d}}x+C{\text{d}}a=0.} Here d x {\displaystyle {\text{d}}x} and d a {\displaystyle {\text{d}}a} represent the changes in x {\displaystyle x} and a {\displaystyle a} , respectively, while B {\displaystyle B} and C {\displaystyle C} are the partial derivatives of f {\displaystyle f} with respect to x {\displaystyle x} and a {\displaystyle a} (evaluated at the initial values of x {\displaystyle x} and a {\displaystyle a} ), respectively. Equivalently, we can write the change in x {\displaystyle x} as: d x = − B − 1 C d a . {\displaystyle {\text{d}}x=-B^{-1}C{\text{d}}a.} Dividing through the last equation by da gives the comparative static derivative of x with respect to a, also called the multiplier of a on x: d x d a = − B − 1 C . {\displaystyle {\frac {{\text{d}}x}{{\text{d}}a}}=-B^{-1}C.} === Many equations and unknowns === All the equations above remain true in the case of a system of n {\displaystyle n} equations in n {\displaystyle n} unknowns. In other words, suppose f ( x , a ) = 0 {\displaystyle f(x,a)=0} represents a system of n {\displaystyle n} equations involving the vector of n {\displaystyle n} unknowns x {\displaystyle x} , and the vector of m {\displaystyle m} given parameters a {\displaystyle a} . If we make a sufficiently small change d a {\displaystyle {\text{d}}a} in the parameters, then the resulting changes in the endogenous variables can be approximated arbitrarily well by d x = − B − 1 C d a {\displaystyle {\text{d}}x=-B^{-1}C{\text{d}}a} . 
In this case, B {\displaystyle B} represents the n {\displaystyle n} × n {\displaystyle n} matrix of partial derivatives of the functions f {\displaystyle f} with respect to the variables x {\displaystyle x} , and C {\displaystyle C} represents the n {\displaystyle n} × m {\displaystyle m} matrix of partial derivatives of the functions f {\displaystyle f} with respect to the parameters a {\displaystyle a} . (The derivatives in B {\displaystyle B} and C {\displaystyle C} are evaluated at the initial values of x {\displaystyle x} and a {\displaystyle a} .) Note that if one wants just the comparative static effect of one exogenous variable on one endogenous variable, Cramer's Rule can be used on the totally differentiated system of equations B d x + C d a = 0 {\displaystyle B{\text{d}}x+C{\text{d}}a\,=0} . === Stability === The assumption that the equilibrium is stable matters for two reasons. First, if the equilibrium were unstable, a small parameter change might cause a large jump in the value of x {\displaystyle x} , invalidating the use of a linear approximation. Moreover, Paul A. Samuelson's correspondence principle: pp.122–123.  states that stability of equilibrium has qualitative implications about the comparative static effects. In other words, knowing that the equilibrium is stable may help us predict whether each of the coefficients in the vector B − 1 C {\displaystyle B^{-1}C} is positive or negative. Specifically, one of the n necessary and jointly sufficient conditions for stability is that the determinant of the n×n matrix B have a particular sign; since this determinant appears as the denominator in the expression for B − 1 {\displaystyle B^{-1}} , the sign of the determinant influences the signs of all the elements of the vector B − 1 C d a {\displaystyle B^{-1}C{\text{d}}a} of comparative static effects. ==== An example of the role of the stability assumption ==== Suppose that the quantities demanded and supplied of a product are determined by the following equations: Q d ( P ) = a + b P {\displaystyle Q^{d}(P)=a+bP} Q s ( P ) = c + g P {\displaystyle Q^{s}(P)=c+gP} where Q d {\displaystyle Q^{d}} is the quantity demanded, Q s {\displaystyle Q^{s}} is the quantity supplied, P is the price, a and c are intercept parameters determined by exogenous influences on demand and supply respectively, b < 0 is the reciprocal of the slope of the demand curve, and g is the reciprocal of the slope of the supply curve; g > 0 if the supply curve is upward sloped, g = 0 if the supply curve is vertical, and g < 0 if the supply curve is backward-bending. If we equate quantity supplied with quantity demanded to find the equilibrium price P e q b {\displaystyle P^{eqb}} , we find that P e q b = a − c g − b . {\displaystyle P^{eqb}={\frac {a-c}{g-b}}.} This means that the equilibrium price depends positively on the demand intercept if g – b > 0, but depends negatively on it if g – b < 0. Which of these possibilities is relevant? In fact, starting from an initial static equilibrium and then changing a, the new equilibrium is relevant only if the market actually goes to that new equilibrium. Suppose that price adjustments in the market occur according to d P d t = λ ( Q d ( P ) − Q s ( P ) ) {\displaystyle {\frac {dP}{dt}}=\lambda (Q^{d}(P)-Q^{s}(P))} where λ {\displaystyle \lambda } > 0 is the speed of adjustment parameter and d P d t {\displaystyle {\frac {dP}{dt}}} is the time derivative of the price — that is, it denotes how fast and in what direction the price changes. 
By stability theory, P will converge to its equilibrium value if and only if the derivative d ( d P / d t ) d P {\displaystyle {\frac {d(dP/dt)}{dP}}} is negative. This derivative is given by d ( d P / d t ) d P = − λ ( − b + g ) . {\displaystyle {\frac {d(dP/dt)}{dP}}=-\lambda (-b+g).} This is negative if and only if g – b > 0, in which case the demand intercept parameter a positively influences the price. So we can say that while the direction of effect of the demand intercept on the equilibrium price is ambiguous when all we know is that the reciprocal of the supply curve's slope, g, is negative, in the only relevant case (in which the price actually goes to its new equilibrium value) an increase in the demand intercept increases the price. Note that this case, with g – b > 0, is the case in which the supply curve, if negatively sloped, is steeper than the demand curve. == Without constraints == Suppose p ( x ; q ) {\displaystyle p(x;q)} is a smooth and strictly concave objective function where x is a vector of n endogenous variables and q is a vector of m exogenous parameters. Consider the unconstrained optimization problem x ∗ ( q ) = arg ⁡ max p ( x ; q ) {\displaystyle x^{*}(q)=\arg \max p(x;q)} . Let f ( x ; q ) = D x p ( x ; q ) {\displaystyle f(x;q)=D_{x}p(x;q)} , the n by n matrix of first partial derivatives of p ( x ; q ) {\displaystyle p(x;q)} with respect to its first n arguments x1,...,xn. The maximizer x ∗ ( q ) {\displaystyle x^{*}(q)} is defined by the n×1 first order condition f ( x ∗ ( q ) ; q ) = 0 {\displaystyle f(x^{*}(q);q)=0} . Comparative statics asks how this maximizer changes in response to changes in the m parameters. The aim is to find ∂ x i ∗ / ∂ q j , i = 1 , . . . , n , j = 1 , . . . , m {\displaystyle \partial x_{i}^{*}/\partial q_{j},i=1,...,n,j=1,...,m} . The strict concavity of the objective function implies that the Jacobian of f, which is exactly the matrix of second partial derivatives of p with respect to the endogenous variables, is nonsingular (has an inverse). By the implicit function theorem, then, x ∗ ( q ) {\displaystyle x^{*}(q)} may be viewed locally as a continuously differentiable function, and the local response of x ∗ ( q ) {\displaystyle x^{*}(q)} to small changes in q is given by D q x ∗ ( q ) = − [ D x f ( x ∗ ( q ) ; q ) ] − 1 D q f ( x ∗ ( q ) ; q ) . {\displaystyle D_{q}x^{*}(q)=-[D_{x}f(x^{*}(q);q)]^{-1}D_{q}f(x^{*}(q);q).} Applying the chain rule and first order condition, D q p ( x ∗ ( q ) , q ) = D q p ( x ; q ) | x = x ∗ ( q ) . {\displaystyle D_{q}p(x^{*}(q),q)=D_{q}p(x;q)|_{x=x^{*}(q)}.} (See Envelope theorem). === Application for profit maximization === Suppose a firm produces n goods in quantities x 1 , . . . , x n {\displaystyle x_{1},...,x_{n}} . The firm's profit is a function p of x 1 , . . . , x n {\displaystyle x_{1},...,x_{n}} and of m exogenous parameters q 1 , . . . , q m {\displaystyle q_{1},...,q_{m}} which may represent, for instance, various tax rates. Provided the profit function satisfies the smoothness and concavity requirements, the comparative statics method above describes the changes in the firm's profit due to small changes in the tax rates. == With constraints == A generalization of the above method allows the optimization problem to include a set of constraints. This leads to the general envelope theorem. Applications include determining changes in Marshallian demand in response to changes in price or wage. 
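The machinery above can be illustrated numerically. The following Python sketch (the specific parameter values and helper names are illustrative assumptions, not taken from the sources cited in this article) applies the implicit-function-theorem formula dx = −B⁻¹C da to the supply-and-demand example: it computes the equilibrium price, the multiplier dP/da = 1/(g − b), and the stability condition −λ(g − b) < 0.

```python
# Minimal numerical sketch of comparative statics for the linear
# supply-and-demand example; all parameter values are illustrative.
a, b = 10.0, -2.0   # demand intercept and reciprocal demand slope (b < 0)
c, g = 2.0, 1.0     # supply intercept and reciprocal supply slope (g > 0)
lam = 0.5           # speed-of-adjustment parameter (lambda > 0)

# Excess demand f(P, a) = Qd(P) - Qs(P); the equilibrium solves f(P, a) = 0.
def excess_demand(P, A):
    return (A + b * P) - (c + g * P)

# Equilibrium price P_eqb = (a - c) / (g - b).
P_eqb = (a - c) / (g - b)
print("excess demand at P_eqb:", excess_demand(P_eqb, a))  # ~0

# Implicit-function-theorem multiplier dP/da = -B^{-1} C,
# with B = df/dP = b - g and C = df/da = 1.
B = b - g
C = 1.0
multiplier = -C / B          # equals 1 / (g - b)

# Finite-difference check of the multiplier.
da = 1e-6
P_shifted = (a + da - c) / (g - b)
print("P_eqb                =", P_eqb)
print("dP/da (formula)      =", multiplier)
print("dP/da (finite diff.) =", (P_shifted - P_eqb) / da)

# Stability of dP/dt = lambda * (Qd - Qs): stable iff -lambda*(g - b) < 0.
print("equilibrium stable?  ", -lam * (g - b) < 0)
```

The finite-difference check agrees with the analytic multiplier only for small changes in a, reflecting the local, first-order nature of the approximation.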
== Limitations and extensions == One limitation of comparative statics using the implicit function theorem is that results are valid only in a (potentially very small) neighborhood of the optimum—that is, only for very small changes in the exogenous variables. Another limitation is the potentially overly restrictive nature of the assumptions conventionally used to justify comparative statics procedures. For example, John Nachbar discovered in one of his case studies that using comparative statics in general equilibrium analysis works best with very small, individual level of data rather than at an aggregate level. Paul Milgrom and Chris Shannon pointed out in 1994 that the assumptions conventionally used to justify the use of comparative statics on optimization problems are not actually necessary—specifically, the assumptions of convexity of preferred sets or constraint sets, smoothness of their boundaries, first and second derivative conditions, and linearity of budget sets or objective functions. In fact, sometimes a problem meeting these conditions can be monotonically transformed to give a problem with identical comparative statics but violating some or all of these conditions; hence these conditions are not necessary to justify the comparative statics. Stemming from the article by Milgrom and Shannon as well as the results obtained by Veinott and Topkis an important strand of operational research was developed called monotone comparative statics. In particular, this theory concentrates on the comparative statics analysis using only conditions that are independent of order-preserving transformations. The method uses lattice theory and introduces the notions of quasi-supermodularity and the single-crossing condition. The wide application of monotone comparative statics to economics includes production theory, consumer theory, game theory with complete and incomplete information, auction theory, and others. == See also == Model (economics) Qualitative economics == Notes == == References == John Eatwell et al., ed. (1987). "Comparative dynamics," The New Palgrave: A Dictionary of Economics, v. 1, p. 517. John R. Hicks (1939). Value and Capital. Timothy J. Kehoe, 1987. "Comparative statics," The New Palgrave: A Dictionary of Economics, v. 1, pp. 517–20. Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green, 1995. Microeconomic Theory. Paul A. Samuelson (1947). Foundations of Economic Analysis. Eugene Silberberg and Wing Suen, 2000. The Structure of Economics: A Mathematical Analysis, 3rd edition. == External links == San Jose State University Economics Department - Comparative Statics Analysis AmosWeb Glossary IFCI Risk Institute Glossary (from the American Stock Exchange glossary) [1]
Wikipedia/Comparative_statics
In physics and mechanics, torque is the rotational analogue of linear force. It is also referred to as the moment of force (also abbreviated to moment). The symbol for torque is typically τ {\displaystyle {\boldsymbol {\tau }}} , the lowercase Greek letter tau. When being referred to as moment of force, it is commonly denoted by M. Just as a linear force is a push or a pull applied to a body, a torque can be thought of as a twist applied to an object with respect to a chosen point; for example, driving a screw uses torque to force it into an object, which is applied by the screwdriver rotating around its axis to the drives on the head. == Historical terminology == The term torque (from Latin torquēre, 'to twist') is said to have been suggested by James Thomson and appeared in print in April, 1884. Usage is attested the same year by Silvanus P. Thompson in the first edition of Dynamo-Electric Machinery. Thompson describes his usage of the term as follows: Just as the Newtonian definition of force is that which produces or tends to produce motion (along a line), so torque may be defined as that which produces or tends to produce torsion (around an axis). It is better to use a term which treats this action as a single definite entity than to use terms like "couple" and "moment", which suggest more complex ideas. The single notion of a twist applied to turn a shaft is better than the more complex notion of applying a linear force (or a pair of forces) with a certain leverage. Today, torque is referred to using different vocabulary depending on geographical location and field of study. This article follows the definition used in US physics in its usage of the word torque. In the UK and in US mechanical engineering, torque is referred to as moment of force, usually shortened to moment. This terminology can be traced back to at least 1811 in Siméon Denis Poisson's Traité de mécanique. An English translation of Poisson's work appears in 1842. == Definition and relation to other physical quantities == A force applied perpendicularly to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. Therefore, torque is defined as the product of the magnitude of the perpendicular component of the force and the distance of the line of action of a force from the point around which it is being determined. In three dimensions, the torque is a pseudovector; for point particles, it is given by the cross product of the displacement vector and the force vector. The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand are curled from the direction of the lever arm to the direction of the force, then the thumb points in the direction of the torque. It follows that the torque vector is perpendicular to both the position and force vectors and defines the plane in which the two vectors lie. The resulting torque vector direction is determined by the right-hand rule. Therefore any force directed parallel to the particle's position vector does not produce a torque. The magnitude of torque applied to a rigid body depends on three quantities: the force applied, the lever arm vector connecting the point about which the torque is being measured to the point of force application, and the angle between the force and lever arm vectors. 
In symbols: τ = r × F ⟹ τ = r F ⊥ = r F sin ⁡ θ {\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} \implies \tau =rF_{\perp }=rF\sin \theta } where τ {\displaystyle {\boldsymbol {\tau }}} is the torque vector and τ {\displaystyle \tau } is the magnitude of the torque, r {\displaystyle \mathbf {r} } is the position vector (a vector from the point about which the torque is being measured to the point where the force is applied), and r is the magnitude of the position vector, F {\displaystyle \mathbf {F} } is the force vector, F is the magnitude of the force vector and F⊥ is the amount of force directed perpendicularly to the position of the particle, × {\displaystyle \times } denotes the cross product, which produces a vector that is perpendicular both to r and to F following the right-hand rule, θ {\displaystyle \theta } is the angle between the force vector and the lever arm vector. The SI unit for torque is the newton-metre (N⋅m). For more on the units of torque, see § Units. === Relationship with the angular momentum === The net torque on a body determines the rate of change of the body's angular momentum, τ = d L d t {\displaystyle {\boldsymbol {\tau }}={\frac {\mathrm {d} \mathbf {L} }{\mathrm {d} t}}} where L is the angular momentum vector and t is time. For the motion of a point particle, L = I ω , {\displaystyle \mathbf {L} =I{\boldsymbol {\omega }},} where I = m r 2 {\textstyle I=mr^{2}} is the moment of inertia and ω is the orbital angular velocity pseudovector. It follows that τ n e t = I 1 ω 1 ˙ e 1 ^ + I 2 ω 2 ˙ e 2 ^ + I 3 ω 3 ˙ e 3 ^ + I 1 ω 1 d e 1 ^ d t + I 2 ω 2 d e 2 ^ d t + I 3 ω 3 d e 3 ^ d t = I ω ˙ + ω × ( I ω ) {\displaystyle {\boldsymbol {\tau }}_{\mathrm {net} }=I_{1}{\dot {\omega _{1}}}{\hat {\boldsymbol {e_{1}}}}+I_{2}{\dot {\omega _{2}}}{\hat {\boldsymbol {e_{2}}}}+I_{3}{\dot {\omega _{3}}}{\hat {\boldsymbol {e_{3}}}}+I_{1}\omega _{1}{\frac {d{\hat {\boldsymbol {e_{1}}}}}{dt}}+I_{2}\omega _{2}{\frac {d{\hat {\boldsymbol {e_{2}}}}}{dt}}+I_{3}\omega _{3}{\frac {d{\hat {\boldsymbol {e_{3}}}}}{dt}}=I{\boldsymbol {\dot {\omega }}}+{\boldsymbol {\omega }}\times (I{\boldsymbol {\omega }})} using the derivative of a vector is d e i ^ d t = ω × e i ^ {\displaystyle {d{\boldsymbol {\hat {e_{i}}}} \over dt}={\boldsymbol {\omega }}\times {\boldsymbol {\hat {e_{i}}}}} This equation is the rotational analogue of Newton's second law for point particles, and is valid for any type of trajectory. In some simple cases like a rotating disc, where only the moment of inertia on rotating axis is, the rotational Newton's second law can be τ = I α {\displaystyle {\boldsymbol {\tau }}=I{\boldsymbol {\alpha }}} where α = ω ˙ {\displaystyle {\boldsymbol {\alpha }}={\dot {\boldsymbol {\omega }}}} . ==== Proof of the equivalence of definitions ==== The definition of angular momentum for a single point particle is: L = r × p {\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} } where p is the particle's linear momentum and r is the position vector from the origin. The time-derivative of this is: d L d t = r × d p d t + d r d t × p . {\displaystyle {\frac {\mathrm {d} \mathbf {L} }{\mathrm {d} t}}=\mathbf {r} \times {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}\times \mathbf {p} .} This result can easily be proven by splitting the vectors into components and applying the product rule. 
But because the rate of change of linear momentum is force F {\textstyle \mathbf {F} } and the rate of change of position is velocity v {\textstyle \mathbf {v} } , d L d t = r × F + v × p {\displaystyle {\frac {\mathrm {d} \mathbf {L} }{\mathrm {d} t}}=\mathbf {r} \times \mathbf {F} +\mathbf {v} \times \mathbf {p} } The cross product of momentum p {\displaystyle \mathbf {p} } with its associated velocity v {\displaystyle \mathbf {v} } is zero because velocity and momentum are parallel, so the second term vanishes. Therefore, torque on a particle is equal to the first derivative of its angular momentum with respect to time. If multiple forces are applied, according Newton's second law it follows that d L d t = r × F n e t = τ n e t . {\displaystyle {\frac {\mathrm {d} \mathbf {L} }{\mathrm {d} t}}=\mathbf {r} \times \mathbf {F} _{\mathrm {net} }={\boldsymbol {\tau }}_{\mathrm {net} }.} This is a general proof for point particles, but it can be generalized to a system of point particles by applying the above proof to each of the point particles and then summing over all the point particles. Similarly, the proof can be generalized to a continuous mass by applying the above proof to each point within the mass, and then integrating over the entire mass. === Derivatives of torque === In physics, rotatum is the derivative of torque with respect to time P = d τ d t , {\displaystyle \mathbf {P} ={\frac {\mathrm {d} {\boldsymbol {\tau }}}{\mathrm {d} t}},} where τ is torque. This word is derived from the Latin word rotātus meaning 'to rotate'. The term rotatum is not universally recognized but is commonly used. There is not a universally accepted lexicon to indicate the successive derivatives of rotatum, even if sometimes various proposals have been made. Using the cross product definition of torque, an alternative expression for rotatum is: P = r × d F d t + d r d t × F . {\displaystyle \mathbf {P} =\mathbf {r} \times {\frac {\mathrm {d} \mathbf {F} }{\mathrm {d} t}}+{\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}\times \mathbf {F} .} Because the rate of change of force is yank Y {\textstyle \mathbf {Y} } and the rate of change of position is velocity v {\textstyle \mathbf {v} } , the expression can be further simplified to: P = r × Y + v × F . {\displaystyle \mathbf {P} =\mathbf {r} \times \mathbf {Y} +\mathbf {v} \times \mathbf {F} .} === Relationship with power and energy === The law of conservation of energy can also be used to understand torque. If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through an angular displacement, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, the work W can be expressed as W = ∫ θ 1 θ 2 τ d θ , {\displaystyle W=\int _{\theta _{1}}^{\theta _{2}}\tau \ \mathrm {d} \theta ,} where τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body. It follows from the work–energy principle that W also represents the change in the rotational kinetic energy Er of the body, given by E r = 1 2 I ω 2 , {\displaystyle E_{\mathrm {r} }={\tfrac {1}{2}}I\omega ^{2},} where I is the moment of inertia of the body and ω is its angular speed. Power is the work per unit time, given by P = τ ⋅ ω , {\displaystyle P={\boldsymbol {\tau }}\cdot {\boldsymbol {\omega }},} where P is power, τ is torque, ω is the angular velocity, and ⋅ {\displaystyle \cdot } represents the scalar product. 
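As a concrete numerical check of the relations just described, the short Python sketch below (the particular vectors and values are made-up inputs, not taken from any source) evaluates a torque as the cross product τ = r × F, compares its magnitude with rF sin θ, and computes the power P = τ · ω.

```python
import numpy as np

r = np.array([0.5, 0.0, 0.0])    # position (lever-arm) vector in metres; illustrative
F = np.array([0.0, 10.0, 0.0])   # applied force in newtons; illustrative

# Torque as a cross product, tau = r x F.
tau = np.cross(r, F)

# Magnitude check: |tau| should equal r * F * sin(theta).
r_mag, F_mag = np.linalg.norm(r), np.linalg.norm(F)
theta = np.arccos(np.dot(r, F) / (r_mag * F_mag))
print("tau              =", tau)                       # [0. 0. 5.] N*m
print("|tau|            =", np.linalg.norm(tau))       # 5.0
print("r F sin(theta)   =", r_mag * F_mag * np.sin(theta))

# Power delivered at a given angular velocity, P = tau . omega.
omega = np.array([0.0, 0.0, 3.0])   # rad/s about the z-axis; illustrative
print("P = tau . omega  =", np.dot(tau, omega), "W")   # 15.0 W
```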
Algebraically, the equation may be rearranged to compute torque for a given angular speed and power output. The power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any). ==== Proof ==== The work done by a variable force acting over a finite linear displacement s {\displaystyle s} is given by integrating the force with respect to an elemental linear displacement d s {\displaystyle \mathrm {d} \mathbf {s} } W = ∫ s 1 s 2 F ⋅ d s {\displaystyle W=\int _{s_{1}}^{s_{2}}\mathbf {F} \cdot \mathrm {d} \mathbf {s} } However, the infinitesimal linear displacement d s {\displaystyle \mathrm {d} \mathbf {s} } is related to a corresponding angular displacement d θ {\displaystyle \mathrm {d} {\boldsymbol {\theta }}} and the radius vector r {\displaystyle \mathbf {r} } as d s = d θ × r {\displaystyle \mathrm {d} \mathbf {s} =\mathrm {d} {\boldsymbol {\theta }}\times \mathbf {r} } Substitution in the above expression for work, , gives W = ∫ s 1 s 2 F ⋅ d θ × r {\displaystyle W=\int _{s_{1}}^{s_{2}}\mathbf {F} \cdot \mathrm {d} {\boldsymbol {\theta }}\times \mathbf {r} } The expression inside the integral is a scalar triple product F ⋅ d θ × r = r × F ⋅ d θ {\displaystyle \mathbf {F} \cdot \mathrm {d} {\boldsymbol {\theta }}\times \mathbf {r} =\mathbf {r} \times \mathbf {F} \cdot \mathrm {d} {\boldsymbol {\theta }}} , but as per the definition of torque, and since the parameter of integration has been changed from linear displacement to angular displacement, the equation becomes W = ∫ θ 1 θ 2 τ ⋅ d θ {\displaystyle W=\int _{\theta _{1}}^{\theta _{2}}{\boldsymbol {\tau }}\cdot \mathrm {d} {\boldsymbol {\theta }}} If the torque and the angular displacement are in the same direction, then the scalar product reduces to a product of magnitudes; i.e., τ ⋅ d θ = | τ | | d θ | cos ⁡ 0 = τ d θ {\displaystyle {\boldsymbol {\tau }}\cdot \mathrm {d} {\boldsymbol {\theta }}=\left|{\boldsymbol {\tau }}\right|\left|\mathrm {d} {\boldsymbol {\theta }}\right|\cos 0=\tau \,\mathrm {d} \theta } giving W = ∫ θ 1 θ 2 τ d θ {\displaystyle W=\int _{\theta _{1}}^{\theta _{2}}\tau \,\mathrm {d} \theta } == Principle of moments == The principle of moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the resultant torques due to several forces applied to about a point is equal to the sum of the contributing torques: τ = r 1 × F 1 + r 2 × F 2 + … + r N × F N . {\displaystyle \tau =\mathbf {r} _{1}\times \mathbf {F} _{1}+\mathbf {r} _{2}\times \mathbf {F} _{2}+\ldots +\mathbf {r} _{N}\times \mathbf {F} _{N}.} From this it follows that the torques resulting from N number of forces acting around a pivot on an object are balanced when r 1 × F 1 + r 2 × F 2 + … + r N × F N = 0 . {\displaystyle \mathbf {r} _{1}\times \mathbf {F} _{1}+\mathbf {r} _{2}\times \mathbf {F} _{2}+\ldots +\mathbf {r} _{N}\times \mathbf {F} _{N}=\mathbf {0} .} == Units == Torque has the dimension of force times distance, symbolically T−2L2M and those fundamental dimensions are the same as that for energy or work. Official SI literature indicates newton-metre, is properly denoted N⋅m, as the unit for torque; although this is dimensionally equivalent to the joule, which is not used for torque. 
In the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. This means that the dimensional equivalence of the newton-metre and the joule may be applied in the former but not in the latter case. This problem is addressed in orientational analysis, which treats the radian as a base unit rather than as a dimensionless unit. The traditional imperial units for torque are the pound foot (lbf-ft), or, for small values, the pound inch (lbf-in). In the US, torque is most commonly referred to as the foot-pound (denoted as either lb-ft or ft-lb) and the inch-pound (denoted as in-lb). Practitioners depend on context and the hyphen in the abbreviation to know that these refer to torque and not to energy or moment of mass (as the symbolism ft-lb would properly imply). === Conversion to other units === A conversion factor may be necessary when using different units of power or torque. For example, if rotational speed (unit: revolution per minute or second) is used in place of angular speed (unit: radian per second), we must multiply by 2π radians per revolution. In the following formulas, P is power, τ is torque, and ν (Greek letter nu) is rotational speed. P = τ ⋅ 2 π ⋅ ν {\displaystyle P=\tau \cdot 2\pi \cdot \nu } Showing units: P W = τ N ⋅ m ⋅ 2 π r a d / r e v ⋅ ν r e v / s {\displaystyle P_{\rm {W}}=\tau _{\rm {N{\cdot }m}}\cdot 2\pi _{\rm {rad/rev}}\cdot \nu _{\rm {rev/s}}} Dividing by 60 seconds per minute gives us the following. P W = τ N ⋅ m ⋅ 2 π r a d / r e v ⋅ ν r e v / m i n 60 s / m i n {\displaystyle P_{\rm {W}}={\frac {\tau _{\rm {N{\cdot }m}}\cdot 2\pi _{\rm {rad/rev}}\cdot \nu _{\rm {rev/min}}}{\rm {60~s/min}}}} where rotational speed is in revolutions per minute (rpm, rev/min). Some people (e.g., American automotive engineers) use horsepower (mechanical) for power, foot-pounds (lbf⋅ft) for torque and rpm for rotational speed. This results in the formula changing to: P h p = τ l b f ⋅ f t ⋅ 2 π r a d / r e v ⋅ ν r e v / m i n 33 , 000 . {\displaystyle P_{\rm {hp}}={\frac {\tau _{\rm {lbf{\cdot }ft}}\cdot 2\pi _{\rm {rad/rev}}\cdot \nu _{\rm {rev/min}}}{33,000}}.} The constant below (in foot-pounds per minute) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550. The use of other units (e.g., BTU per hour for power) would require a different custom conversion factor. ==== Derivation ==== For a rotating object, the linear distance covered at the circumference of rotation is the product of the radius with the angle covered. That is: linear distance = radius × angular distance. And by definition, linear distance = linear speed × time = radius × angular speed × time. By the definition of torque: torque = radius × force. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power: power = force ⋅ linear distance time = ( torque r ) ⋅ ( r ⋅ angular speed ⋅ t ) t = torque ⋅ angular speed . {\displaystyle {\begin{aligned}{\text{power}}&={\frac {{\text{force}}\cdot {\text{linear distance}}}{\text{time}}}\\[6pt]&={\frac {\left({\dfrac {\text{torque}}{r}}\right)\cdot (r\cdot {\text{angular speed}}\cdot t)}{t}}\\[6pt]&={\text{torque}}\cdot {\text{angular speed}}.\end{aligned}}} The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit of time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. 
If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give: power = torque ⋅ 2 π ⋅ rotational speed . {\displaystyle {\text{power}}={\text{torque}}\cdot 2\pi \cdot {\text{rotational speed}}.\,} If torque is in newton-metres and rotational speed in revolutions per second, the above equation gives power in newton-metres per second or watts. If Imperial units are used, and if torque is in pounds-force feet and rotational speed in revolutions per minute, the above equation gives power in foot pounds-force per minute. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft⋅lbf/min per horsepower: power = torque ⋅ 2 π ⋅ rotational speed ⋅ ft ⋅ lbf min ⋅ horsepower 33 , 000 ⋅ ft ⋅ lbf min ≈ torque ⋅ RPM 5 , 252 {\displaystyle {\begin{aligned}{\text{power}}&={\text{torque}}\cdot 2\pi \cdot {\text{rotational speed}}\cdot {\frac {{\text{ft}}{\cdot }{\text{lbf}}}{\text{min}}}\cdot {\frac {\text{horsepower}}{33,000\cdot {\frac {{\text{ft}}\cdot {\text{lbf}}}{\text{min}}}}}\\[6pt]&\approx {\frac {{\text{torque}}\cdot {\text{RPM}}}{5,252}}\end{aligned}}} because 5252.113122 ≈ 33 , 000 2 π . {\displaystyle 5252.113122\approx {\frac {33,000}{2\pi }}.\,} == Special cases and other facts == === Moment arm formula === A very useful special case, often given as the definition of torque in fields other than physics, is as follows: τ = ( moment arm ) ( force ) . {\displaystyle \tau =({\text{moment arm}})({\text{force}}).} The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force: τ = ( distance to centre ) ( force ) . {\displaystyle \tau =({\text{distance to centre}})({\text{force}}).} For example, if a person places a force of 10 N at the terminal end of a wrench that is 0.5 m long (or a force of 10 N acting 0.5 m from the twist point of a wrench of any length), the torque will be 5 N⋅m – assuming that the person moves the wrench by applying force in the plane of movement and perpendicular to the wrench. === Static equilibrium === For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems in two-dimensions, three equations are used. === Net force versus torque === When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of the point of reference. 
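The reference-point independence claimed above for a vanishing net force can be verified directly. The Python sketch below (positions and forces are made-up inputs, not from any cited source) sums the torques of a force couple about two different reference points and contrasts this with a single applied force, whose torque does depend on the chosen point.

```python
import numpy as np

def total_torque(points, forces, ref):
    # Sum of (r_i - ref) x F_i over all applied forces.
    return sum(np.cross(p - ref, f) for p, f in zip(points, forces))

# A force couple: equal and opposite forces at different points (net force = 0).
points = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
forces = [np.array([0.0, 2.0, 0.0]), np.array([0.0, -2.0, 0.0])]

ref1 = np.array([0.0, 0.0, 0.0])
ref2 = np.array([5.0, -3.0, 7.0])
print(total_torque(points, forces, ref1))   # [0. 0. 4.]
print(total_torque(points, forces, ref2))   # same result: [0. 0. 4.]

# A single force (net force nonzero): the torque depends on the reference point.
print(np.cross(points[0] - ref1, forces[0]))   # [0. 0. 2.]
print(np.cross(points[0] - ref2, forces[0]))   # a different value
```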
If the net force F {\displaystyle \mathbf {F} } is not zero, and τ 1 {\displaystyle {\boldsymbol {\tau }}_{1}} is the torque measured from r 1 {\displaystyle \mathbf {r} _{1}} , then the torque measured from r 2 {\displaystyle \mathbf {r} _{2}} is τ 2 = τ 1 + ( r 2 − r 1 ) × F {\displaystyle {\boldsymbol {\tau }}_{2}={\boldsymbol {\tau }}_{1}+(\mathbf {r} _{2}-\mathbf {r} _{1})\times \mathbf {F} } === Machine torque === Torque forms part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the angular speed of the drive shaft. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). One can measure the varying torque output over that range with a dynamometer, and show it as a torque curve. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam-engines and electric motors can start heavy loads from zero rpm without a clutch. In practice, the relationship between power and torque can be observed in bicycles: Bicycles are typically composed of two road wheels, front and rear gears (referred to as sprockets) meshing with a chain, and a derailleur mechanism if the bicycle's transmission system allows multiple gear ratios to be used (i.e. multi-speed bicycle), all of which attached to the frame. A cyclist, the person who rides the bicycle, provides the input power by turning pedals, thereby cranking the front sprocket (commonly referred to as chainring). The input power provided by the cyclist is equal to the product of angular speed (i.e. the number of pedal revolutions per minute times 2π) and the torque at the spindle of the bicycle's crankset. The bicycle's drivetrain transmits the input power to the road wheel, which in turn conveys the received power to the road as the output power of the bicycle. Depending on the gear ratio of the bicycle, a (torque, angular speed)input pair is converted to a (torque, angular speed)output pair. By using a larger rear gear, or by switching to a lower gear in multi-speed bicycles, angular speed of the road wheels is decreased while the torque is increased, product of which (i.e. power) does not change. === Torque multiplier === Torque can be multiplied via three methods: by locating the fulcrum such that the length of a lever is increased; by using a longer lever; or by the use of a speed-reducing gearset or gear box. Such a mechanism multiplies torque, as rotation rate is reduced. == See also == == References == == External links == "Horsepower and Torque" Archived 2007-03-28 at the Wayback Machine An article showing how power, torque, and gearing affect a vehicle's performance. Torque and Angular Momentum in Circular Motion on Project PHYSNET. An interactive simulation of torque Torque Unit Converter A feel for torque Archived 2021-05-08 at the Wayback Machine An order-of-magnitude interactive.
Wikipedia/Moment_of_force
Varignon's theorem is a theorem of French mathematician Pierre Varignon (1654–1722), published in 1687 in his book Projet d'une nouvelle mécanique. The theorem states that the torque of a resultant of two concurrent forces about any point is equal to the algebraic sum of the torques of its components about the same point. In other words, "If many concurrent forces are acting on a body, then the algebraic sum of torques of all the forces about a point in the plane of the forces is equal to the torque of their resultant about the same point." == Proof == Consider a set of N {\displaystyle N} force vectors f 1 , f 2 , . . . , f N {\displaystyle \mathbf {f} _{1},\mathbf {f} _{2},...,\mathbf {f} _{N}} that concur at a point O {\displaystyle \mathbf {O} } in space. Their resultant is: F = ∑ i = 1 N f i {\displaystyle \mathbf {F} =\sum _{i=1}^{N}\mathbf {f} _{i}} . The torque of each vector with respect to some other point O 1 {\displaystyle \mathbf {O} _{1}} is T O 1 f i = ( O − O 1 ) × f i {\displaystyle \mathbf {\mathrm {T} } _{O_{1}}^{\mathbf {f} _{i}}=(\mathbf {O} -\mathbf {O} _{1})\times \mathbf {f} _{i}} . Adding up the torques and pulling out the common factor ( O − O 1 ) {\displaystyle (\mathbf {O} -\mathbf {O_{1}} )} , one sees that the result may be expressed solely in terms of F {\displaystyle \mathbf {F} } , and is in fact the torque of F {\displaystyle \mathbf {F} } with respect to the point O 1 {\displaystyle \mathbf {O} _{1}} : ∑ i = 1 N T O 1 f i = ( O − O 1 ) × ( ∑ i = 1 N f i ) = ( O − O 1 ) × F = T O 1 F {\displaystyle \sum _{i=1}^{N}\mathbf {\mathrm {T} } _{O_{1}}^{\mathbf {f} _{i}}=(\mathbf {O} -\mathbf {O} _{1})\times \left(\sum _{i=1}^{N}\mathbf {f} _{i}\right)=(\mathbf {O} -\mathbf {O} _{1})\times \mathbf {F} =\mathbf {\mathrm {T} } _{O_{1}}^{\mathbf {F} }} . This proves the theorem, i.e. that the sum of the torques about O 1 {\displaystyle \mathbf {O} _{1}} is the same as the torque of the sum of the forces about the same point. == References == == External links == Varignon's Theorem at TheFreeDictionary.com
Wikipedia/Varignon's_theorem_(mechanics)
In theoretical physics, the matrix theory is a quantum mechanical model proposed in 1997 by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind; it is also known as BFSS matrix model, after the authors' initials. == Overview == This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting. The BFSS matrix model is also considered the worldvolume theory of a large number of D0-branes in Type IIA string theory. == Noncommutative geometry == In geometry, it is often useful to introduce coordinates. For example, in order to study the geometry of the Euclidean plane, one defines the coordinates x and y as the distances between any point in the plane and a pair of axes. In ordinary geometry, the coordinates of a point are numbers, so they can be multiplied, and the product of two coordinates does not depend on the order of multiplication. That is, xy = yx. This property of multiplication is known as the commutative law, and this relationship between geometry and the commutative algebra of coordinates is the starting point for much of modern geometry. Noncommutative geometry is a branch of mathematics that attempts to generalize this situation. Rather than working with ordinary numbers, one considers some similar objects, such as matrices, whose multiplication does not satisfy the commutative law (that is, objects for which xy is not necessarily equal to yx). One imagines that these noncommuting objects are coordinates on some more general notion of "space" and proves theorems about these generalized spaces by exploiting the analogy with ordinary geometry. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which the coordinates on spacetime do not satisfy the commutativity property. This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories. == Related models == Another notable matrix model capturing aspects of Type IIB string theory, the IKKT matrix model, was constructed in 1996–97 by N. Ishibashi, H. Kawai, Y. Kitazawa, A. Tsuchiya. Recently, the relationship to Nambu dynamics is discussed.(see Nambu dynamics#Quantization) == See also == Matrix string theory == Notes == == References == Banks, Tom; Fischler, Willy; Schenker, Stephen; Susskind, Leonard (1997). "M theory as a matrix model: A conjecture". Physical Review D. 55 (8): 5112–5128. arXiv:hep-th/9610043. Bibcode:1997PhRvD..55.5112B. doi:10.1103/physrevd.55.5112. S2CID 13073785. Connes, Alain (1994). Noncommutative Geometry. Academic Press. ISBN 978-0-12-185860-5. Connes, Alain; Douglas, Michael; Schwarz, Albert (1998). "Noncommutative geometry and matrix theory". Journal of High Energy Physics. 19981 (2): 003. arXiv:hep-th/9711162. Bibcode:1998JHEP...02..003C. doi:10.1088/1126-6708/1998/02/003. S2CID 7562354. 
Nekrasov, Nikita; Schwarz, Albert (1998). "Instantons on noncommutative R4 and (2,0) superconformal six dimensional theory". Communications in Mathematical Physics. 198 (3): 689–703. arXiv:hep-th/9802068. Bibcode:1998CMaPh.198..689N. doi:10.1007/s002200050490. S2CID 14125789. Seiberg, Nathan; Witten, Edward (1999). "String Theory and Noncommutative Geometry". Journal of High Energy Physics. 1999 (9): 032. arXiv:hep-th/9908142. Bibcode:1999JHEP...09..032S. doi:10.1088/1126-6708/1999/09/032. S2CID 668885.
Wikipedia/BFSS_matrix_model
In physics, two objects are said to be coupled when they are interacting with each other. In classical mechanics, coupling is a connection between two oscillating systems, such as pendulums connected by a spring. The connection affects the oscillatory pattern of both objects. In particle physics, two particles are coupled if they are connected by one of the four fundamental forces. == Wave mechanics == === Coupled harmonic oscillator === If two waves are able to transmit energy to each other, then these waves are said to be "coupled." This normally occurs when the waves share a common component. An example of this is two pendulums connected by a spring. If the pendulums are identical, then their equations of motion are given by m x ¨ = − m g x l 1 − k ( x − y ) {\displaystyle m{\ddot {x}}=-mg{\frac {x}{l_{1}}}-k(x-y)} m y ¨ = − m g y l 2 + k ( x − y ) {\displaystyle m{\ddot {y}}=-mg{\frac {y}{l_{2}}}+k(x-y)} These equations represent the simple harmonic motion of the pendulum with an added coupling factor of the spring. This behavior is also seen in certain molecules (such as CO2 and H2O), wherein two of the atoms will vibrate around a central one in a similar manner. === Coupled LC circuits === In LC circuits, charge oscillates between the capacitor and the inductor and can therefore be modeled as a simple harmonic oscillator. When the magnetic flux from one inductor is able to affect the inductance of an inductor in an unconnected LC circuit, the circuits are said to be coupled. The coefficient of coupling k defines how closely the two circuits are coupled and is given by the equation M L p L s = k {\displaystyle {\frac {M}{\sqrt {L_{p}L_{s}}}}=k} where M is the mutual inductance of the circuits and Lp and Ls are the inductances of the primary and secondary circuits, respectively. If the flux lines of the primary inductor thread every line of the secondary one, then the coefficient of coupling is 1 and M = L p L s {\textstyle M={\sqrt {L_{p}L_{s}}}} In practice, however, there is often leakage, so most systems are not perfectly coupled. == Chemistry == === Spin-spin coupling === Spin-spin coupling occurs when the magnetic field of one atom affects the magnetic field of another nearby atom. This is very common in NMR imaging. If the atoms are not coupled, then there will be two individual peaks, known as a doublet, representing the individual atoms. If coupling is present, then there will be a triplet, one larger peak with two smaller ones to either side. This occurs due to the spins of the individual atoms oscillating in tandem. == Astrophysics == Objects in space which are coupled to each other are under the mutual influence of each other's gravity. For instance, the Earth is coupled to both the Sun and the Moon, as it is under the gravitational influence of both. Common in space are binary systems, two objects gravitationally coupled to each other. Examples of this are binary stars which circle each other. Multiple objects may also be coupled to each other simultaneously, such as with globular clusters and galaxy groups. When smaller particles, such as dust, which are coupled together over time accumulate into much larger objects, accretion is occurring. This is the major process by which stars and planets form. == Plasma == The coupling constant of a plasma is given by the ratio of its average Coulomb-interaction energy to its average kinetic energy—or how strongly the electric force of each atom holds the plasma together. 
Plasmas can therefore be categorized into weakly- and strongly-coupled plasmas depending upon the value of this ratio. Many of the typical classical plasmas, such as the plasma in the solar corona, are weakly coupled, while the plasma in a white dwarf star is an example of a strongly coupled plasma. == Quantum mechanics == Two coupled quantum systems can be modeled by a Hamiltonian of the form H ^ = H ^ a + H ^ b + V ^ a b {\displaystyle {\hat {H}}={\hat {H}}_{a}+{\hat {H}}_{b}+{\hat {V}}_{ab}} which is the addition of the two Hamiltonians in isolation with an added interaction factor. In most simple systems, H ^ a {\displaystyle {\hat {H}}_{a}} and H ^ b {\displaystyle {\hat {H}}_{b}} can be solved exactly while V ^ a b {\displaystyle {\hat {V}}_{ab}} can be solved through perturbation theory. If the two systems have similar total energy, then the system may undergo Rabi oscillation. === Angular momentum coupling === When angular momenta from two separate sources interact with each other, they are said to be coupled. For example, two electrons orbiting around the same nucleus may have coupled angular momenta. Due to the conservation of angular momentum and the nature of the angular momentum operator, the total angular momentum is always the sum of the individual angular momenta of the electrons, or J = J 1 + J 2 {\displaystyle \mathbf {J} =\mathbf {J_{1}} +\mathbf {J_{2}} } Spin-Orbit interaction (also known as spin-orbit coupling) is a special case of angular momentum coupling. Specifically, it is the interaction between the intrinsic spin of a particle, S, and its orbital angular momentum, L. As they are both forms of angular momentum, they must be conserved. Even if energy is transferred between the two, the total angular momentum, J, of the system must be constant, J = L + S {\displaystyle \mathbf {J} =\mathbf {L} +\mathbf {S} } . == Particle physics and quantum field theory == Particles which interact with each other are said to be coupled. This interaction is caused by one of the fundamental forces, whose strengths are usually given by a dimensionless coupling constant. In quantum electrodynamics, this value is known as the fine-structure constant α, approximately equal to 1/137. For quantum chromodynamics, the constant changes with respect to the distance between the particles. This phenomenon is known as asymptotic freedom. Forces which have a coupling constant greater than 1 are said to be "strongly coupled" while those with constants less than 1 are said to be "weakly coupled." == References ==
Wikipedia/Coupling_(physics)
In theoretical physics, supergravity (supergravity theory; SUGRA for short) is a modern field theory that combines the principles of supersymmetry and general relativity; this is in contrast to non-gravitational supersymmetric theories such as the Minimal Supersymmetric Standard Model. Supergravity is the gauge theory of local supersymmetry. Since the supersymmetry (SUSY) generators form, together with the Poincaré algebra, a superalgebra called the super-Poincaré algebra, supersymmetry as a gauge theory makes gravity arise in a natural way. == Gravitons == Like all covariant approaches to quantum gravity, supergravity contains a spin-2 field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner. This field has spin 3/2 and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries. == History == === Gauge supersymmetry === The first theory of local supersymmetry was proposed by Dick Arnowitt and Pran Nath in 1975 and was called gauge supersymmetry. === Supergravity === The first model of 4-dimensional supergravity (though not yet under that name) was formulated by Dmitri Vasilievich Volkov and Vyacheslav A. Soroka in 1973, emphasizing the importance of spontaneous supersymmetry breaking for the possibility of a realistic model. The minimal version of 4-dimensional supergravity (with unbroken local supersymmetry) was constructed in detail in 1976 by Dan Freedman, Sergio Ferrara and Peter van Nieuwenhuizen. In 2019 the three were awarded a special Breakthrough Prize in Fundamental Physics for the discovery. The key issue of whether or not the spin 3/2 field is consistently coupled was resolved in a nearly simultaneous paper by Deser and Zumino, who independently proposed the minimal 4-dimensional model. It was quickly generalized to many different theories in various numbers of dimensions and involving additional (N) supersymmetries. Supergravity theories with N>1 are usually referred to as extended supergravity (SUEGRA). Some supergravity theories were shown to be related to certain higher-dimensional supergravity theories via dimensional reduction (e.g. N=1, 11-dimensional supergravity is dimensionally reduced on T7 to 4-dimensional, ungauged, N = 8 supergravity). The resulting theories were sometimes referred to as Kaluza–Klein theories, since Kaluza and, later, Klein constructed a 5-dimensional gravitational theory whose 4-dimensional massless modes, when the theory is dimensionally reduced on a circle, describe electromagnetism coupled to gravity. === mSUGRA === mSUGRA means minimal SUper GRAvity. The construction of a realistic model of particle interactions within the N = 1 supergravity framework, where supersymmetry (SUSY) is broken by a super Higgs mechanism, was carried out by Ali Chamseddine, Richard Arnowitt and Pran Nath in 1982. In these models, collectively now known as minimal supergravity Grand Unification Theories (mSUGRA GUT), gravity mediates the breaking of SUSY through the existence of a hidden sector. mSUGRA naturally generates the soft SUSY-breaking terms, which are a consequence of the super Higgs effect. Radiative breaking of electroweak symmetry through Renormalization Group Equations (RGEs) follows as an immediate consequence. Due to its predictive power, requiring only four input parameters and a sign to determine the low energy phenomenology from the scale of Grand Unification, it is widely investigated in particle physics. 
=== 11D: the maximal SUGRA === One of these supergravities, the 11-dimensional theory, generated considerable excitement as the first potential candidate for the theory of everything. This excitement was built on four pillars, two of which have now been largely discredited: Werner Nahm showed that 11 is the largest number of dimensions consistent with a single graviton; more dimensions would give rise to particles with spins greater than 2. However, if two of these dimensions are time-like, these problems are avoided in 12 dimensions, a point emphasized by Itzhak Bars. In 1981 Ed Witten showed that 11 is the smallest number of dimensions large enough to contain the gauge groups of the Standard Model, namely SU(3) for the strong interactions and SU(2) times U(1) for the electroweak interactions. Many techniques exist to embed the Standard Model gauge group in supergravity in any number of dimensions, such as the obligatory gauge symmetry in type I and heterotic string theories, or the gauge symmetry obtained in type II string theory by compactification on certain Calabi–Yau manifolds. D-branes can engineer gauge symmetries as well. In 1978 Eugène Cremmer, Bernard Julia and Joël Scherk (CJS) found the classical action for an 11-dimensional supergravity theory. This remains today the only known classical 11-dimensional theory with local supersymmetry and no fields of spin higher than two. Other known 11-dimensional theories, while quantum-mechanically inequivalent, reduce to the CJS theory when one imposes the classical equations of motion. However, in the mid-1980s Bernard de Wit and Hermann Nicolai found an alternate theory, D=11 Supergravity with Local SU(8) Invariance. While not manifestly Lorentz-invariant, it is in many ways superior, because it dimensionally reduces to the 4-dimensional theory without recourse to the classical equations of motion. In 1980 Peter Freund and M. A. Rubin showed that compactification from 11 dimensions preserving all the SUSY generators could occur in two ways, leaving only 4 or 7 macroscopic dimensions, the others compact. The noncompact dimensions have to form an anti-de Sitter space. There are many possible compactifications, but the Freund–Rubin compactification's invariance under all of the supersymmetry transformations preserves the action. Finally, the first two results each appeared to establish 11 dimensions, the third result appeared to specify the theory, and the last result explained why the observed universe appears to be four-dimensional. Many of the details of the theory were fleshed out by Peter van Nieuwenhuizen, Sergio Ferrara and Daniel Z. Freedman. === The end of the SUGRA era === The initial excitement over 11-dimensional supergravity soon waned, as various failings were discovered, and attempts to repair the model failed as well. Problems included: The compact manifolds which were known at the time and which contained the standard model were not compatible with supersymmetry, and could not hold quarks or leptons. One suggestion was to replace the compact dimensions with the 7-sphere, with the symmetry group SO(8), or the squashed 7-sphere, with symmetry group SO(5) times SU(2). Until recently, the physical neutrinos seen in experiments were believed to be massless, and appeared to be left-handed, a phenomenon referred to as the chirality of the Standard Model. 
It was very difficult to construct a chiral fermion from a compactification: the compactified manifold needed to have singularities, but physics near singularities did not begin to be understood until the advent of orbifold conformal field theories in the late 1980s. Supergravity models generically result in an unrealistically large cosmological constant in four dimensions, and that constant is difficult to remove, so such models require fine-tuning. This is still a problem today. Quantization of the theory led to quantum field theory gauge anomalies rendering the theory inconsistent. In the intervening years physicists have learned how to cancel these anomalies. Some of these difficulties could be avoided by moving to a 10-dimensional theory involving superstrings. However, by moving to 10 dimensions one loses the sense of uniqueness of the 11-dimensional theory. The core breakthrough for the 10-dimensional theory, known as the first superstring revolution, was a demonstration by Michael B. Green, John H. Schwarz and David Gross that there are only three supergravity models in 10 dimensions which have gauge symmetries and in which all of the gauge and gravitational anomalies cancel. These were theories built on the groups SO(32) and E8 × E8, the direct product of two copies of E8. Today we know that, using D-branes for example, gauge symmetries can be introduced in other 10-dimensional theories as well. === The second superstring revolution === Initial excitement about the 10-dimensional theories, and the string theories that provide their quantum completion, died by the end of the 1980s. There were too many Calabi–Yaus to compactify on, many more than Yau had estimated, as he admitted in December 2005 at the 23rd International Solvay Conference in Physics. None quite gave the standard model, but it seemed as though one could get close with enough effort in many distinct ways. Plus no one understood the theory beyond the regime of applicability of string perturbation theory. There was a comparatively quiet period at the beginning of the 1990s; however, several important tools were developed. For example, it became apparent that the various superstring theories were related by "string dualities", some of which relate weak string-coupling (perturbative) physics in one model to strong string-coupling (non-perturbative) physics in another. Then the second superstring revolution occurred. Joseph Polchinski realized that obscure string theory objects, called D-branes, which he had discovered six years earlier, are the stringy versions of the p-branes known in supergravity theories. These p-branes are not captured by string perturbation theory; thanks to supersymmetry, however, p-branes in supergravity could be understood well beyond the limits of string theory. Armed with this new nonperturbative tool, Edward Witten and many others could show that all of the perturbative string theories are descriptions of different states in a single theory that Witten named M-theory. Furthermore, he argued that in M-theory's long-wavelength limit, i.e. when the quantum wavelength associated to objects in the theory appears much larger than the size of the 11th dimension, the theory is described by the 11-dimensional supergravity that fell out of favor with the first superstring revolution 10 years earlier, accompanied by its 2- and 5-branes. 
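A small numerical aside on the two anomaly-free gauge groups just mentioned: the Green–Schwarz anomaly analysis singles out gauge groups with 496 generators, and both SO(32) and E8 × E8 have exactly that dimension. The sketch below only verifies the dimension counting; it is an illustration, not part of the anomaly analysis itself.

```python
def dim_so(n: int) -> int:
    """Dimension of the Lie group SO(n): n(n-1)/2 generators."""
    return n * (n - 1) // 2

DIM_E8 = 248  # dimension of the exceptional Lie group E8

dim_so32 = dim_so(32)
dim_e8xe8 = 2 * DIM_E8

print(f"dim SO(32)  = {dim_so32}")   # 496
print(f"dim E8 x E8 = {dim_e8xe8}")  # 496
assert dim_so32 == dim_e8xe8 == 496
```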
Therefore, supergravity comes full circle and uses a common framework in understanding features of string theories, M-theory, and their compactifications to lower spacetime dimensions. == Relation to superstrings == The 10-dimensional supergravity theories are often referred to as the "low energy limits" of the 10-dimensional string theories; they arise as the massless, tree-level approximation of those string theories. True effective field theories of string theories, rather than truncations, are rarely available. Due to string dualities, the conjectured 11-dimensional M-theory is required to have 11-dimensional supergravity as a "low energy limit". However, this doesn't necessarily mean that string theory/M-theory is the only possible UV completion of supergravity; supergravity research is useful independent of those relations. == 4D N = 1 SUGRA == Before we move on to SUGRA proper, let's recapitulate some important details about general relativity. We have a 4D differentiable manifold M with a Spin(3,1) principal bundle over it. This principal bundle represents the local Lorentz symmetry. In addition, we have a vector bundle T over the manifold with the fiber having four real dimensions and transforming as a vector under Spin(3,1). We have an invertible linear map from the tangent bundle TM to T. This map is the vierbein. The local Lorentz symmetry has a gauge connection associated with it, the spin connection. The following discussion will be in superspace notation, as opposed to the component notation, which isn't manifestly covariant under SUSY. There are actually many different versions of SUGRA out there which are inequivalent in the sense that their actions and constraints upon the torsion tensor are different, but ultimately equivalent in that we can always perform a field redefinition of the supervierbeins and spin connection to get from one version to another. In 4D N=1 SUGRA, we have a 4|4 real differentiable supermanifold M, i.e. we have 4 real bosonic dimensions and 4 real fermionic dimensions. As in the nonsupersymmetric case, we have a Spin(3,1) principal bundle over M. We have an R4|4 vector bundle T over M. The fiber of T transforms under the local Lorentz group as follows: the four real bosonic dimensions transform as a vector and the four real fermionic dimensions transform as a Majorana spinor. This Majorana spinor can be reexpressed as a complex left-handed Weyl spinor and its complex conjugate right-handed Weyl spinor (they're not independent of each other). We also have a spin connection as before. We will use the following conventions: the spatial (both bosonic and fermionic) indices will be indicated by M, N, ... . The bosonic spatial indices will be indicated by μ, ν, ..., the left-handed Weyl spatial indices by α, β, ..., and the right-handed Weyl spatial indices by {\displaystyle {\dot {\alpha }}}, {\displaystyle {\dot {\beta }}}, ... . The indices for the fiber of T will follow a similar notation, except that they will be hatted like this: {\displaystyle {\hat {M}},{\hat {\alpha }}}. See van der Waerden notation for more details. {\displaystyle M=(\mu ,\alpha ,{\dot {\alpha }})}. The supervierbein is denoted by {\displaystyle e_{N}^{\hat {M}}}, and the spin connection by {\displaystyle \omega _{{\hat {M}}{\hat {N}}P}}. The inverse supervierbein is denoted by {\displaystyle E_{\hat {M}}^{N}}. 
The supervierbein and spin connection are real in the sense that they satisfy the reality conditions {\displaystyle e_{N}^{\hat {M}}(x,{\overline {\theta }},\theta )^{*}=e_{N^{*}}^{{\hat {M}}^{*}}(x,\theta ,{\overline {\theta }})} where {\displaystyle \mu ^{*}=\mu }, {\displaystyle \alpha ^{*}={\dot {\alpha }}}, and {\displaystyle {\dot {\alpha }}^{*}=\alpha }, and {\displaystyle \omega (x,{\overline {\theta }},\theta )^{*}=\omega (x,\theta ,{\overline {\theta }})}. The covariant derivative is defined as {\displaystyle D_{\hat {M}}f=E_{\hat {M}}^{N}\left(\partial _{N}f+\omega _{N}[f]\right)}. The covariant exterior derivative as defined over supermanifolds needs to be super graded. This means that every time we interchange two fermionic indices, we pick up a +1 sign factor, instead of -1. The presence or absence of R symmetries is optional, but if R-symmetry exists, the integrand over the full superspace has to have an R-charge of 0 and the integrand over chiral superspace has to have an R-charge of 2. A chiral superfield X is a superfield which satisfies {\displaystyle {\overline {D}}_{\hat {\dot {\alpha }}}X=0}. In order for this constraint to be consistent, we require the integrability conditions that {\displaystyle \left\{{\overline {D}}_{\hat {\dot {\alpha }}},{\overline {D}}_{\hat {\dot {\beta }}}\right\}=c_{{\hat {\dot {\alpha }}}{\hat {\dot {\beta }}}}^{\hat {\dot {\gamma }}}{\overline {D}}_{\hat {\dot {\gamma }}}} for some coefficients c. Unlike nonSUSY GR, the torsion has to be nonzero, at least with respect to the fermionic directions. Already, even in flat superspace, {\displaystyle D_{\hat {\alpha }}e_{\hat {\dot {\alpha }}}+{\overline {D}}_{\hat {\dot {\alpha }}}e_{\hat {\alpha }}\neq 0}. In one version of SUGRA (but certainly not the only one), we have the following constraints upon the torsion tensor: {\displaystyle T_{{\hat {\underline {\alpha }}}{\hat {\underline {\beta }}}}^{\hat {\underline {\gamma }}}=0}, {\displaystyle T_{{\hat {\alpha }}{\hat {\beta }}}^{\hat {\mu }}=0}, {\displaystyle T_{{\hat {\dot {\alpha }}}{\hat {\dot {\beta }}}}^{\hat {\mu }}=0}, {\displaystyle T_{{\hat {\alpha }}{\hat {\dot {\beta }}}}^{\hat {\mu }}=2i\sigma _{{\hat {\alpha }}{\hat {\dot {\beta }}}}^{\hat {\mu }}}, {\displaystyle T_{{\hat {\mu }}{\hat {\underline {\alpha }}}}^{\hat {\nu }}=0}, {\displaystyle T_{{\hat {\mu }}{\hat {\nu }}}^{\hat {\rho }}=0}. Here, {\displaystyle {\underline {\alpha }}} is a shorthand notation to mean the index runs over either the left or right Weyl spinors. The superdeterminant of the supervierbein, {\displaystyle \left|e\right|}, gives us the volume factor for M. Equivalently, we have the volume 4|4-superform {\displaystyle e^{{\hat {\mu }}=0}\wedge \cdots \wedge e^{{\hat {\mu }}=3}\wedge e^{{\hat {\alpha }}=1}\wedge e^{{\hat {\alpha }}=2}\wedge e^{{\hat {\dot {\alpha }}}=1}\wedge e^{{\hat {\dot {\alpha }}}=2}}. 
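As an aside, the statement above that the torsion is nonvanishing already in flat superspace can be made concrete. In standard flat 4D, N = 1 superspace conventions (for example those of Wess and Bagger; signs and factors of 2 vary between references and need not match the particular version of SUGRA sketched here), the supersymmetry-covariant derivatives are {\displaystyle D_{\alpha }={\frac {\partial }{\partial \theta ^{\alpha }}}+i\sigma _{\alpha {\dot {\alpha }}}^{\mu }{\overline {\theta }}^{\dot {\alpha }}\partial _{\mu }} and {\displaystyle {\overline {D}}_{\dot {\alpha }}=-{\frac {\partial }{\partial {\overline {\theta }}^{\dot {\alpha }}}}-i\theta ^{\alpha }\sigma _{\alpha {\dot {\alpha }}}^{\mu }\partial _{\mu }}, with {\displaystyle \left\{D_{\alpha },{\overline {D}}_{\dot {\alpha }}\right\}=-2i\sigma _{\alpha {\dot {\alpha }}}^{\mu }\partial _{\mu }}. The nonvanishing anticommutator is the flat-superspace counterpart of the torsion constraint {\displaystyle T_{{\hat {\alpha }}{\hat {\dot {\beta }}}}^{\hat {\mu }}=2i\sigma _{{\hat {\alpha }}{\hat {\dot {\beta }}}}^{\hat {\mu }}} quoted above, up to convention-dependent signs.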
If we complexify the superdiffeomorphisms, there is a gauge where {\displaystyle E_{\hat {\dot {\alpha }}}^{\mu }=0}, {\displaystyle E_{\hat {\dot {\alpha }}}^{\beta }=0} and {\displaystyle E_{\hat {\dot {\alpha }}}^{\dot {\beta }}=\delta _{\dot {\alpha }}^{\dot {\beta }}}. The resulting chiral superspace has the coordinates x and Θ. R is a scalar-valued chiral superfield derivable from the supervielbeins and spin connection. If f is any superfield, {\displaystyle \left({\bar {D}}^{2}-8R\right)f} is always a chiral superfield. The action for a SUGRA theory with chiral superfields X is given by {\displaystyle S=\int d^{4}x\,d^{2}\Theta \,2{\mathcal {E}}\left[{\frac {3}{8}}\left({\bar {D}}^{2}-8R\right)e^{-K({\bar {X}},X)/3}+W(X)\right]+c.c.} where K is the Kähler potential and W is the superpotential, and {\displaystyle {\mathcal {E}}} is the chiral volume factor. Unlike the case for flat superspace, adding a constant to either the Kähler potential or the superpotential is now physical. A constant shift to the Kähler potential changes the effective Planck mass, while a constant shift to the superpotential changes the effective cosmological constant. As the effective Planck mass now depends upon the value of the chiral superfield X, we need to rescale the supervierbeins (a field redefinition) to get a constant Planck mass. This is called the Einstein frame. == N = 8 supergravity in 4 dimensions == N = 8 supergravity is the most symmetric quantum field theory which involves gravity and a finite number of fields. It can be found from a dimensional reduction of 11D supergravity by making the size of 7 of the dimensions go to zero. It has 8 supersymmetries, which is the most any gravitational theory can have, since there are 8 half-steps between spin 2 and spin −2. (The graviton, a spin-2 particle, has the highest spin in this theory.) More supersymmetries would mean the particles would have superpartners with spins higher than 2. The only theories with spins higher than 2 which are consistent involve an infinite number of particles (such as string theory and higher-spin theories). Stephen Hawking in his A Brief History of Time speculated that this theory could be the Theory of Everything. However, in later years this was abandoned in favour of string theory. There has been renewed interest in the 21st century with the possibility that this theory may be finite. == Higher-dimensional SUGRA == Higher-dimensional SUGRA is the higher-dimensional, supersymmetric generalization of general relativity. Supergravity can be formulated in any number of dimensions up to eleven. Higher-dimensional SUGRA focuses upon supergravity in greater than four dimensions. The number of supercharges in a spinor depends on the dimension and the signature of spacetime. Since the supercharges occur in spinors, the limit on the number of supercharges cannot be satisfied in a spacetime of arbitrary dimension. Some theoretical examples in which this is satisfied are: the 12-dimensional two-time theory; 11-dimensional maximal supergravity; the 10-dimensional supergravity theories (type IIA supergravity with N = (1, 1), type IIB supergravity with N = (2, 0), and type I supergravity with N = (1, 0)); and the 9d supergravity theories (maximal 9d supergravity from 10d T-duality, and N = 1 gauged supergravity). The supergravity theories that have attracted the most interest contain no spins higher than two. 
This means, in particular, that they do not contain any fields that transform as symmetric tensors of rank higher than two under Lorentz transformations. The consistency of interacting higher spin field theories is, however, presently a field of very active interest. == See also == == References == == Bibliography == === Historical === Volkov, D.V.; Soroka, V.A (1973). "Higgs effect for goldstone particles with spin 1/2". Supersymmetry and Quantum Field Theory. Lecture Notes in Physics. Vol. 18. pp. 529–533. Bibcode:1973JETPL..18..312V. doi:10.1007/BFb0105271. ISBN 978-3-540-64623-5. {{cite book}}: |journal= ignored (help) Nath, P.; Arnowitt, R. (1975). "Generalized super-gauge symmetry as a new framework for unified gauge theories". Physics Letters B. 56 (2): 177. Bibcode:1975PhLB...56..177N. doi:10.1016/0370-2693(75)90297-x. Freedman, D.Z.; van Nieuwenhuizen, P.; Ferrara, S. (1976). "Progress toward a theory of supergravity". Physical Review D. 13 (12): 3214–3218. Bibcode:1976PhRvD..13.3214F. doi:10.1103/physrevd.13.3214. Cremmer, E.; Julia, B.; Scherk, J. (1978). "Supergravity in theory in 11 dimensions". Physics Letters B. 76 (4): 409–412. Bibcode:1978PhLB...76..409C. doi:10.1016/0370-2693(78)90894-8. Freund, P.; Rubin, M. (1980). "Dynamics of dimensional reduction". Physics Letters B. 97 (2): 233–235. Bibcode:1980PhLB...97..233F. doi:10.1016/0370-2693(80)90590-0. Chamseddine, A. H.; Arnowitt, R.; Nath, Pran (1982). "Locally supersymmetric grand unification". Physical Review Letters. 49 (14): 970–974. Bibcode:1982PhRvL..49..970C. doi:10.1103/PhysRevLett.49.970. Green, Michael B.; Schwarz, John H. (1984). "Anomaly cancellation in supersymmetric D = 10 gauge theory and superstring theory". Physics Letters B. 149 (1–3): 117–122. Bibcode:1984PhLB..149..117G. doi:10.1016/0370-2693(84)91565-x. Deser, S. (2018). "A brief history (and geography) of supergravity: The first 3 weeks... and after" (PDF). The European Physical Journal H. 43 (3): 281–291. arXiv:1704.05886. Bibcode:2018EPJH...43..281D. doi:10.1140/epjh/e2018-90005-3. S2CID 119428513. Duplij, S. (2019). "Supergravity was discovered by D.V. Volkov and V.A. Soroka in 1973, wasn't it?". East European Journal of Physics (3): 81–82. arXiv:1910.03259. doi:10.26565/2312-4334-2019-3-10. === General === de Wit, Bernard (2002). "Supergravity". arXiv:hep-th/0212245. Pran, Nath (2017). Supersymmetry, Supergravity, and Unification. Cambridge University Press. ISBN 978-0-521-19702-1. Martin, Stephen P. (1998). "A Supersymmetry Primer". In Kane, Gordon L. (ed.). Perspectives on Supersymmetry. Advanced Series on Directions in High Energy Physics. Vol. 18. World Scientific. pp. 1–98. arXiv:hep-ph/9709356. doi:10.1142/9789812839657_0001. ISBN 978-981-02-3553-6. S2CID 118973381. Drees, Manuel; Godbole, Rohini M.; Roy, Probir (2004). Theory and Phenomenology of Sparticles. World Scientific. ISBN 9-810-23739-1. Bilal, Adel (2001). "Introduction to Supersymmetry". arXiv:hep-th/0101055. Brandt, Friedemann (2002). "Lectures on Supergravity". Fortschritte der Physik. 50 (10–11): 1126–1172. arXiv:hep-th/0204035. Bibcode:2002ForPh..50.1126B. doi:10.1002/1521-3978(200210)50:10/11<1126::AID-PROP1126>3.0.CO;2-B. S2CID 15471713. Sezgin, Ergin (2023). "Survey of supergravities". arXiv:2312.06754 [hep-th]. == Further reading == Dall'Agata, G., Zagermann, M., Supergravity: From First Principles to Modern Applications, Springer, (2021). ISBN 978-3662639788 Freedman, D. Z., Van Proeyen, A., Supergravity, Cambridge University Press, Cambridge, (2012). 
ISBN 978-0521194013 Lauria, E., Van Proeyen, A., N = 2 Supergravity in D = 4, 5, 6 Dimensions, Springer, (2020). ISBN 978-3030337551 Năstase, H., Introduction to Supergravity and Its Applications, (2024). ISBN 978-1009445597 Nath, P., Supersymmetry, Supergravity, and Unification, Cambridge University Press, Cambridge, (2016) ISBN 978-0521197021 Tanii, Y., Introduction to Supergravity, Springer, (2014). ISBN 978-4431548270 Rausch de Traubenberg, M., Valenzuela, M., A Supergravity Primer, World Scientific Press, Singapore, (2019). ISBN 978-9811210518 West, P. C., Introduction To Supersymmetry And Supergravity, World Scientific Press, Singapore, (1990). ISBN 978-9810200985 Wess, J., Bagger, J., Supersymmetry and Supergravity, Princeton University Press, Princeton, (1992). ISBN 978-0691025308 == External links == Quotations related to Supergravity at Wikiquote
In theoretical physics, type II string theory is a unified term that includes both the type IIA and type IIB string theories. Type II string theory accounts for two of the five consistent superstring theories in ten dimensions. Both theories have {\displaystyle {\mathcal {N}}=2} extended supersymmetry, which is the maximal amount of supersymmetry — namely 32 supercharges — in ten dimensions. Both theories are based on oriented closed strings. On the worldsheet, they differ only in the choice of GSO projection. They were first discovered by Michael Green and John Henry Schwarz in 1982, with the terminology of type I and type II coined to classify the three string theories known at the time. == Type IIA string theory == At low energies, type IIA string theory is described by type IIA supergravity in ten dimensions, which is a non-chiral theory (i.e. left–right symmetric) with (1,1) d=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore trivial. In the 1990s it was realized by Edward Witten (building on previous insights by Michael Duff, Paul Townsend, and others) that the limit of type IIA string theory in which the string coupling goes to infinity becomes a new 11-dimensional theory called M-theory. Consequently the low energy type IIA supergravity theory can also be derived from the unique maximal supergravity theory in 11 dimensions (the low energy version of M-theory) via a dimensional reduction. The content of the massless sector of the theory (which is relevant in the low energy limit) is given by the {\textstyle (8_{v}\oplus 8_{s})\otimes (8_{v}\oplus 8_{c})} representation of SO(8), where {\displaystyle 8_{v}} is the irreducible vector representation, and {\displaystyle 8_{c}} and {\displaystyle 8_{s}} are the irreducible representations with odd and even eigenvalues of the fermionic parity operator, often called the co-spinor and spinor representations. These three representations enjoy a triality symmetry, which is evident from the Dynkin diagram of SO(8). The four sectors of the massless spectrum after GSO projection and decomposition into irreducible representations are {\displaystyle {\text{NS-NS}}:~8_{v}\otimes 8_{v}=1\oplus 28\oplus 35=\Phi \oplus B_{\mu \nu }\oplus G_{\mu \nu }} {\displaystyle {\text{NS-R}}:8_{v}\otimes 8_{c}=8_{s}\oplus 56_{c}=\lambda ^{+}\oplus \psi _{m}^{-}} {\displaystyle {\text{R-NS}}:8_{c}\otimes 8_{s}=8_{s}\oplus 56_{s}=\lambda ^{-}\oplus \psi _{m}^{+}} {\displaystyle {\text{R-R}}:8_{s}\otimes 8_{c}=8_{v}\oplus 56_{t}=C_{n}\oplus C_{nmp}} where {\displaystyle {\text{R}}} and {\displaystyle {\text{NS}}} stand for the Ramond and Neveu–Schwarz sectors respectively. The numbers denote the dimension of the irreducible representation and, equivalently, the number of components of the corresponding fields. The various massless fields obtained are the graviton {\displaystyle G_{\mu \nu }} with two superpartner gravitinos {\displaystyle \psi _{m}^{\pm }}, which give rise to local spacetime supersymmetry, a scalar dilaton {\displaystyle \Phi } with two superpartner spinors—the dilatinos {\displaystyle \lambda ^{\pm }}, a 2-form gauge field {\displaystyle B_{\mu \nu }} often called the Kalb–Ramond field, a 1-form {\displaystyle C_{n}} and a 3-form {\displaystyle C_{nmp}}. 
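The representation content listed above can be sanity-checked by dimension counting: each sector is a product of two 8-dimensional SO(8) representations, so it must contain 64 states. The sketch below is only a bookkeeping check of the decomposition quoted above, not a derivation of it.

```python
# Dimension bookkeeping for the type IIA massless spectrum (SO(8) little group).
sectors = {
    "NS-NS": [1, 28, 35],  # dilaton, Kalb-Ramond B field, graviton
    "NS-R":  [8, 56],      # dilatino, gravitino
    "R-NS":  [8, 56],      # dilatino, gravitino (opposite chirality)
    "R-R":   [8, 56],      # 1-form C_n, 3-form C_nmp
}

for name, irreps in sectors.items():
    total = sum(irreps)
    print(f"{name:5s}: {' + '.join(map(str, irreps))} = {total}")
    assert total == 8 * 8  # each sector is 8 x 8 = 64 states
```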
Since the p-form gauge fields naturally couple to extended objects with (p+1)-dimensional world-volume, Type IIA string theory naturally incorporates various extended objects like D0, D2, D4 and D6 branes (using Hodge duality) among the D-branes (which are R–R charged) and F1 string and NS5 brane among other objects. The mathematical treatment of type IIA string theory belongs to symplectic topology and algebraic geometry, particularly Gromov–Witten invariants. == Type IIB string theory == At low energies, type IIB string theory is described by type IIB supergravity in ten dimensions which is a chiral theory (left–right asymmetric) with (2,0) d=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore nontrivial. In the 1990s it was realized that type IIB string theory with the string coupling constant g is equivalent to the same theory with the coupling 1/g. This equivalence is known as S-duality. Orientifold of type IIB string theory leads to type I string theory. The mathematical treatment of type IIB string theory belongs to algebraic geometry, specifically the deformation theory of complex structures originally studied by Kunihiko Kodaira and Donald C. Spencer. In 1997 Juan Maldacena gave some arguments indicating that type IIB string theory is equivalent to N = 4 supersymmetric Yang–Mills theory in the 't Hooft limit; it was the first suggestion concerning the AdS/CFT correspondence. == Relationship between the type II theories == In the late 1980s, it was realized that type IIA string theory is related to type IIB string theory by T-duality. == See also == Superstring theory Type I string Heterotic string == References ==
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration was initially used to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Usage of integration expanded to a wide variety of scientific fields thereafter. A definite integral computes the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an antiderivative, a function whose derivative is the given function; in this case, they are also called indefinite integrals. The fundamental theorem of calculus relates definite integration to differentiation and provides a method to compute the definite integral of a function when its antiderivative is known; differentiation and integration are inverse operations. Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more general than Riemann's in the sense that a wider class of functions are Lebesgue-integrable. Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting two points in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. == History == === Pre-calculus integration === The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus and philosopher Democritus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area of a circle, the surface area and volume of a sphere, area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral. A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere. In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. 
Alhazen determined the equations to calculate the area enclosed by the curve represented by y = x k {\displaystyle y=x^{k}} (which translates to the integral ∫ x k d x {\displaystyle \int x^{k}\,dx} in contemporary notation), for any given non-negative integer value of k {\displaystyle k} . He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid. The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of xn up to degree n = 9 in Cavalieri's quadrature formula. The case n = −1 required the invention of a function, the hyperbolic logarithm, achieved by quadrature of the hyperbola in 1647. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers. === Leibniz and Newton === The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions with continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz. === Formalization === While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system. === Historical notation === The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675. He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). 
The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–1820, reprinted in his book of 1822. Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with ẋ or x′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted. === First use of the term === The term was first printed in Latin by Jacob Bernoulli in 1690: "Ergo et horum Integralia aequantur". == Terminology and notation == In general, the integral of a real-valued function f(x) with respect to a real variable x on an interval [a, b] is written as {\displaystyle \int _{a}^{b}f(x)\,dx.} The integral sign ∫ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) is called the integrand, the points a and b are called the limits (or bounds) of integration, and the integral is said to be over the interval [a, b], called the interval of integration. A function is said to be integrable if its integral over its domain is finite. If limits are specified, the integral is called a definite integral. When the limits are omitted, as in {\displaystyle \int f(x)\,dx,} the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article). In advanced settings, it is not uncommon to leave out dx when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write {\textstyle \int _{a}^{b}(c_{1}f+c_{2}g)=c_{1}\int _{a}^{b}f+c_{2}\int _{a}^{b}g} to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof. == Interpretations == Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation. 
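The "divide into pieces and sum" idea described above is easy to sketch numerically. The helper below is a plain right-endpoint Riemann sum (a simplified illustration, not an efficient quadrature routine); with enough subintervals it also exhibits the linearity property quoted earlier. The particular functions are chosen only for illustration.

```python
def riemann_sum(f, a: float, b: float, n: int) -> float:
    """Approximate the integral of f over [a, b] with n right-endpoint rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: x ** 2
g = lambda x: x ** 3

n = 100_000
lhs = riemann_sum(lambda x: 2 * f(x) + 3 * g(x), 0.0, 1.0, n)
rhs = 2 * riemann_sum(f, 0.0, 1.0, n) + 3 * riemann_sum(g, 0.0, 1.0, n)

print(lhs, rhs)                # both approach 2*(1/3) + 3*(1/4) = 17/12
assert abs(lhs - rhs) < 1e-9   # linearity holds for the discrete sums as well
```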
As another example, to find the area of the region bounded by the graph of the function f(x) = {\textstyle {\sqrt {x}}} between x = 0 and x = 1, one can divide the interval into five pieces (0, 1/5, 2/5, ..., 1), then construct rectangles using the right end height of each piece (thus √0, √1/5, √2/5, ..., √1) and sum their areas to get the approximation {\displaystyle \textstyle {\sqrt {\frac {1}{5}}}\left({\frac {1}{5}}-0\right)+{\sqrt {\frac {2}{5}}}\left({\frac {2}{5}}-{\frac {1}{5}}\right)+\cdots +{\sqrt {\frac {5}{5}}}\left({\frac {5}{5}}-{\frac {4}{5}}\right)\approx 0.7497,} which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, when the number of pieces increases to infinity, it will reach a limit which is the exact value of the area sought (in this case, 2/3). One writes {\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx={\frac {2}{3}},} which means 2/3 is the result of a weighted sum of function values, √x, multiplied by the infinitesimal step widths, denoted by dx, on the interval [0, 1]. == Formal definitions == There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but are also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals. === Riemann integral === The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. A tagged partition of a closed interval [a, b] on the real line is a finite sequence {\displaystyle a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.} This partitions the interval [a, b] into n sub-intervals [xi−1, xi] indexed by i, each of which is "tagged" with a specific point ti ∈ [xi−1, xi]. A Riemann sum of a function f with respect to such a tagged partition is defined as {\displaystyle \sum _{i=1}^{n}f(t_{i})\,\Delta _{i};} thus each term of the sum is the area of a rectangle with height equal to the function value at the chosen point of the given sub-interval, and width the same as the width of the sub-interval, Δi = xi−xi−1. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, maxi=1...n Δi. The Riemann integral of a function f over the interval [a, b] is equal to S if: For all {\displaystyle \varepsilon >0} there exists {\displaystyle \delta >0} such that, for any tagged partition of {\displaystyle [a,b]} with mesh less than {\displaystyle \delta }, {\displaystyle \left|S-\sum _{i=1}^{n}f(t_{i})\,\Delta _{i}\right|<\varepsilon .} When the chosen tags are the maximum (respectively, minimum) value of the function in each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral. === Lebesgue integral === It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. 
For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated. Such an integral is the Lebesgue integral, that exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel: I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral. As Folland puts it, "To compute the Riemann integral of f, one partitions the domain [a, b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f ". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a, b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals. Using the "partitioning the range of f " philosophy, the integral of a non-negative function f : R → R should be the sum over t of the areas between a thin horizontal strip between y = t and y = t + dt. This area is just μ{ x : f(x) > t} dt. Let f∗(t) = μ{ x : f(x) > t }. The Lebesgue integral of f is then defined by ∫ f = ∫ 0 ∞ f ∗ ( t ) d t {\displaystyle \int f=\int _{0}^{\infty }f^{*}(t)\,dt} where the integral on the right is an ordinary improper Riemann integral (f∗ is a strictly decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral. A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite: ∫ E | f | d μ < + ∞ . {\displaystyle \int _{E}|f|\,d\mu <+\infty .} In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis: ∫ E f d μ = ∫ E f + d μ − ∫ E f − d μ {\displaystyle \int _{E}f\,d\mu =\int _{E}f^{+}\,d\mu -\int _{E}f^{-}\,d\mu } where f + ( x ) = max { f ( x ) , 0 } = { f ( x ) , if f ( x ) > 0 , 0 , otherwise, f − ( x ) = max { − f ( x ) , 0 } = { − f ( x ) , if f ( x ) < 0 , 0 , otherwise. 
{\displaystyle {\begin{alignedat}{3}&f^{+}(x)&&{}={}\max\{f(x),0\}&&{}={}{\begin{cases}f(x),&{\text{if }}f(x)>0,\\0,&{\text{otherwise,}}\end{cases}}\\&f^{-}(x)&&{}={}\max\{-f(x),0\}&&{}={}{\begin{cases}-f(x),&{\text{if }}f(x)<0,\\0,&{\text{otherwise.}}\end{cases}}\end{alignedat}}} === Other integrals === Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including: The Darboux integral, which is defined by Darboux sums (restricted Riemann sums) yet is equivalent to the Riemann integral. A function is Darboux-integrable if and only if it is Riemann-integrable. Darboux integrals have the advantage of being easier to define than Riemann integrals. The Riemann–Stieltjes integral, an extension of the Riemann integral which integrates with respect to a function as opposed to a variable. The Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes both the Riemann–Stieltjes and Lebesgue integrals. The Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without depending on measures. The Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933. The Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock. The Khinchin integral, named after Aleksandr Khinchin. The Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion. The Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation. The rough path integral, which is defined for functions equipped with some additional "rough path" structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion. The Choquet integral, a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953. The Bochner integral, a generalization of the Lebesgue integral to functions that take values in a Banach space. == Properties == === Linearity === The collection of Riemann-integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration f ↦ ∫ a b f ( x ) d x {\displaystyle f\mapsto \int _{a}^{b}f(x)\;dx} is a linear functional on this vector space. Thus, the collection of integrable functions is closed under taking linear combinations, and the integral of a linear combination is the linear combination of the integrals: ∫ a b ( α f + β g ) ( x ) d x = α ∫ a b f ( x ) d x + β ∫ a b g ( x ) d x . {\displaystyle \int _{a}^{b}(\alpha f+\beta g)(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx+\beta \int _{a}^{b}g(x)\,dx.\,} Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the Lebesgue integral f ↦ ∫ E f d μ {\displaystyle f\mapsto \int _{E}f\,d\mu } is a linear functional on this vector space, so that: ∫ E ( α f + β g ) d μ = α ∫ E f d μ + β ∫ E g d μ . 
{\displaystyle \int _{E}(\alpha f+\beta g)\,d\mu =\alpha \int _{E}f\,d\mu +\beta \int _{E}g\,d\mu .} More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞, f ↦ ∫ E f d μ , {\displaystyle f\mapsto \int _{E}f\,d\mu ,\,} that is compatible with linear combinations. In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K = C and V is a complex Hilbert space. Linearity, together with some natural continuity properties and normalization for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See Hildebrandt 1953 for an axiomatic characterization of the integral. === Inequalities === A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell). Upper and lower bounds. An integrable function f on [a, b], is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f (x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that m ( b − a ) ≤ ∫ a b f ( x ) d x ≤ M ( b − a ) . {\displaystyle m(b-a)\leq \int _{a}^{b}f(x)\,dx\leq M(b-a).} Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus ∫ a b f ( x ) d x ≤ ∫ a b g ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx\leq \int _{a}^{b}g(x)\,dx.} This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b]. In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then ∫ a b f ( x ) d x < ∫ a b g ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx<\int _{a}^{b}g(x)\,dx.} Subintervals. If [c, d] is a subinterval of [a, b] and f (x) is non-negative for all x, then ∫ c d f ( x ) d x ≤ ∫ a b f ( x ) d x . {\displaystyle \int _{c}^{d}f(x)\,dx\leq \int _{a}^{b}f(x)\,dx.} Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values: ( f g ) ( x ) = f ( x ) g ( x ) , f 2 ( x ) = ( f ( x ) ) 2 , | f | ( x ) = | f ( x ) | . {\displaystyle (fg)(x)=f(x)g(x),\;f^{2}(x)=(f(x))^{2},\;|f|(x)=|f(x)|.} If f is Riemann-integrable on [a, b] then the same is true for |f|, and | ∫ a b f ( x ) d x | ≤ ∫ a b | f ( x ) | d x . {\displaystyle \left|\int _{a}^{b}f(x)\,dx\right|\leq \int _{a}^{b}|f(x)|\,dx.} Moreover, if f and g are both Riemann-integrable then fg is also Riemann-integrable, and ( ∫ a b ( f g ) ( x ) d x ) 2 ≤ ( ∫ a b f ( x ) 2 d x ) ( ∫ a b g ( x ) 2 d x ) . 
{\displaystyle \left(\int _{a}^{b}(fg)(x)\,dx\right)^{2}\leq \left(\int _{a}^{b}f(x)^{2}\,dx\right)\left(\int _{a}^{b}g(x)^{2}\,dx\right).} This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b]. Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with ⁠1/p⁠ + ⁠1/q⁠ = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds: | ∫ f ( x ) g ( x ) d x | ≤ ( ∫ | f ( x ) | p d x ) 1 / p ( ∫ | g ( x ) | q d x ) 1 / q . {\displaystyle \left|\int f(x)g(x)\,dx\right|\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}\left(\int \left|g(x)\right|^{q}\,dx\right)^{1/q}.} For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality. Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then | f |p, | g |p and | f + g |p are also Riemann-integrable and the following Minkowski inequality holds: ( ∫ | f ( x ) + g ( x ) | p d x ) 1 / p ≤ ( ∫ | f ( x ) | p d x ) 1 / p + ( ∫ | g ( x ) | p d x ) 1 / p . {\displaystyle \left(\int \left|f(x)+g(x)\right|^{p}\,dx\right)^{1/p}\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}+\left(\int \left|g(x)\right|^{p}\,dx\right)^{1/p}.} An analogue of this inequality for Lebesgue integral is used in construction of Lp spaces. === Conventions === In this section, f is a real-valued Riemann-integrable function. The integral ∫ a b f ( x ) d x {\displaystyle \int _{a}^{b}f(x)\,dx} over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition a = x0 ≤ x1 ≤ . . . ≤ xn = b whose values xi are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals [x i , x i +1] where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b: ∫ a b f ( x ) d x = − ∫ b a f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx=-\int _{b}^{a}f(x)\,dx.} With a = b, this implies: ∫ a a f ( x ) d x = 0. {\displaystyle \int _{a}^{a}f(x)\,dx=0.} The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that if c is any element of [a, b], then: ∫ a b f ( x ) d x = ∫ a c f ( x ) d x + ∫ c b f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx=\int _{a}^{c}f(x)\,dx+\int _{c}^{b}f(x)\,dx.} With the first convention, the resulting relation ∫ a c f ( x ) d x = ∫ a b f ( x ) d x − ∫ c b f ( x ) d x = ∫ a b f ( x ) d x + ∫ b c f ( x ) d x {\displaystyle {\begin{aligned}\int _{a}^{c}f(x)\,dx&{}=\int _{a}^{b}f(x)\,dx-\int _{c}^{b}f(x)\,dx\\&{}=\int _{a}^{b}f(x)\,dx+\int _{b}^{c}f(x)\,dx\end{aligned}}} is then well-defined for any cyclic permutation of a, b, and c. 
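Before moving on, the integral form of the Cauchy–Schwarz inequality stated above can be checked on a concrete pair of functions. With f(x) = x and g(x) = x² on [0, 1] all the integrals are elementary, so exact rational arithmetic suffices; this is a spot check, not a proof, and the functions are chosen only for illustration.

```python
from fractions import Fraction

# Exact integrals on [0, 1]: the integral of x^k dx is 1/(k+1).
int_fg = Fraction(1, 4)   # integral of x * x^2 = 1/4
int_f2 = Fraction(1, 3)   # integral of x^2     = 1/3
int_g2 = Fraction(1, 5)   # integral of x^4     = 1/5

lhs = int_fg ** 2         # (1/4)^2   = 1/16
rhs = int_f2 * int_g2     # 1/3 * 1/5 = 1/15

print(lhs, "<=", rhs)     # 1/16 <= 1/15
assert lhs <= rhs         # Cauchy-Schwarz holds (strictly here, since f and g are not proportional)
```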
== Fundamental theorem of calculus == The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated. === First theorem === Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by F ( x ) = ∫ a x f ( t ) d t . {\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.} Then, F is continuous on [a, b], differentiable on the open interval (a, b), and F ′ ( x ) = f ( x ) {\displaystyle F'(x)=f(x)} for all x in (a, b). === Second theorem === Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a, b]. That is, f and F are functions such that for all x in [a, b], f ( x ) = F ′ ( x ) . {\displaystyle f(x)=F'(x).} If f is integrable on [a, b] then ∫ a b f ( x ) d x = F ( b ) − F ( a ) . {\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).} == Extensions == === Improper integrals === A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals. If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity: ∫ a ∞ f ( x ) d x = lim b → ∞ ∫ a b f ( x ) d x . {\displaystyle \int _{a}^{\infty }f(x)\,dx=\lim _{b\to \infty }\int _{a}^{b}f(x)\,dx.} If the integrand is only defined or finite on a half-open interval, for instance (a, b], then again a limit may provide a finite result: ∫ a b f ( x ) d x = lim ε → 0 ∫ a + ϵ b f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx=\lim _{\varepsilon \to 0}\int _{a+\epsilon }^{b}f(x)\,dx.} That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points. === Multiple integration === Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain. For example, a function in two dimensions depends on two real variables, x and y, and the integral of a function f over the rectangle R given as the Cartesian product of two intervals R = [ a , b ] × [ c , d ] {\displaystyle R=[a,b]\times [c,d]} can be written ∫ R f ( x , y ) d A {\displaystyle \int _{R}f(x,y)\,dA} where the differential dA indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of z = f(x,y) over the domain R. Under suitable conditions (e.g., if f is continuous), Fubini's theorem states that this integral can be expressed as an equivalent iterated integral ∫ a b [ ∫ c d f ( x , y ) d y ] d x . 
{\displaystyle \int _{a}^{b}\left[\int _{c}^{d}f(x,y)\,dy\right]\,dx.} This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over R uses a double integral sign: ∬ R f ( x , y ) d A . {\displaystyle \iint _{R}f(x,y)\,dA.} Integration over more general domains is possible. The integral of a function f, with respect to volume, over an n-dimensional region D of R n {\displaystyle \mathbb {R} ^{n}} is denoted by symbols such as: ∫ D f ( x ) d n x = ∫ D f d V . {\displaystyle \int _{D}f(\mathbf {x} )d^{n}\mathbf {x} \ =\int _{D}f\,dV.} === Line integrals and surface integrals === The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces inside higher-dimensional spaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields. A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral. The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as: W = F ⋅ s . {\displaystyle W=\mathbf {F} \cdot \mathbf {s} .} For an object moving along a path C in a vector field F such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from s to s + ds. This gives the line integral W = ∫ C F ⋅ d s . {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} .} A surface integral generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums. For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that a fluid flows through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in unit amount of time. To find the flux, one need to take the dot product of v with the unit surface normal to S at each point, which will give a scalar field, which is integrated over the surface: ∫ S v ⋅ d S . {\displaystyle \int _{S}{\mathbf {v} }\cdot \,d{\mathbf {S} }.} The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. 
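The flux computation just described is easy to approximate numerically for a concrete case. The sketch below integrates the field v(x, y, z) = (x, y, z) over the unit sphere using a midpoint rule on the usual spherical-angle parametrization; for this particular field the exact flux is 4π. The field and the resolution are chosen purely for illustration.

```python
import math

def flux_through_unit_sphere(v, n_theta: int = 400, n_phi: int = 800) -> float:
    """Approximate the surface integral of v . dS over the unit sphere,
    sampling midpoints in the spherical angles (theta, phi)."""
    d_theta = math.pi / n_theta
    d_phi = 2.0 * math.pi / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        sin_t, cos_t = math.sin(theta), math.cos(theta)
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi
            # Point on the unit sphere; here the outward unit normal equals the position vector.
            x, y, z = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
            vx, vy, vz = v(x, y, z)
            total += (vx * x + vy * y + vz * z) * sin_t * d_theta * d_phi
    return total

flux = flux_through_unit_sphere(lambda x, y, z: (x, y, z))
print(flux, 4 * math.pi)           # approximately 12.566...
assert abs(flux - 4 * math.pi) < 1e-3
```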
Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism. === Contour integrals === In complex analysis, the integrand is a complex-valued function of a complex variable z instead of a real function of a real variable x. When a complex function is integrated along a curve γ {\displaystyle \gamma } in the complex plane, the integral is denoted as follows ∫ γ f ( z ) d z . {\displaystyle \int _{\gamma }f(z)\,dz.} This is known as a contour integral. === Integrals of differential forms === A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as: E ( x , y , z ) d x + F ( x , y , z ) d y + G ( x , y , z ) d z {\displaystyle E(x,y,z)\,dx+F(x,y,z)\,dy+G(x,y,z)\,dz} where E, F, G are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials dx, dy, dz measure infinitesimal oriented lengths parallel to the three coordinate axes. A differential two-form is a sum of the form G ( x , y , z ) d x ∧ d y + E ( x , y , z ) d y ∧ d z + F ( x , y , z ) d z ∧ d x . {\displaystyle G(x,y,z)\,dx\wedge dy+E(x,y,z)\,dy\wedge dz+F(x,y,z)\,dz\wedge dx.} Here the basic two-forms d x ∧ d y , d z ∧ d x , d y ∧ d z {\displaystyle dx\wedge dy,dz\wedge dx,dy\wedge dz} measure oriented areas parallel to the coordinate two-planes. The symbol ∧ {\displaystyle \wedge } denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux of E i + F j + G k {\displaystyle E\mathbf {i} +F\mathbf {j} +G\mathbf {k} } . Unlike the cross product, and the three-dimensional vector calculus, the wedge product and the calculus of differential forms makes sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin-Stokes theorem. === Summations === The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time-scale calculus. === Functional integrals === An integration that is performed not over a variable (or, in physics, over a space or time dimension), but over a space of functions, is referred to as a functional integral. == Applications == Integrals are used extensively in many areas. For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range. Moreover, the integral under an entire probability density function must equal 1, which provides a test of whether a function with no negative values could be a density function or not. Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. 
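As a concrete instance of the normalization test just mentioned (the standard normal density and the use of SciPy's quad routine are choices made here for illustration), the improper integral of a candidate density over the whole real line can be checked numerically:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative check: a function with no negative values can only be a
# probability density if its improper integral over the whole real line
# equals 1.  The standard normal density is such a function.
def density(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

total, error_estimate = quad(density, -np.inf, np.inf)
print(total)   # approximately 1.0, so the normalization test is passed
```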
The area of a two-dimensional region can be calculated using the aforementioned definite integral. The volume of a three-dimensional object such as a disc or washer can be computed by disc integration using the equation for the volume of a cylinder, π r 2 h {\displaystyle \pi r^{2}h} , where r {\displaystyle r} is the radius. In the case of a simple disc created by rotating a curve about the x-axis, the radius is given by f(x), and its height is the differential dx. Using an integral with bounds a and b, the volume of the disc is equal to: π ∫ a b f 2 ( x ) d x . {\displaystyle \pi \int _{a}^{b}f^{2}(x)\,dx.} Integrals are also used in physics, in areas like kinematics to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval [ a , b ] {\displaystyle [a,b]} is given by x ( b ) − x ( a ) = ∫ a b v ( t ) d t , {\displaystyle x(b)-x(a)=\int _{a}^{b}v(t)\,dt,} where v ( t ) {\displaystyle v(t)} is the velocity expressed as a function of time. The work done by a force F ( x ) {\displaystyle F(x)} (given as a function of position) from an initial position A {\displaystyle A} to a final position B {\displaystyle B} is: W A → B = ∫ A B F ( x ) d x . {\displaystyle W_{A\rightarrow B}=\int _{A}^{B}F(x)\,dx.} Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states. == Computation == === Analytical === The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F′ = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus, ∫ a b f ( x ) d x = F ( b ) − F ( a ) . {\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).} Sometimes it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, integration by trigonometric substitution, and integration by partial fractions. Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral. Computations of volumes of solids of revolution can usually be done with disk integration or shell integration. Specific results which have been worked out by various techniques are collected in the list of integrals. === Symbolic === Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. 
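Such closed forms can nowadays also be produced automatically. As a small illustration (SymPy is chosen here for convenience and is not prescribed by the text; the integrands are arbitrary examples), a computer algebra system can find an antiderivative and evaluate a definite integral symbolically:

```python
import sympy as sp

# Minimal sketch of symbolic integration with a computer algebra system.
x = sp.symbols('x')

antiderivative = sp.integrate(x**2 * sp.exp(x), x)    # (x**2 - 2*x + 2)*exp(x)
definite = sp.integrate(sp.sin(x), (x, 0, sp.pi))     # 2

print(antiderivative)
print(definite)
```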
With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple. A major mathematical difficulty in symbolic integration is that in many cases, a relatively simple function does not have integrals that can be expressed in closed form involving only elementary functions, including rational and exponential functions, logarithms, trigonometric functions and inverse trigonometric functions, and the operations of multiplication and composition. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary and to compute the integral if it is elementary. However, functions with closed-form expressions for their antiderivatives are the exception, and consequently, computer algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica, Maple and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithms, and exponential functions. Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on). Extending Risch's algorithm to include such functions is possible but challenging and has been an active research subject. More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation. This theory also allows one to compute the definite integral of a D-finite function as the sum of a series given by the first coefficients, and provides an algorithm to compute any coefficient. Rule-based integration systems facilitate integration. Rubi, a rule-based integrator for computer algebra systems, pattern-matches an extensive system of symbolic integration rules to integrate a wide variety of integrands. This system uses over 6600 integration rules to compute integrals. The method of brackets is a generalization of Ramanujan's master theorem that can be applied to a wide range of univariate and multivariate integrals. A set of rules is applied to the coefficients and exponential terms of the integrand's power series expansion to determine the integral. The method is closely related to the Mellin transform. === Numerical === Definite integrals may be approximated using several methods of numerical integration. The rectangle method divides the region under the function into a series of rectangles whose heights are given by function values, multiplies each height by the step width, and sums the resulting areas.
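A minimal sketch of the rectangle method just described (the choice of the left endpoint as sample point and of the test integrand are made here for illustration):

```python
import numpy as np

# Rectangle method: sample the integrand at one point in each subinterval
# (here the left endpoint) and multiply by the step width.
# The test integral, chosen for illustration, is  integral of sin(x) on [0, pi] = 2.
def rectangle_rule(f, a, b, n):
    h = (b - a) / n                      # step width
    x = a + h * np.arange(n)             # left endpoint of each subinterval
    return h * np.sum(f(x))              # sum of rectangle areas

print(rectangle_rule(np.sin, 0.0, np.pi, 1000))   # close to 2
```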
A better approach, the trapezoidal rule, replaces the rectangles used in a Riemann sum with trapezoids. The trapezoidal rule weights the first and last values by one half, then multiplies by the step width to obtain a better approximation. The idea behind the trapezoidal rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further: Simpson's rule approximates the integrand by a piecewise quadratic function. Riemann sums, the trapezoidal rule, and Simpson's rule are examples of a family of quadrature rules called the Newton–Cotes formulas. The degree n Newton–Cotes quadrature rule approximates the function on each subinterval by a degree n polynomial. This polynomial is chosen to interpolate the values of the function on the interval. Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations, and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials. Romberg's method halves the step widths incrementally, giving trapezoid approximations denoted by T(h0), T(h1), and so on, where hk+1 is half of hk. For each new step size, only half the new function values need to be computed; the others carry over from the previous size. It then interpolates a polynomial through the approximations and extrapolates to T(0). Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. An n-point Gaussian method is exact for polynomials of degree up to 2n − 1. The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration. === Mechanical === The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged. === Geometrical === Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square. === Integration by differentiation === Kempf, Jackson and Morales demonstrated mathematical relations that allow an integral to be calculated by means of differentiation. Their calculus involves the Dirac delta function and the partial derivative operator ∂ x {\displaystyle \partial _{x}} . This can also be applied to functional integrals, allowing them to be computed by functional differentiation. == Examples == === Using the fundamental theorem of calculus === The fundamental theorem of calculus allows straightforward calculation of integrals of basic functions: ∫ 0 π sin ( x ) d x = − cos ( x ) | x = 0 x = π = − cos ( π ) − ( − cos ( 0 ) ) = 2. {\displaystyle \int _{0}^{\pi }\sin(x)\,dx=-\cos(x){\big |}_{x=0}^{x=\pi }=-\cos(\pi )-{\big (}-\cos(0){\big )}=2.} == See also == Integral equation – Equations with an unknown function under an integral sign Integral symbol – Mathematical symbol used to denote integrals and antiderivatives Lists of integrals == Notes == == References == == Bibliography == == External links == "Integral", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Online Integral Calculator, Wolfram Alpha. === Online books === Keisler, H. Jerome, Elementary Calculus: An Approach Using Infinitesimals, University of Wisconsin Stroyan, K.
D., A Brief Introduction to Infinitesimal Calculus, University of Iowa Mauch, Sean, Sean's Applied Math Book, CIT, an online textbook that includes a complete introduction to calculus Crowell, Benjamin, Calculus, Fullerton College, an online textbook Garrett, Paul, Notes on First-Year Calculus Hussain, Faraz, Understanding Calculus, an online textbook Johnson, William Woolsey (1909) Elementary Treatise on Integral Calculus, link from HathiTrust. Kowalk, W. P., Integration Theory, University of Oldenburg. A new concept to an old problem. Online textbook Sloughter, Dan, Difference Equations to Differential Equations, an introduction to calculus Numerical Methods of Integration at Holistic Numerical Methods Institute P. S. Wang, Evaluation of Definite Integrals by Symbolic Manipulation (1972) — a cookbook of definite integral techniques
Wikipedia/Integral_(calculus)
In mathematics, a function space is a set of functions between two fixed sets. Often, the domain and/or codomain will have additional structure which is inherited by the function space. For example, the set of functions from any set X into a vector space has a natural vector space structure given by pointwise addition and scalar multiplication. In other scenarios, the function space might inherit a topological or metric structure, hence the name function space. == In linear algebra == Let F be a field and let X be any set. The functions X → F can be given the structure of a vector space over F where the operations are defined pointwise, that is, for any f, g : X → F, any x in X, and any c in F, define ( f + g ) ( x ) = f ( x ) + g ( x ) ( c ⋅ f ) ( x ) = c ⋅ f ( x ) {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(c\cdot f)(x)&=c\cdot f(x)\end{aligned}}} When the domain X has additional structure, one might consider instead the subset (or subspace) of all such functions which respect that structure. For example, if V and also X itself are vector spaces over F, the set of linear maps X → V form a vector space over F with pointwise operations (often denoted Hom(X,V)). One such space is the dual space of X: the set of linear functionals X → F with addition and scalar multiplication defined pointwise. The cardinal dimension of a function space with no extra structure can be found by the Erdős–Kaplansky theorem. == Examples == Function spaces appear in various areas of mathematics: In set theory, the set of functions from X to Y may be denoted {X → Y} or YX. As a special case, the power set of a set X may be identified with the set of all functions from X to {0, 1}, denoted 2X. The set of bijections from X to Y is denoted X ↔ Y {\displaystyle X\leftrightarrow Y} . The factorial notation X! may be used for permutations of a single set X. In functional analysis, the same is seen for continuous linear transformations, including topologies on the vector spaces in the above, and many of the major examples are function spaces carrying a topology; the best known examples include Hilbert spaces and Banach spaces. In functional analysis, the set of all functions from the natural numbers to some set X is called a sequence space. It consists of the set of all possible sequences of elements of X. In topology, one may attempt to put a topology on the space of continuous functions from a topological space X to another one Y, with utility depending on the nature of the spaces. A commonly used example is the compact-open topology, e.g. loop space. Also available is the product topology on the space of set theoretic functions (i.e. not necessarily continuous functions) YX. In this context, this topology is also referred to as the topology of pointwise convergence. In algebraic topology, the study of homotopy theory is essentially that of discrete invariants of function spaces; In the theory of stochastic processes, the basic technical problem is how to construct a probability measure on a function space of paths of the process (functions of time); In category theory, the function space is called an exponential object or map object. 
It appears in one way as the representation canonical bifunctor; but as (single) functor, of type [ X , − ] {\displaystyle [X,-]} , it appears as an adjoint functor to a functor of type − × X {\displaystyle -\times X} on objects; In functional programming and lambda calculus, function types are used to express the idea of higher-order functions In programming more generally, many higher-order function concepts occur with or without explicit typing, such as closures. In domain theory, the basic idea is to find constructions from partial orders that can model lambda calculus, by creating a well-behaved Cartesian closed category. In the representation theory of finite groups, given two finite-dimensional representations V and W of a group G, one can form a representation of G over the vector space of linear maps Hom(V,W) called the Hom representation. == Functional analysis == Functional analysis is organized around adequate techniques to bring function spaces as topological vector spaces within reach of the ideas that would apply to normed spaces of finite dimension. Here we use the real line as an example domain, but the spaces below exist on suitable open subsets Ω ⊆ R n {\displaystyle \Omega \subseteq \mathbb {R} ^{n}} C ( R ) {\displaystyle C(\mathbb {R} )} continuous functions endowed with the uniform norm topology C c ( R ) {\displaystyle C_{c}(\mathbb {R} )} continuous functions with compact support B ( R ) {\displaystyle B(\mathbb {R} )} bounded functions C 0 ( R ) {\displaystyle C_{0}(\mathbb {R} )} continuous functions which vanish at infinity C r ( R ) {\displaystyle C^{r}(\mathbb {R} )} continuous functions that have r continuous derivatives. C ∞ ( R ) {\displaystyle C^{\infty }(\mathbb {R} )} smooth functions C c ∞ ( R ) {\displaystyle C_{c}^{\infty }(\mathbb {R} )} smooth functions with compact support (i.e. the set of bump functions) C ω ( R ) {\displaystyle C^{\omega }(\mathbb {R} )} real analytic functions L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} , for 1 ≤ p ≤ ∞ {\displaystyle 1\leq p\leq \infty } , is the Lp space of measurable functions whose p-norm ‖ f ‖ p = ( ∫ R | f | p ) 1 / p {\textstyle \|f\|_{p}=\left(\int _{\mathbb {R} }|f|^{p}\right)^{1/p}} is finite S ( R ) {\displaystyle {\mathcal {S}}(\mathbb {R} )} , the Schwartz space of rapidly decreasing smooth functions and its continuous dual, S ′ ( R ) {\displaystyle {\mathcal {S}}'(\mathbb {R} )} tempered distributions D ( R ) {\displaystyle D(\mathbb {R} )} compact support in limit topology W k , p {\displaystyle W^{k,p}} Sobolev space of functions whose weak derivatives up to order k are in L p {\displaystyle L^{p}} O U {\displaystyle {\mathcal {O}}_{U}} holomorphic functions linear functions piecewise linear functions continuous functions, compact open topology all functions, space of pointwise convergence Hardy space Hölder space Càdlàg functions, also known as the Skorokhod space Lip 0 ( R ) {\displaystyle {\text{Lip}}_{0}(\mathbb {R} )} , the space of all Lipschitz functions on R {\displaystyle \mathbb {R} } that vanish at zero. 
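As a small, purely illustrative sketch (the interval, the sample functions, and the grid are choices made here), the pointwise operations defined in the linear-algebra section above can be written directly in code, together with a crude grid approximation of the supremum norm discussed in the next section:

```python
import math

# Pointwise vector-space operations on functions X -> F, here with X = [0, 1]
# and F the real numbers.
def add(f, g):
    return lambda x: f(x) + g(x)          # (f + g)(x) = f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)             # (c . f)(x) = c * f(x)

f = math.sin
g = math.cos
h = add(scale(2.0, f), g)                 # the function 2*sin + cos

# Rough grid approximation of the supremum norm of h on [0, 1].
grid = [i / 1000 for i in range(1001)]
sup_norm = max(abs(h(x)) for x in grid)
print(h(0.5), sup_norm)
```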
== Uniform Norm == If y is an element of the function space C ( a , b ) {\displaystyle {\mathcal {C}}(a,b)} of all continuous functions that are defined on a closed interval [a, b], the norm ‖ y ‖ ∞ {\displaystyle \|y\|_{\infty }} defined on C ( a , b ) {\displaystyle {\mathcal {C}}(a,b)} is the maximum absolute value of y (x) for a ≤ x ≤ b, ‖ y ‖ ∞ ≡ max a ≤ x ≤ b | y ( x ) | where y ∈ C ( a , b ) {\displaystyle \|y\|_{\infty }\equiv \max _{a\leq x\leq b}|y(x)|\qquad {\text{where}}\ \ y\in {\mathcal {C}}(a,b)} is called the uniform norm or supremum norm ('sup norm'). == Bibliography == Kolmogorov, A. N., & Fomin, S. V. (1967). Elements of the theory of functions and functional analysis. Courier Dover Publications. Stein, Elias; Shakarchi, R. (2011). Functional Analysis: An Introduction to Further Topics in Analysis. Princeton University Press. == See also == List of mathematical functions Clifford algebra Tensor field Spectral theory Functional determinant == References ==
Wikipedia/Functional_space
In physics, Lagrangian mechanics is a formulation of classical mechanics founded on the d'Alembert principle of virtual work. It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760 culminating in his 1788 grand opus, Mécanique analytique. Lagrangian mechanics describes a mechanical system as a pair (M, L) consisting of a configuration space M and a smooth function L {\textstyle L} within that space called a Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively. The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (specifically, a maximum, minimum, or saddle point) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations. == Introduction == Newton's laws and the concept of forces are the usual starting point for teaching about mechanical systems. This method works well for many problems, but for others the approach is nightmarishly complicated. For example, in calculation of the motion of a torus rolling on a horizontal surface with a pearl sliding inside, the time-varying constraint forces like the angular velocity of the torus, motion of the pearl in relation to the torus made it difficult to determine the motion of the torus with Newton's equations. Lagrangian mechanics adopts energy rather than force as its basic ingredient, leading to more abstract equations capable of tackling more complex problems. Particularly, Lagrange's approach was to set up independent generalized coordinates for the position and speed of every object, which allows the writing down of a general form of Lagrangian (total kinetic energy minus potential energy of the system) and summing this over all possible paths of motion of the particles yielded a formula for the 'action', which he minimized to give a generalized set of equations. This summed quantity is minimized along the path that the particle actually takes. This choice eliminates the need for the constraint force to enter into the resultant generalized system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment. For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2) and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles; a general point in space is written r = (x, y, z). The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus v 1 = d r 1 d t , v 2 = d r 2 d t , … , v N = d r N d t . {\displaystyle \mathbf {v} _{1}={\frac {d\mathbf {r} _{1}}{dt}},\mathbf {v} _{2}={\frac {d\mathbf {r} _{2}}{dt}},\ldots ,\mathbf {v} _{N}={\frac {d\mathbf {r} _{N}}{dt}}.} In Newtonian mechanics, the equations of motion are given by Newton's laws. 
The second law "net force equals mass times acceleration", ∑ F = m d 2 r d t 2 , {\displaystyle \sum \mathbf {F} =m{\frac {d^{2}\mathbf {r} }{dt^{2}}},} applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for. === Lagrangian === Instead of forces, Lagrangian mechanics uses the energies in the system. The central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but no single expression for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian. It is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles in the absence of an electromagnetic field is given by L = T − V , {\displaystyle L=T-V,} where T = 1 2 ∑ k = 1 N m k v k 2 {\displaystyle T={\frac {1}{2}}\sum _{k=1}^{N}m_{k}v_{k}^{2}} is the total kinetic energy of the system, equaling the sum Σ of the kinetic energies of the N {\displaystyle N} particles. Each particle labeled k {\displaystyle k} has mass m k , {\displaystyle m_{k},} and vk2 = vk · vk is the magnitude squared of its velocity, equivalent to the dot product of the velocity with itself. Kinetic energy T is the energy of the system's motion and is a function only of the velocities vk, not the positions rk, nor time t, so T = T(v1, v2, ...). V, the potential energy of the system, reflects the energy of interaction between the particles, i.e. how much energy any one particle has due to all the others, together with any external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so V = V(r1, r2, ...). For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, V = V(r1, r2, ..., v1, v2, ...). If there is some external field or external driving force changing with time, the potential changes with time, so most generally V = V(r1, r2, ..., v1, v2, ..., t). As already noted, this form of L is applicable to many important classes of system, but not everywhere. For relativistic Lagrangian mechanics it must be replaced as a whole by a function consistent with special relativity (scalar under Lorentz transformations) or general relativity (4-scalar). Where a magnetic field is present, the expression for the potential energy needs restating. And for dissipative forces (e.g., friction), another function must be introduced alongside Lagrangian often referred to as a "Rayleigh dissipation function" to account for the loss of energy. One or more of the particles may each be subject to one or more holonomic constraints; such a constraint is described by an equation of the form f(r, t) = 0. If the number of constraints in the system is C, then each constraint has an equation f1(r, t) = 0, f2(r, t) = 0, ..., fC(r, t) = 0, each of which could apply to any of the particles. If particle k is subject to constraint i, then fi(rk, t) = 0. At any instant of time, the coordinates of a constrained particle are linked together and not independent. The constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. 
Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are non-integrable, when the constraints have inequalities, or when the constraints involve complicated non-conservative forces like friction. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics or use other methods. If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian L(r1, r2, ... v1, v2, ... t) is explicitly time-dependent. If neither the potential nor the kinetic energy depend on time, then the Lagrangian L(r1, r2, ... v1, v2, ...) is explicitly independent of time. In either case, the Lagrangian always has implicit time dependence through the generalized coordinates. With these definitions, Lagrange's equations of the first kind are where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and ∂ ∂ r k ≡ ( ∂ ∂ x k , ∂ ∂ y k , ∂ ∂ z k ) , ∂ ∂ r ˙ k ≡ ( ∂ ∂ x ˙ k , ∂ ∂ y ˙ k , ∂ ∂ z ˙ k ) {\displaystyle {\frac {\partial }{\partial \mathbf {r} _{k}}}\equiv \left({\frac {\partial }{\partial x_{k}}},{\frac {\partial }{\partial y_{k}}},{\frac {\partial }{\partial z_{k}}}\right),\quad {\frac {\partial }{\partial {\dot {\mathbf {r} }}_{k}}}\equiv \left({\frac {\partial }{\partial {\dot {x}}_{k}}},{\frac {\partial }{\partial {\dot {y}}_{k}}},{\frac {\partial }{\partial {\dot {z}}_{k}}}\right)} are each shorthands for a vector of partial derivatives ∂/∂ with respect to the indicated variables (not a derivative with respect to the entire vector). Each overdot is a shorthand for a time derivative. This procedure does increase the number of equations to solve compared to Newton's laws, from 3N to 3N + C, because there are 3N coupled second-order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces. The coordinates do not need to be eliminated by solving the constraint equations. In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the partial derivative of L with respect to the z velocity component of particle 2, defined by vz,2 = dz2/dt, is just ∂L/∂vz,2; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2). In each constraint equation, one coordinate is redundant because it is determined from the other coordinates. The number of independent coordinates is therefore n = 3N − C. We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple q = (q1, q2, ... qn), by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time: r k = r k ( q , t ) = ( x k ( q , t ) , y k ( q , t ) , z k ( q , t ) , t ) . {\displaystyle \mathbf {r} _{k}=\mathbf {r} _{k}(\mathbf {q} ,t)={\big (}x_{k}(\mathbf {q} ,t),y_{k}(\mathbf {q} ,t),z_{k}(\mathbf {q} ,t),t{\big )}.} The vector q is a point in the configuration space of the system. 
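Continuing the pendulum illustration above (with the same conventions), there is N = 1 particle and C = 2 constraint equations, so n = 3N − C = 1: a single generalized coordinate suffices, conveniently the angle θ of the rod measured from the downward vertical, in terms of which {\displaystyle \mathbf {r} (\theta )=(\ell \sin \theta ,\,-\ell \cos \theta ,\,0).}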
The time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is q ˙ j = d q j d t , v k = ∑ j = 1 n ∂ r k ∂ q j q ˙ j + ∂ r k ∂ t . {\displaystyle {\dot {q}}_{j}={\frac {\mathrm {d} q_{j}}{\mathrm {d} t}},\quad \mathbf {v} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}{\dot {q}}_{j}+{\frac {\partial \mathbf {r} _{k}}{\partial t}}.} Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so T = T ( q , q ˙ , t ) . {\displaystyle T=T(\mathbf {q} ,{\dot {\mathbf {q} }},t).} With these definitions, the Euler–Lagrange equations, or Lagrange's equations of the second kind are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian L(q, dq/dt, t) gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to n = 3N − C coupled second-order differential equations in the generalized coordinates. These equations do not include constraint forces at all, only non-constraint forces need to be accounted for. Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but generally are nonlinear coupled equations in the coordinates. == From Newtonian to Lagrangian mechanics == === Newton's laws === For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system). The equation of motion for a particle of constant mass m is Newton's second law of 1687, in modern vector notation F = m a , {\displaystyle \mathbf {F} =m\mathbf {a} ,} where a is its acceleration and F the resultant force acting on it. Where the mass is varying, the equation needs to be generalised to take the time derivative of the momentum. In three spatial dimensions, this is a system of three coupled second-order ordinary differential equations to solve, since there are three components in this vector equation. The solution is the position vector r of the particle at time t, subject to the initial conditions of r and v when t = 0. Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated. 
In a set of curvilinear coordinates ξ = (ξ1, ξ2, ξ3), the law in tensor index notation is the "Lagrangian form" F a = m ( d 2 ξ a d t 2 + Γ a b c d ξ b d t d ξ c d t ) = g a k ( d d t ∂ T ∂ ξ ˙ k − ∂ T ∂ ξ k ) , ξ ˙ a ≡ d ξ a d t , {\displaystyle F^{a}=m\left({\frac {\mathrm {d} ^{2}\xi ^{a}}{\mathrm {d} t^{2}}}+\Gamma ^{a}{}_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}\right)=g^{ak}\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\xi }}^{k}}}-{\frac {\partial T}{\partial \xi ^{k}}}\right),\quad {\dot {\xi }}^{a}\equiv {\frac {\mathrm {d} \xi ^{a}}{\mathrm {d} t}},} where Fa is the a-th contravariant component of the resultant force acting on the particle, Γabc are the Christoffel symbols of the second kind, T = 1 2 m g b c d ξ b d t d ξ c d t {\displaystyle T={\frac {1}{2}}mg_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}} is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c, each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates. It may seem like an overcomplication to cast Newton's law in this form, but there are advantages. The acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, F = 0, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (these may end up being minimal, that is the shortest paths, but not necessarily). In flat 3D real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation and states that free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces F ≠ 0, the particle accelerates due to forces acting on it and deviates away from the geodesics it would follow if free. With appropriate extensions of the quantities given here in flat 3D space to 4D curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense. However, we still need to know the total resultant force F acting on the particle, which in turn requires the resultant non-constraint force N plus the resultant constraint force C, F = C + N . {\displaystyle \mathbf {F} =\mathbf {C} +\mathbf {N} .} The constraint forces can be complicated, since they generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations. The constraint forces can either be eliminated from the equations of motion, so only the non-constraint forces remain, or included by including the constraint equations in the equations of motion. === D'Alembert's principle === A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts for N particles the virtual work, i.e. 
the work along a virtual displacement, δrk, is zero: ∑ k = 1 N ( N k + C k − m k a k ) ⋅ δ r k = 0. {\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}+\mathbf {C} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.} The virtual displacements, δrk, are by definition infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it. Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint). Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero: ∑ k = 1 N C k ⋅ δ r k = 0 , {\displaystyle \sum _{k=1}^{N}\mathbf {C} _{k}\cdot \delta \mathbf {r} _{k}=0,} so that ∑ k = 1 N ( N k − m k a k ) ⋅ δ r k = 0. {\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.} Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion. The form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion. === Equations of motion from D'Alembert's principle === If there are constraints on particle k, then since the coordinates of the position rk = (xk, yk, zk) are linked together by a constraint equation, so are those of the virtual displacements δrk = (δxk, δyk, δzk). Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential, δ r k = ∑ j = 1 n ∂ r k ∂ q j δ q j . {\displaystyle \delta \mathbf {r} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}.} There is no partial time derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints in an instant of time. The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces Q j = ∑ k = 1 N N k ⋅ ∂ r k ∂ q j , {\displaystyle Q_{j}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}},} so that ∑ k = 1 N N k ⋅ δ r k = ∑ k = 1 N N k ⋅ ∑ j = 1 n ∂ r k ∂ q j δ q j = ∑ j = 1 n Q j δ q j . 
{\displaystyle \sum _{k=1}^{N}\mathbf {N} _{k}\cdot \delta \mathbf {r} _{k}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot \sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{n}Q_{j}\delta q_{j}.} This is half of the conversion to generalized coordinates. It remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result: ∑ k = 1 N m k a k ⋅ ∂ r k ∂ q j = d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j . {\displaystyle \sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}.} Now D'Alembert's principle is in the generalized coordinates as required, ∑ j = 1 n [ Q j − ( d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j ) ] δ q j = 0 , {\displaystyle \sum _{j=1}^{n}\left[Q_{j}-\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right)\right]\delta q_{j}=0,} and since these virtual displacements δqj are independent and nonzero, the coefficients can be equated to zero, resulting in Lagrange's equations or the generalized equations of motion, Q j = d d t ∂ T ∂ q ˙ j − ∂ T ∂ q j {\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}} These equations are equivalent to Newton's laws for the non-constraint forces. The generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle. === Euler–Lagrange equations and Hamilton's principle === For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that Q j = d d t ∂ V ∂ q ˙ j − ∂ V ∂ q j , {\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial V}{\partial {\dot {q}}_{j}}}-{\frac {\partial V}{\partial q_{j}}},} equating to Lagrange's equations and defining the Lagrangian as L = T − V obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion ∂ L ∂ q j − d d t ∂ L ∂ q ˙ j = 0. {\displaystyle {\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0.} However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown. This may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations. The Euler–Lagrange equations also follow from the calculus of variations. 
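Before turning to that variational derivation, a brief symbolic sketch may help; the plane-pendulum example and the use of SymPy's euler_equations helper are choices made here for illustration, not prescriptions of the text. In the generalized coordinate θ introduced earlier, the pendulum Lagrangian is L = ½mℓ²θ̇² + mgℓ cos θ, and Lagrange's equation of the second kind reproduces the familiar equation of motion θ̈ = −(g/ℓ) sin θ:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Plane pendulum of mass m and rod length l, generalized coordinate theta
# measured from the downward vertical:
#   L = (1/2) m l^2 thetadot^2 + m g l cos(theta).
t = sp.Symbol('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')

L = sp.Rational(1, 2) * m * l**2 * theta(t).diff(t)**2 + m * g * l * sp.cos(theta(t))

# SymPy assembles  dL/dtheta - d/dt (dL/dthetadot) = 0  for us.
eq = euler_equations(L, theta(t), t)[0]
print(eq)   # equivalent to  theta'' = -(g/l) * sin(theta)
```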
The variation of the Lagrangian is δ L = ∑ j = 1 n ( ∂ L ∂ q j δ q j + ∂ L ∂ q ˙ j δ q ˙ j ) , δ q ˙ j ≡ δ d q j d t ≡ d ( δ q j ) d t , {\displaystyle \delta L=\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta {\dot {q}}_{j}\right),\quad \delta {\dot {q}}_{j}\equiv \delta {\frac {\mathrm {d} q_{j}}{\mathrm {d} t}}\equiv {\frac {\mathrm {d} (\delta q_{j})}{\mathrm {d} t}},} which has a form similar to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian, ∫ t 1 t 2 δ L d t = ∫ t 1 t 2 ∑ j = 1 n ( ∂ L ∂ q j δ q j + d d t ( ∂ L ∂ q ˙ j δ q j ) − d d t ∂ L ∂ q ˙ j δ q j ) d t = ∑ j = 1 n [ ∂ L ∂ q ˙ j δ q j ] t 1 t 2 + ∫ t 1 t 2 ∑ j = 1 n ( ∂ L ∂ q j − d d t ∂ L ∂ q ˙ j ) δ q j d t . {\displaystyle {\begin{aligned}\int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t&=\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)\,\mathrm {d} t\\&=\sum _{j=1}^{n}\left[{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right]_{t_{1}}^{t_{2}}+\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)\delta q_{j}\,\mathrm {d} t.\end{aligned}}} Now, if the condition δqj(t1) = δqj(t2) = 0 holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent, and the only way for a definite integral to be zero is if the integrand equals zero, each of the coefficients of δqj must also be zero. Then we obtain the equations of motion. This can be summarized by Hamilton's principle: ∫ t 1 t 2 δ L d t = 0. {\displaystyle \int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t=0.} The time integral of the Lagrangian is another quantity called the action, defined as S = ∫ t 1 t 2 L d t , {\displaystyle S=\int _{t_{1}}^{t_{2}}L\,\mathrm {d} t,} which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as [angular momentum], [energy]·[time], or [length]·[momentum]. With this definition Hamilton's principle is δ S = 0. {\displaystyle \delta S=0.} Instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is one of several action principles. Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Jean Bernoulli in 1696, as well as Leibniz, Daniel Bernoulli, L'Hôpital around the same time, and Newton the following year. 
Newton himself was thinking along the lines of the variational calculus, but did not publish. These ideas in turn lead to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others. Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first order differentials in the coordinates. The resulting constraint equation can be rearranged into first order differential equation. This will not be given here. === Lagrange multipliers and constraints === The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles, ∫ t 1 t 2 ∑ k = 1 N ( ∂ L ∂ r k − d d t ∂ L ∂ r ˙ k ) ⋅ δ r k d t = 0. {\displaystyle \int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.} Hamilton's principle is still valid even if the coordinates L is expressed in are not independent, here rk, but the constraints are still assumed to be holonomic. As always the end points are fixed δrk(t1) = δrk(t2) = 0 for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation fi(rk, t) = 0 by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian L ′ = L ( r 1 , r 2 , … , r ˙ 1 , r ˙ 2 , … , t ) + ∑ i = 1 C λ i ( t ) f i ( r k , t ) . {\displaystyle L'=L(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,{\dot {\mathbf {r} }}_{1},{\dot {\mathbf {r} }}_{2},\ldots ,t)+\sum _{i=1}^{C}\lambda _{i}(t)f_{i}(\mathbf {r} _{k},t).} The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives ∫ t 1 t 2 δ L ′ d t = ∫ t 1 t 2 ∑ k = 1 N ( ∂ L ∂ r k − d d t ∂ L ∂ r ˙ k + ∑ i = 1 C λ i ∂ f i ∂ r k ) ⋅ δ r k d t = 0. {\displaystyle \int _{t_{1}}^{t_{2}}\delta L'\mathrm {d} t=\int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.} The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow. From the preceding analysis, obtaining the solution to this integral is equivalent to the statement ∂ L ′ ∂ r k − d d t ∂ L ′ ∂ r ˙ k = 0 ⇒ ∂ L ∂ r k − d d t ∂ L ∂ r ˙ k + ∑ i = 1 C λ i ∂ f i ∂ r k = 0 , {\displaystyle {\frac {\partial L'}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\mathbf {r} }}_{k}}}=0\quad \Rightarrow \quad {\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,} which are Lagrange's equations of the first kind. 
Also, the λi Euler-Lagrange equations for the new Lagrangian return the constraint equations ∂ L ′ ∂ λ i − d d t ∂ L ′ ∂ λ ˙ i = 0 ⇒ f i ( r k , t ) = 0. {\displaystyle {\frac {\partial L'}{\partial \lambda _{i}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\lambda }}_{i}}}=0\quad \Rightarrow \quad f_{i}(\mathbf {r} _{k},t)=0.} For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian L = T − V gives ∂ T ∂ r k − d d t ∂ T ∂ r ˙ k ⏟ − F k + − ∂ V ∂ r k ⏟ N k + ∑ i = 1 C λ i ∂ f i ∂ r k = 0 , {\displaystyle \underbrace {{\frac {\partial T}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\mathbf {r} }}_{k}}}} _{-\mathbf {F} _{k}}+\underbrace {-{\frac {\partial V}{\partial \mathbf {r} _{k}}}} _{\mathbf {N} _{k}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,} and identifying the derivatives of kinetic energy as the (negative of the) resultant force, and the derivatives of the potential equaling the non-constraint force, it follows the constraint forces are C k = ∑ i = 1 C λ i ∂ f i ∂ r k , {\displaystyle \mathbf {C} _{k}=\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}},} thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers. == Properties of the Lagrangian == === Non-uniqueness === The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a and shifted by an arbitrary constant b, and the new Lagrangian L′ = aL + b will describe the same motion as L. If one restricts as above to trajectories q over a given time interval [tst, tfin]} and fixed end points Pst = q(tst) and Pfin = q(tfin), then two Lagrangians describing the same system can differ by the "total time derivative" of a function f(q, t): L ′ ( q , q ˙ , t ) = L ( q , q ˙ , t ) + d f ( q , t ) d t , {\displaystyle L'(\mathbf {q} ,{\dot {\mathbf {q} }},t)=L(\mathbf {q} ,{\dot {\mathbf {q} }},t)+{\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}},} where d f ( q , t ) d t {\textstyle {\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}}} means ∂ f ( q , t ) ∂ t + ∑ i ∂ f ( q , t ) ∂ q i q ˙ i . {\textstyle {\frac {\partial f(\mathbf {q} ,t)}{\partial t}}+\sum _{i}{\frac {\partial f(\mathbf {q} ,t)}{\partial q_{i}}}{\dot {q}}_{i}.} Both Lagrangians L and L′ produce the same equations of motion since the corresponding actions S and S′ are related via S ′ [ q ] = ∫ t st t fin L ′ ( q ( t ) , q ˙ ( t ) , t ) d t = ∫ t st t fin L ( q ( t ) , q ˙ ( t ) , t ) d t + ∫ t st t fin d f ( q ( t ) , t ) d t d t = S [ q ] + f ( P fin , t fin ) − f ( P st , t st ) , {\displaystyle {\begin{aligned}S'[\mathbf {q} ]&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L'(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt\\&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt+\int _{t_{\text{st}}}^{t_{\text{fin}}}{\frac {\mathrm {d} f(\mathbf {q} (t),t)}{\mathrm {d} t}}\,dt\\&=S[\mathbf {q} ]+f(P_{\text{fin}},t_{\text{fin}})-f(P_{\text{st}},t_{\text{st}}),\end{aligned}}} with the last two components f(Pfin, tfin) and f(Pst, tst) independent of q. 
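This equivalence can be checked directly on a concrete case; the quadratic Lagrangian and the function f below are arbitrary choices made for illustration (a minimal sketch, not part of the original text):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Adding the total time derivative of f(q, t) to a Lagrangian leaves the
# Euler-Lagrange equations unchanged.  Example: L = (1/2) m qdot^2 - (1/2) k q^2
# and f = q^3 (both chosen arbitrarily for this check).
t = sp.Symbol('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

L = sp.Rational(1, 2) * m * q(t).diff(t)**2 - sp.Rational(1, 2) * k * q(t)**2

f = q(t)**3
L_prime = L + sp.diff(f, t)                 # L' = L + df/dt (total time derivative)

eq = euler_equations(L, q(t), t)[0]
eq_prime = euler_equations(L_prime, q(t), t)[0]
print(sp.simplify(eq.lhs - eq_prime.lhs))   # 0: both give the same equation of motion
```

Both Lagrangians yield the same Euler–Lagrange equation, as the vanishing difference confirms.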
=== Invariance under point transformations === Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates Q according to a point transformation Q = Q(q, t) which is invertible as q = q(Q, t), the new Lagrangian L′ is a function of the new coordinates and similarly for the constraints L ′ ( Q , Q ˙ , t ) = L ( q ( Q , t ) , q ˙ ( Q , Q ˙ , t ) , t ) , ϕ j ′ ( Q , t ) = ϕ j ( q ( Q , t ) , t ) {\displaystyle {\begin{aligned}L'(\mathbf {Q} ,{\dot {\mathbf {Q} }},t)&=L(\mathbf {q} (\mathbf {Q} ,t),{\dot {\mathbf {q} }}(\mathbf {Q} ,{\dot {\mathbf {Q} }},t),t),\\\phi _{j}'(\mathbf {Q} ,t)&=\phi _{j}(\mathbf {q} (\mathbf {Q} ,t),t)\end{aligned}}} and by the chain rule for partial differentiation, Lagrange's equations are invariant under this transformation; d d t ∂ L ′ ∂ Q ˙ i = ∂ L ′ ∂ Q i + ∑ j λ j ∂ ϕ j ′ ∂ Q i . {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {Q}}_{i}}}={\frac {\partial L'}{\partial Q_{i}}}+\sum _{j}\lambda _{j}{\frac {\partial \phi '_{j}}{\partial Q_{i}}}.} === Cyclic coordinates and conserved momenta === An important property of the Lagrangian is that conserved quantities can easily be read off from it. The generalized momentum "canonically conjugate to" the coordinate qi is defined by p i = ∂ L ∂ q ˙ i . {\displaystyle p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}.} If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that p ˙ i = d d t ∂ L ∂ q ˙ i = ∂ L ∂ q i = 0 {\displaystyle {\dot {p}}_{i}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}={\frac {\partial L}{\partial q_{i}}}=0} and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem. Such coordinates are called "cyclic" or "ignorable". For example, a system may have a Lagrangian L ( r , θ , s ˙ , z ˙ , r ˙ , θ ˙ , ϕ ˙ , t ) , {\displaystyle L(r,\theta ,{\dot {s}},{\dot {z}},{\dot {r}},{\dot {\theta }},{\dot {\phi }},t),} where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta p z = ∂ L ∂ z ˙ , p s = ∂ L ∂ s ˙ , p ϕ = ∂ L ∂ ϕ ˙ , {\displaystyle p_{z}={\frac {\partial L}{\partial {\dot {z}}}},\quad p_{s}={\frac {\partial L}{\partial {\dot {s}}}},\quad p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}},} are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is also a translational momentum along the curve s is measured, and pφ is an angular momentum in the plane the angle φ is measured in. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved. === Energy === Given a Lagrangian L , {\displaystyle L,} the Hamiltonian of the corresponding mechanical system is, by definition, H = ( ∑ i = 1 n q ˙ i ∂ L ∂ q ˙ i ) − L . 
{\displaystyle H={\biggl (}\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}{\biggr )}-L.} This quantity will be equivalent to energy if the generalized coordinates are natural coordinates, i.e., they have no explicit time dependence when expressing position vector: r = r ( q 1 , ⋯ , q n ) {\displaystyle \mathbf {r} =\mathbf {r} (q_{1},\cdots ,q_{n})} . From: T = m 2 v 2 = m 2 ∑ i , j ( ∂ r → ∂ q i q ˙ i ) ⋅ ( ∂ r → ∂ q j q ˙ j ) = m 2 ∑ i , j a i j q ˙ i q ˙ j {\displaystyle T={\frac {m}{2}}v^{2}={\frac {m}{2}}\sum _{i,j}\left({\frac {\partial {\vec {r}}}{\partial q_{i}}}{\dot {q}}_{i}\right)\cdot \left({\frac {\partial {\vec {r}}}{\partial q_{j}}}{\dot {q}}_{j}\right)={\frac {m}{2}}\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}} ∑ k = 1 n q ˙ k ∂ L ∂ q ˙ k = ∑ k = 1 n q ˙ k ∂ T ∂ q ˙ k = m 2 ( 2 ∑ i , j a i j q ˙ i q ˙ j ) = 2 T {\displaystyle \sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial L}{\partial {\dot {q}}_{k}}}=\sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial T}{\partial {\dot {q}}_{k}}}={\frac {m}{2}}\left(2\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}\right)=2T} H = ( ∑ i = 1 n q ˙ i ∂ L ∂ q ˙ i ) − L = 2 T − ( T − V ) = T + V = E {\displaystyle H=\left(\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}\right)-L=2T-(T-V)=T+V=E} where a i j = ∂ r ∂ q i ⋅ ∂ r ∂ q j {\displaystyle a_{ij}={\frac {\partial \mathbf {r} }{\partial q_{i}}}\cdot {\frac {\partial \mathbf {r} }{\partial q_{j}}}} is a symmetric matrix that is defined for the derivation. ==== Invariance under coordinate transformations ==== At every time instant t, the energy is invariant under configuration space coordinate changes q → Q, i.e. (using natural coordinates) E ( q , q ˙ , t ) = E ( Q , Q ˙ , t ) . {\displaystyle E(\mathbf {q} ,{\dot {\mathbf {q} }},t)=E(\mathbf {Q} ,{\dot {\mathbf {Q} }},t).} Besides this result, the proof below shows that, under such change of coordinates, the derivatives ∂ L / ∂ q ˙ i {\displaystyle \partial L/\partial {\dot {q}}_{i}} change as coefficients of a linear form. ==== Conservation ==== In Lagrangian mechanics, the system is closed if and only if its Lagrangian L {\displaystyle L} does not explicitly depend on time. The energy conservation law states that the energy E {\displaystyle E} of a closed system is an integral of motion. More precisely, let q = q(t) be an extremal. (In other words, q satisfies the Euler–Lagrange equations). Taking the total time-derivative of L along this extremal and using the EL equations leads to d L d t = q ˙ ∂ L ∂ q + q ¨ ∂ L ∂ q ˙ + ∂ L ∂ t − ∂ L ∂ t = d d t ( ∂ L ∂ q ˙ ) q ˙ + q ¨ ∂ L ∂ q ˙ − L ˙ − ∂ L ∂ t = d d t ( ∂ L ∂ q ˙ q ˙ − L ) = d H d t {\displaystyle {\begin{aligned}{\frac {dL}{dt}}&={\dot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}+{\frac {\partial L}{\partial t}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right){\dot {\mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}-{\dot {L}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\mathbf {\dot {q}} -L\right)={\frac {dH}{dt}}\end{aligned}}} If the Lagrangian L does not explicitly depend on time, then ∂L/∂t = 0, then H does not vary with time evolution of particle, indeed, an integral of motion, meaning that H ( q ( t ) , q ˙ ( t ) , t ) = constant of time . 
{\displaystyle H(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)={\text{constant of time}}.} Hence, if the chosen coordinates were natural coordinates, the energy is conserved. ==== Kinetic and potential energies ==== Under all these circumstances, the constant E = T + V {\displaystyle E=T+V} is the total energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant. This is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates. === Mechanical similarity === If the potential energy is a homogeneous function of the coordinates and independent of time, and all position vectors are scaled by the same nonzero constant α, rk′ = αrk, so that V ( α r 1 , α r 2 , … , α r N ) = α N V ( r 1 , r 2 , … , r N ) {\displaystyle V(\alpha \mathbf {r} _{1},\alpha \mathbf {r} _{2},\ldots ,\alpha \mathbf {r} _{N})=\alpha ^{N}V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N})} and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)2. The entire Lagrangian has been scaled by the same factor if α 2 β 2 = α N ⇒ β = α 1 − N 2 . {\displaystyle {\frac {\alpha ^{2}}{\beta ^{2}}}=\alpha ^{N}\quad \Rightarrow \quad \beta =\alpha ^{1-{\frac {N}{2}}}.} Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size. The length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios t ′ t = ( l ′ l ) 1 − N 2 . {\displaystyle {\frac {t'}{t}}=\left({\frac {l'}{l}}\right)^{1-{\frac {N}{2}}}.} === Interacting particles === For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems: L = L A + L B . {\displaystyle L=L_{A}+L_{B}.} If they do interact this is not possible. In some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction, L = L A + L B + L A B . {\displaystyle L=L_{A}+L_{B}+L_{AB}.} This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above. The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added. 
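Returning to the mechanical-similarity relation above, the following numerical sketch (not from the article; the scale factor and initial conditions are arbitrary choices) checks it for the Kepler potential V ∝ −1/r, which is homogeneous of degree N = −1, so that β = α^(3/2). Scaling every length of a solution by α and every time by α^(3/2) yields another solution, which is the content of Kepler's third law:

```python
import numpy as np
from scipy.integrate import solve_ivp

def kepler(t, s):                       # s = (x, y, vx, vy); V = -1/r with GM = 1
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -x / r3, -y / r3]

alpha = 4.0                             # scale all lengths by alpha
beta  = alpha ** 1.5                    # degree N = -1, so beta = alpha**(1 - N/2)

y0        = [1.0, 0.0, 0.0, 1.1]                                   # reference orbit
y0_scaled = [alpha * y0[0], 0.0, 0.0, (alpha / beta) * y0[3]]       # velocities scale by alpha/beta

t = np.linspace(0.0, 20.0, 401)
ref = solve_ivp(kepler, (0.0, 20.0), y0, t_eval=t, rtol=1e-10, atol=1e-12)
big = solve_ivp(kepler, (0.0, beta * 20.0), y0_scaled, t_eval=beta * t, rtol=1e-10, atol=1e-12)

# The scaled orbit sampled at the rescaled times is alpha times the reference orbit:
print(np.max(np.abs(big.y[:2] - alpha * ref.y[:2])))                # close to zero
```

The maximum discrepancy is limited only by the integration tolerance, confirming that the two trajectories are geometrically similar paths differing in size and duration as described above.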
=== Consequences of singular Lagrangians === From the Euler-Lagrange equations, it follows that: d d t ∂ L ∂ q ˙ i − ∂ L ∂ q i = 0 ∂ 2 L ∂ q j ∂ q ˙ i d q j d t + ∂ 2 L ∂ q ˙ j ∂ q ˙ i d q ˙ j d t + ∂ L ∂ t − ∂ L ∂ q i = 0 ∑ j W i j ( q , q ˙ , t ) q ¨ j = ∂ L ∂ q i − ∂ L ∂ t − ∑ j ∂ 2 L ∂ q ˙ i ∂ q j q ˙ j , {\displaystyle {\begin{aligned}&{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-{\frac {\partial L}{\partial q_{i}}}=0\\&{\frac {\partial ^{2}L}{\partial q_{j}\partial {\dot {q}}_{i}}}{\frac {dq_{j}}{dt}}+{\frac {\partial ^{2}L}{\partial {\dot {q}}_{j}\partial {\dot {q}}_{i}}}{\frac {d{\dot {q}}_{j}}{dt}}+{\frac {\partial L}{\partial t}}-{\frac {\partial L}{\partial q_{i}}}=0\\&\sum _{j}W_{ij}(q,{\dot {q}},t){\ddot {q}}_{j}={\frac {\partial L}{\partial q_{i}}}-{\frac {\partial L}{\partial t}}-\sum _{j}{\frac {\partial ^{2}L}{\partial {\dot {q}}_{i}\partial q_{j}}}{\dot {q}}_{j},\\\end{aligned}}} where the matrix is defined as W i j = ∂ 2 L ∂ q ˙ i ∂ q ˙ j {\displaystyle W_{ij}={\frac {\partial ^{2}L}{\partial {\dot {q}}_{i}\partial {\dot {q}}_{j}}}} . If the matrix W {\displaystyle W} is non-singular, the above equations can be solved to represent q ¨ {\displaystyle {\ddot {q}}} as a function of ( q ˙ , q , t ) {\displaystyle ({\dot {q}},q,t)} . If the matrix is non-invertible, it would not be possible to represent all q ¨ {\displaystyle {\ddot {q}}} 's as a function of ( q ˙ , q , t ) {\displaystyle ({\dot {q}},q,t)} but also, the Hamiltonian equations of motions will not take the standard form. == Examples == The following examples apply Lagrange's equations of the second kind to mechanical problems. === Conservative force === A particle of mass m moves under the influence of a conservative force derived from the gradient ∇ of a scalar potential, F = − ∇ V ( r ) . {\displaystyle \mathbf {F} =-{\boldsymbol {\nabla }}V(\mathbf {r} ).} If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates. ==== Cartesian coordinates ==== The Lagrangian of the particle can be written L ( x , y , z , x ˙ , y ˙ , z ˙ ) = 1 2 m ( x ˙ 2 + y ˙ 2 + z ˙ 2 ) − V ( x , y , z ) . {\displaystyle L(x,y,z,{\dot {x}},{\dot {y}},{\dot {z}})={\frac {1}{2}}m({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})-V(x,y,z).} The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate d d t ( ∂ L ∂ x ˙ ) = ∂ L ∂ x , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)={\frac {\partial L}{\partial x}},} with derivatives ∂ L ∂ x = − ∂ V ∂ x , ∂ L ∂ x ˙ = m x ˙ , d d t ( ∂ L ∂ x ˙ ) = m x ¨ , {\displaystyle {\frac {\partial L}{\partial x}}=-{\frac {\partial V}{\partial x}},\quad {\frac {\partial L}{\partial {\dot {x}}}}=m{\dot {x}},\quad {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)=m{\ddot {x}},} hence m x ¨ = − ∂ V ∂ x , {\displaystyle m{\ddot {x}}=-{\frac {\partial V}{\partial x}},} and similarly for the y and z coordinates. Collecting the equations in vector form we find m r ¨ = − ∇ V {\displaystyle m{\ddot {\mathbf {r} }}=-{\boldsymbol {\nabla }}V} which is Newton's second law of motion for a particle subject to a conservative force. 
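A short numerical sketch of the result just derived (illustrative only; the anharmonic potential and all parameter values are arbitrary choices) integrates m r̈ = −∇V in two dimensions and checks that T + V stays constant, as the Energy discussion above requires for natural, time-independent coordinates:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 2.0, 3.0
V     = lambda x, y: 0.5 * k * (x**2 + y**2) + 0.1 * x**2 * y          # arbitrary potential
gradV = lambda x, y: np.array([k * x + 0.2 * x * y, k * y + 0.1 * x**2])

def rhs(t, s):                       # s = (x, y, vx, vy); Newton's second law from the Lagrangian
    x, y, vx, vy = s
    ax, ay = -gradV(x, y) / m
    return [vx, vy, ax, ay]

sol = solve_ivp(rhs, (0, 50), [1.0, 0.0, 0.0, 0.7], rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0, 50, 2001))
x, y, vx, vy = sol.y
E = 0.5 * m * (vx**2 + vy**2) + V(x, y)
print(E.max() - E.min())             # tiny: the total energy is conserved along the motion
```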
==== Polar coordinates in 2D and 3D ==== Using the spherical coordinates (r, θ, φ) as commonly used in physics (ISO 80000-2:2019 convention), where r is the radial distance to origin, θ is polar angle (also known as colatitude, zenith angle, normal angle, or inclination angle), and φ is the azimuthal angle, the Lagrangian for a central potential is L = m 2 ( r ˙ 2 + r 2 θ ˙ 2 + r 2 sin 2 ⁡ θ φ ˙ 2 ) − V ( r ) . {\displaystyle L={\frac {m}{2}}({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}+r^{2}\sin ^{2}\theta \,{\dot {\varphi }}^{2})-V(r).} So, in spherical coordinates, the Euler–Lagrange equations are m r ¨ − m r ( θ ˙ 2 + sin 2 ⁡ θ φ ˙ 2 ) + ∂ V ∂ r = 0 , {\displaystyle m{\ddot {r}}-mr({\dot {\theta }}^{2}+\sin ^{2}\theta \,{\dot {\varphi }}^{2})+{\frac {\partial V}{\partial r}}=0,} d d t ( m r 2 θ ˙ ) − m r 2 sin ⁡ θ cos ⁡ θ φ ˙ 2 = 0 , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}{\dot {\theta }})-mr^{2}\sin \theta \cos \theta \,{\dot {\varphi }}^{2}=0,} d d t ( m r 2 sin 2 ⁡ θ φ ˙ ) = 0. {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}\sin ^{2}\theta \,{\dot {\varphi }})=0.} The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum p φ = ∂ L ∂ φ ˙ = m r 2 sin 2 ⁡ θ φ ˙ , {\displaystyle p_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=mr^{2}\sin ^{2}\theta {\dot {\varphi }},} in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant. The Lagrangian in two-dimensional polar coordinates is recovered by fixing θ to the constant value π/2. === Pendulum on a movable support === Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the x {\displaystyle x} -direction. Let x {\displaystyle x} be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle θ {\displaystyle \theta } from the vertical. The coordinates and velocity components of the pendulum bob are x p e n d = x + ℓ sin ⁡ θ ⇒ x ˙ p e n d = x ˙ + ℓ θ ˙ cos ⁡ θ y p e n d = − ℓ cos ⁡ θ ⇒ y ˙ p e n d = ℓ θ ˙ sin ⁡ θ . {\displaystyle {\begin{array}{rll}&x_{\mathrm {pend} }=x+\ell \sin \theta &\quad \Rightarrow \quad {\dot {x}}_{\mathrm {pend} }={\dot {x}}+\ell {\dot {\theta }}\cos \theta \\&y_{\mathrm {pend} }=-\ell \cos \theta &\quad \Rightarrow \quad {\dot {y}}_{\mathrm {pend} }=\ell {\dot {\theta }}\sin \theta .\end{array}}} The generalized coordinates can be taken to be x {\displaystyle x} and θ {\displaystyle \theta } . The kinetic energy of the system is then T = 1 2 M x ˙ 2 + 1 2 m ( x ˙ p e n d 2 + y ˙ p e n d 2 ) {\displaystyle T={\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left({\dot {x}}_{\mathrm {pend} }^{2}+{\dot {y}}_{\mathrm {pend} }^{2}\right)} and the potential energy is V = m g y p e n d {\displaystyle V=mgy_{\mathrm {pend} }} giving the Lagrangian L = T − V = 1 2 M x ˙ 2 + 1 2 m [ ( x ˙ + ℓ θ ˙ cos ⁡ θ ) 2 + ( ℓ θ ˙ sin ⁡ θ ) 2 ] + m g ℓ cos ⁡ θ = 1 2 ( M + m ) x ˙ 2 + m x ˙ ℓ θ ˙ cos ⁡ θ + 1 2 m ℓ 2 θ ˙ 2 + m g ℓ cos ⁡ θ . 
{\displaystyle {\begin{array}{rcl}L&=&T-V\\&=&{\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left[\left({\dot {x}}+\ell {\dot {\theta }}\cos \theta \right)^{2}+\left(\ell {\dot {\theta }}\sin \theta \right)^{2}\right]+mg\ell \cos \theta \\&=&{\frac {1}{2}}\left(M+m\right){\dot {x}}^{2}+m{\dot {x}}\ell {\dot {\theta }}\cos \theta +{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+mg\ell \cos \theta .\end{array}}} Since x is absent from the Lagrangian, it is a cyclic coordinate. The conserved momentum is p x = ∂ L ∂ x ˙ = ( M + m ) x ˙ + m ℓ θ ˙ cos ⁡ θ , {\displaystyle p_{x}={\frac {\partial L}{\partial {\dot {x}}}}=(M+m){\dot {x}}+m\ell {\dot {\theta }}\cos \theta ,} and the Lagrange equation for the support coordinate x {\displaystyle x} is ( M + m ) x ¨ + m ℓ θ ¨ cos ⁡ θ − m ℓ θ ˙ 2 sin ⁡ θ = 0. {\displaystyle (M+m){\ddot {x}}+m\ell {\ddot {\theta }}\cos \theta -m\ell {\dot {\theta }}^{2}\sin \theta =0.} The Lagrange equation for the angle θ is d d t [ m ( x ˙ ℓ cos ⁡ θ + ℓ 2 θ ˙ ) ] + m ℓ ( x ˙ θ ˙ + g ) sin ⁡ θ = 0 ; {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[m({\dot {x}}\ell \cos \theta +\ell ^{2}{\dot {\theta }})\right]+m\ell ({\dot {x}}{\dot {\theta }}+g)\sin \theta =0;} and simplifying θ ¨ + x ¨ ℓ cos ⁡ θ + g ℓ sin ⁡ θ = 0. {\displaystyle {\ddot {\theta }}+{\frac {\ddot {x}}{\ell }}\cos \theta +{\frac {g}{\ell }}\sin \theta =0.} These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: For example, x ¨ → 0 {\displaystyle {\ddot {x}}\to 0} should give the equations of motion for a simple pendulum that is at rest in some inertial frame, while θ ¨ → 0 {\displaystyle {\ddot {\theta }}\to 0} should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively. === Two-body central force problem === Two bodies of masses m1 and m2 with position vectors r1 and r2 are in orbit about each other due to an attractive central potential V. We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates; the separation of the bodies r = r2 − r1 and the location of the center of mass R = (m1r1 + m2r2)/(m1 + m2). The Lagrangian is then L = 1 2 M R ˙ 2 ⏟ L cm + 1 2 μ r ˙ 2 − V ( | r | ) ⏟ L rel {\displaystyle L=\underbrace {{\frac {1}{2}}M{\dot {\mathbf {R} }}^{2}} _{L_{\text{cm}}}+\underbrace {{\frac {1}{2}}\mu {\dot {\mathbf {r} }}^{2}-V(|\mathbf {r} |)} _{L_{\text{rel}}}} where M = m1 + m2 is the total mass, μ = m1m2/(m1 + m2) is the reduced mass, and V the potential of the radial force, which depends only on the magnitude of the separation |r| = |r2 − r1|. The Lagrangian splits into a center-of-mass term Lcm and a relative motion term Lrel. The Euler–Lagrange equation for R is simply M R ¨ = 0 , {\displaystyle M{\ddot {\mathbf {R} }}=0,} which states the center of mass moves in a straight line at constant velocity. 
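Before the relative motion is specialized to polar coordinates below, here is a minimal SymPy check (one-dimensional for brevity, not from the article) that the two-body kinetic energy really does separate into the center-of-mass and relative terms appearing in Lcm and Lrel:

```python
import sympy as sp

m1, m2 = sp.symbols('m1 m2', positive=True)
v1, v2 = sp.symbols('v1 v2')                     # velocities of the two bodies

M  = m1 + m2                                     # total mass
mu = m1 * m2 / (m1 + m2)                         # reduced mass
V_cm  = (m1 * v1 + m2 * v2) / M                  # time derivative of R
v_rel = v2 - v1                                  # time derivative of r

T_particles = sp.Rational(1, 2) * (m1 * v1**2 + m2 * v2**2)
T_jacobi    = sp.Rational(1, 2) * (M * V_cm**2 + mu * v_rel**2)

print(sp.simplify(T_particles - T_jacobi))       # -> 0
```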
Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates (r, θ) and take r = |r|, L rel = 1 2 μ ( r ˙ 2 + r 2 θ ˙ 2 ) − V ( r ) , {\displaystyle L_{\text{rel}}={\frac {1}{2}}\mu \left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)-V(r),} so θ is a cyclic coordinate with the corresponding conserved (angular) momentum p θ = ∂ L rel ∂ θ ˙ = μ r 2 θ ˙ = ℓ . {\displaystyle p_{\theta }={\frac {\partial L_{\text{rel}}}{\partial {\dot {\theta }}}}=\mu r^{2}{\dot {\theta }}=\ell .} The radial coordinate r and angular velocity dθ/dt can vary with time, but only in such a way that ℓ is constant. The Lagrange equation for r is μ r θ ˙ 2 − d V d r = μ r ¨ . {\displaystyle \mu r{\dot {\theta }}^{2}-{\frac {dV}{dr}}=\mu {\ddot {r}}.} This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity dθ/dt from this radial equation, μ r ¨ = − d V d r + ℓ 2 μ r 3 . {\displaystyle \mu {\ddot {r}}=-{\frac {\mathrm {d} V}{\mathrm {d} r}}+{\frac {\ell ^{2}}{\mu r^{3}}}.} which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dV/dr and a second outward force, called in this context the (Lagrangian) centrifugal force (see centrifugal force#Other uses of the term): F c f = μ r θ ˙ 2 = ℓ 2 μ r 3 . {\displaystyle F_{\mathrm {cf} }=\mu r{\dot {\theta }}^{2}={\frac {\ell ^{2}}{\mu r^{3}}}.} Of course, if one remains entirely within the one-dimensional formulation, ℓ enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated. If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates (r, θ) and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says: "Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion. This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the Lagrangian method. This view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates. For example, see for a comparison of Lagrangians in an inertial and in a noninertial frame of reference. See also the discussion of "total" and "updated" Lagrangian formulations in. Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. 
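Returning to the radial equation derived above, the short numerical sketch below (illustrative parameter values; an attractive potential V = −k/r is assumed for definiteness) integrates the one-dimensional equation μ r̈ = −dV/dr + ℓ²/(μ r³) alongside the planar (r, θ) equations of motion and confirms that they give the same r(t):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, k, l = 1.0, 1.0, 0.9          # reduced mass, force constant, angular momentum (arbitrary)

def radial(t, s):                 # s = (r, rdot); one-dimensional effective problem
    r, rdot = s
    return [rdot, (-k / r**2 + l**2 / (mu * r**3)) / mu]

def planar(t, s):                 # s = (r, rdot, theta); theta' fixed by l = mu r**2 theta'
    r, rdot, theta = s
    thetadot = l / (mu * r**2)
    return [rdot, r * thetadot**2 - k / (mu * r**2), thetadot]

t = np.linspace(0, 30, 1501)
a = solve_ivp(radial, (0, 30), [1.2, 0.0],      t_eval=t, rtol=1e-10, atol=1e-12)
b = solve_ivp(planar, (0, 30), [1.2, 0.0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(a.y[0] - b.y[0])))          # the two agree to integration tolerance
```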
To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities accelerations, and momenta. For brevity, the adjective "generalized" will be omitted frequently." It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system. == Extensions to include non-conservative forces == === Dissipative forces === Dissipation (i.e. non-conservative systems) can also be treated with an effective Lagrangian formulated by a certain doubling of the degrees of freedom. In a more general formulation, the forces could be both conservative and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form: D = 1 2 ∑ j = 1 m ∑ k = 1 m C j k q ˙ j q ˙ k , {\displaystyle D={\frac {1}{2}}\sum _{j=1}^{m}\sum _{k=1}^{m}C_{jk}{\dot {q}}_{j}{\dot {q}}_{k},} where Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then Q j = − ∂ V ∂ q j − ∂ D ∂ q ˙ j {\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}}-{\frac {\partial D}{\partial {\dot {q}}_{j}}}} and d d t ( ∂ L ∂ q ˙ j ) − ∂ L ∂ q j + ∂ D ∂ q ˙ j = 0. {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)-{\frac {\partial L}{\partial q_{j}}}+{\frac {\partial D}{\partial {\dot {q}}_{j}}}=0.} === Electromagnetism === A test particle is a particle whose mass and charge are assumed to be so small that its effect on external system is insignificant. It is often a hypothetical simplified point particle with no properties other than mass and charge. Real particles like electrons and up quarks are more complex and have additional terms in their Lagrangians. Not only can the fields form non conservative potentials, these potentials can also be velocity dependent. The Lagrangian for a charged particle with electrical charge q, interacting with an electromagnetic field, is the prototypical example of a velocity-dependent potential. The electric scalar potential ϕ = ϕ(r, t) and magnetic vector potential A = A(r, t) are defined from the electric field E = E(r, t) and magnetic field B = B(r, t) as follows: E = − ∇ ϕ − ∂ A ∂ t , B = ∇ × A . {\displaystyle \mathbf {E} =-{\boldsymbol {\nabla }}\phi -{\frac {\partial \mathbf {A} }{\partial t}},\quad \mathbf {B} ={\boldsymbol {\nabla }}\times \mathbf {A} .} The Lagrangian of a massive charged test particle in an electromagnetic field L = 1 2 m r ˙ 2 + q r ˙ ⋅ A − q ϕ , {\displaystyle L={\tfrac {1}{2}}m{\dot {\mathbf {r} }}^{2}+q\,{\dot {\mathbf {r} }}\cdot \mathbf {A} -q\phi ,} is called minimal coupling. This is a good example of when the common rule of thumb that the Lagrangian is the kinetic energy minus the potential energy is incorrect. 
Combined with Euler–Lagrange equation, it produces the Lorentz force law m r ¨ = q E + q r ˙ × B {\displaystyle m{\ddot {\mathbf {r} }}=q\mathbf {E} +q{\dot {\mathbf {r} }}\times \mathbf {B} } Under gauge transformation: A → A + ∇ f , ϕ → ϕ − f ˙ , {\displaystyle \mathbf {A} \rightarrow \mathbf {A} +{\boldsymbol {\nabla }}f,\quad \phi \rightarrow \phi -{\dot {f}},} where f(r,t) is any scalar function of space and time, the aforementioned Lagrangian transforms like: L → L + q ( r ˙ ⋅ ∇ + ∂ ∂ t ) f = L + q d f d t , {\displaystyle L\rightarrow L+q\left({\dot {\mathbf {r} }}\cdot {\boldsymbol {\nabla }}+{\frac {\partial }{\partial t}}\right)f=L+q{\frac {df}{dt}},} which still produces the same Lorentz force law. Note that the canonical momentum (conjugate to position r) is the kinetic momentum plus a contribution from the A field (known as the potential momentum): p = ∂ L ∂ r ˙ = m r ˙ + q A . {\displaystyle \mathbf {p} ={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}=m{\dot {\mathbf {r} }}+q\mathbf {A} .} This relation is also used in the minimal coupling prescription in quantum mechanics and quantum field theory. From this expression, we can see that the canonical momentum p is not gauge invariant, and therefore not a measurable physical quantity; However, if r is cyclic (i.e. Lagrangian is independent of position r), which happens if the ϕ and A fields are uniform, then this canonical momentum p given here is the conserved momentum, while the measurable physical kinetic momentum mv is not. == Other contexts and formulations == The ideas in Lagrangian mechanics have numerous applications in other areas of physics, and can adopt generalized results from the calculus of variations. === Alternative formulations of classical mechanics === A closely related formulation of classical mechanics is Hamiltonian mechanics. The Hamiltonian is defined by H = ∑ i = 1 n q ˙ i ∂ L ∂ q ˙ i − L {\displaystyle H=\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-L} and can be obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original variables. For example, given a set of generalized coordinates, the variables canonically conjugate are the generalized momenta. This doubles the number of variables, but makes differential equations first order. The Hamiltonian is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)). Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, which is not often used in practice but an efficient formulation for cyclic coordinates. === Momentum space formulation === The Euler–Lagrange equations can also be formulated in terms of the generalized momenta rather than generalized coordinates. Performing a Legendre transformation on the generalized coordinate Lagrangian L(q, dq/dt, t) obtains the generalized momenta Lagrangian L′(p, dp/dt, t) in terms of the original Lagrangian, as well the EL equations in terms of the generalized momenta. Both Lagrangians contain the same information, and either can be used to solve for the motion of the system. In practice generalized coordinates are more convenient to use and interpret than generalized momenta. === Higher derivatives of generalized coordinates === There is no mathematical reason to restrict the derivatives of generalized coordinates to first order only. 
It is possible to derive modified EL equations for a Lagrangian containing higher order derivatives, see Euler–Lagrange equation for details. However, from the physical point-of-view there is an obstacle to include time derivatives higher than the first order, which is implied by Ostrogradsky's construction of a canonical formalism for nondegenerate higher derivative Lagrangians, see Ostrogradsky instability === Optics === Lagrangian mechanics can be applied to geometrical optics, by applying variational principles to rays of light in a medium, and solving the EL equations gives the equations of the paths the light rays follow. === Relativistic formulation === Lagrangian mechanics can be formulated in special relativity and general relativity. Some features of Lagrangian mechanics are retained in the relativistic theories but difficulties quickly appear in other respects. In particular, the EL equations take the same form, and the connection between cyclic coordinates and conserved momenta still applies, however the Lagrangian must be modified and is not simply the kinetic minus the potential energy of a particle. Also, it is not straightforward to handle multiparticle systems in a manifestly covariant way, it may be possible if a particular frame of reference is singled out. === Quantum mechanics === In quantum mechanics, action and quantum-mechanical phase are related via the Planck constant, and the principle of stationary action can be understood in terms of constructive interference of wave functions. In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics. === Classical field theory === In Lagrangian mechanics, the generalized coordinates form a discrete set of variables that define the configuration of a system. In classical field theory, the physical system is not a set of discrete particles, but rather a continuous field ϕ(r, t) defined over a region of 3D space. Associated with the field is a Lagrangian density L ( ϕ , ∇ ϕ , ϕ ˙ , r , t ) {\displaystyle {\mathcal {L}}(\phi ,\nabla \phi ,{\dot {\phi }},\mathbf {r} ,t)} defined in terms of the field and its space and time derivatives at a location r and time t. Analogous to the particle case, for non-relativistic applications the Lagrangian density is also the kinetic energy density of the field, minus its potential energy density (this is not true in general, and the Lagrangian density has to be "reverse engineered"). The Lagrangian is then the volume integral of the Lagrangian density over 3D space L ( t ) = ∫ L d 3 r {\displaystyle L(t)=\int {\mathcal {L}}\,\mathrm {d} ^{3}\mathbf {r} } where d3r is a 3D differential volume element. The Lagrangian is a function of time since the Lagrangian density has implicit space dependence via the fields, and may have explicit spatial dependence, but these are removed in the integral, leaving only time in as the variable for the Lagrangian. === Noether's theorem === The action principle, and the Lagrangian formalism, are tied closely to Noether's theorem, which connects physical conserved quantities to continuous symmetries of a physical system. 
If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry. This characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity. == See also == == Footnotes == == Notes == == References == == Further reading == Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988). Cassel, Kevin (2013). Variational methods with applications in science and engineering. Cambridge: Cambridge University Press. ISBN 978-1-107-02258-4. Goldstein, Herbert, et al. Classical Mechanics. 3rd ed., Pearson, 2002. == External links == David Tong. "Cambridge Lecture Notes on Classical Dynamics". DAMTP. Retrieved 2017-06-08. Principle of least action interactive Excellent interactive explanation/webpage Joseph Louis de Lagrange - Œuvres complètes (Gallica-Math) Constrained motion and generalized coordinates, page 4
If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry. This characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity. == Further reading == Gupta, Kiran Chandra (1988). Classical mechanics of particles and rigid bodies. Wiley. Cassel, Kevin (2013). Variational methods with applications in science and engineering. Cambridge: Cambridge University Press. ISBN 978-1-107-02258-4. Goldstein, Herbert; et al. (2002). Classical Mechanics (3rd ed.). Pearson. == External links == David Tong. "Cambridge Lecture Notes on Classical Dynamics". DAMTP. Retrieved 2017-06-08. Principle of least action interactive – an interactive explanation webpage. Joseph Louis de Lagrange - Œuvres complètes (Gallica-Math). Constrained motion and generalized coordinates, page 4.
Wikipedia/Lagrangian_(physics)
In theoretical physics, relativistic Lagrangian mechanics is Lagrangian mechanics applied in the context of special relativity and general relativity. == Introduction == The relativistic Lagrangian can be derived in relativistic mechanics to be of the form: L = − m 0 c 2 γ ( r ˙ ) − V ( r , r ˙ , t ) . {\displaystyle L=-{\frac {m_{0}c^{2}}{\gamma ({\dot {\mathbf {r} }})}}-V(\mathbf {r} ,{\dot {\mathbf {r} }},t)\,.} Although, unlike non-relativistic mechanics, the relativistic Lagrangian is not expressed as difference of kinetic energy with potential energy, the relativistic Hamiltonian corresponds to total energy in a similar manner but without including rest energy. The form of the Lagrangian also makes the relativistic action functional proportional to the proper time of the path in spacetime. In covariant form, the Lagrangian is taken to be: Λ = g α β d x α d σ d x β d σ , {\displaystyle \Lambda =g_{\alpha \beta }{\frac {dx^{\alpha }}{d\sigma }}{\frac {dx^{\beta }}{d\sigma }},} where σ is an affine parameter which parametrizes the spacetime curve. == Lagrangian formulation in special relativity == Lagrangian mechanics can be formulated in special relativity as follows. Consider one particle (N particles are considered later). === Coordinate formulation === If a system is described by a Lagrangian L, the Euler–Lagrange equations d d t ∂ L ∂ r ˙ = ∂ L ∂ r {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}={\frac {\partial L}{\partial \mathbf {r} }}} retain their form in special relativity, provided the Lagrangian generates equations of motion consistent with special relativity. Here r = (x, y, z) is the position vector of the particle as measured in some lab frame where Cartesian coordinates are used for simplicity, and v = r ˙ = d r d t = ( d x d t , d y d t , d z d t ) {\displaystyle \mathbf {v} ={\dot {\mathbf {r} }}={\frac {d\mathbf {r} }{dt}}=\left({\frac {dx}{dt}},{\frac {dy}{dt}},{\frac {dz}{dt}}\right)} is the coordinate velocity, the derivative of position r with respect to coordinate time t. (Throughout this article, overdots are with respect to coordinate time, not proper time). It is possible to transform the position coordinates to generalized coordinates exactly as in non-relativistic mechanics, r = r(q, t). Taking the total differential of r obtains the transformation of velocity v to the generalized coordinates, generalized velocities, and coordinate time v = ∑ j = 1 n ∂ r ∂ q j q ˙ j + ∂ r ∂ t , q ˙ j = d q j d t {\displaystyle \mathbf {v} =\sum _{j=1}^{n}{\frac {\partial \mathbf {r} }{\partial q_{j}}}{\dot {q}}_{j}+{\frac {\partial \mathbf {r} }{\partial t}}\,,\quad {\dot {q}}_{j}={\frac {dq_{j}}{dt}}} remains the same. However, the energy of a moving particle is different from non-relativistic mechanics. It is instructive to look at the total relativistic energy of a free test particle. An observer in the lab frame defines events by coordinates r and coordinate time t, and measures the particle to have coordinate velocity v = dr/dt. By contrast, an observer moving with the particle will record a different time, this is the proper time, τ. 
Expanding in a power series, the first term is the particle's rest energy, plus its non-relativistic kinetic energy, followed by higher order relativistic corrections; E = m 0 c 2 d t d τ = m 0 c 2 1 − r ˙ 2 ( t ) c 2 = m 0 c 2 + 1 2 m 0 r ˙ 2 ( t ) + 3 8 m 0 r ˙ 4 ( t ) c 2 + ⋯ , {\displaystyle E=m_{0}c^{2}{\frac {dt}{d\tau }}={\frac {m_{0}c^{2}}{\sqrt {1-{\frac {{\dot {\mathbf {r} }}^{2}(t)}{c^{2}}}}}}=m_{0}c^{2}+{1 \over 2}m_{0}{\dot {\mathbf {r} }}^{2}(t)+{3 \over 8}m_{0}{\frac {{\dot {\mathbf {r} }}^{4}(t)}{c^{2}}}+\cdots \,,} where c is the speed of light in vacuum. The differentials in t and τ are related by the Lorentz factor γ, d t = γ ( r ˙ ) d τ , γ ( r ˙ ) = 1 1 − r ˙ 2 c 2 , r ˙ = d r d t , r ˙ 2 ( t ) = r ˙ ( t ) ⋅ r ˙ ( t ) . {\displaystyle dt=\gamma ({\dot {\mathbf {r} }})d\tau \,,\quad \gamma ({\dot {\mathbf {r} }})={\frac {1}{\sqrt {1-{\frac {{\dot {\mathbf {r} }}^{2}}{c^{2}}}}}}\,,\quad {\dot {\mathbf {r} }}={\frac {d\mathbf {r} }{dt}}\,,\quad {\dot {\mathbf {r} }}^{2}(t)={\dot {\mathbf {r} }}(t)\cdot {\dot {\mathbf {r} }}(t)\,.} where · is the dot product. The relativistic kinetic energy for an uncharged particle of rest mass m0 is T = ( γ ( r ˙ ) − 1 ) m 0 c 2 {\displaystyle T=(\gamma ({\dot {\mathbf {r} }})-1)m_{0}c^{2}} and we may naïvely guess the relativistic Lagrangian for a particle to be this relativistic kinetic energy minus the potential energy. However, even for a free particle for which V = 0, this is wrong. Following the non-relativistic approach, we expect the derivative of this seemingly correct Lagrangian with respect to the velocity to be the relativistic momentum, which it is not. The definition of a generalized momentum can be retained, and the advantageous connection between cyclic coordinates and conserved quantities will continue to apply. The momenta can be used to "reverse-engineer" the Lagrangian. For the case of the free massive particle, in Cartesian coordinates, the x component of relativistic momentum is p x = ∂ L ∂ x ˙ = γ ( r ˙ ) m 0 x ˙ , {\displaystyle p_{x}={\frac {\partial L}{\partial {\dot {x}}}}=\gamma ({\dot {\mathbf {r} }})m_{0}{\dot {x}}\,,\quad } and similarly for the y and z components. Integrating this equation with respect to dx/dt gives L = − m 0 c 2 γ ( r ˙ ) + X ( y ˙ , z ˙ ) , {\displaystyle L=-{\frac {m_{0}c^{2}}{\gamma ({\dot {\mathbf {r} }})}}+X({\dot {y}},{\dot {z}})\,,} where X is an arbitrary function of dy/dt and dz/dt from the integration. Integrating py and pz obtains similarly L = − m 0 c 2 γ ( r ˙ ) + Y ( x ˙ , z ˙ ) , L = − m 0 c 2 γ ( r ˙ ) + Z ( x ˙ , y ˙ ) , {\displaystyle L=-{\frac {m_{0}c^{2}}{\gamma ({\dot {\mathbf {r} }})}}+Y({\dot {x}},{\dot {z}})\,,\quad L=-{\frac {m_{0}c^{2}}{\gamma ({\dot {\mathbf {r} }})}}+Z({\dot {x}},{\dot {y}})\,,} where Y and Z are arbitrary functions of their indicated variables. Since the functions X, Y, Z are arbitrary, without loss of generality we can conclude the common solution to these integrals, a possible Lagrangian that will correctly generate all the components of relativistic momentum, is L = − m 0 c 2 γ ( r ˙ ) , {\displaystyle L=-{\frac {m_{0}c^{2}}{\gamma ({\dot {\mathbf {r} }})}}\,,} where X = Y = Z = 0. 
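A quick SymPy check (not from the article; one spatial dimension is used for brevity) that this Lagrangian does reproduce the relativistic momentum, together with a preview of its low-velocity expansion:

```python
import sympy as sp

m0, c, v = sp.symbols('m0 c v', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

L = -m0 * c**2 / gamma                 # = -m0*c**2*sqrt(1 - v**2/c**2)
p = sp.diff(L, v)                      # canonical momentum conjugate to x

print(sp.simplify(p - gamma * m0 * v))   # -> 0: p is the relativistic momentum
print(sp.series(L, v, 0, 4))             # rest energy plus the Newtonian kinetic energy m0*v**2/2
```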
Alternatively, since we wish to build a Lagrangian out of relativistically invariant quantities, take the action as proportional to the integral of the Lorentz invariant line element in spacetime, the length of the particle's world line between proper times τ1 and τ2, S = ε ∫ τ 1 τ 2 d τ = ε ∫ t 1 t 2 d t γ ( r ˙ ) , L = ε γ ( r ˙ ) = ε 1 − r ˙ 2 c 2 , {\displaystyle S=\varepsilon \int _{\tau _{1}}^{\tau _{2}}d\tau =\varepsilon \int _{t_{1}}^{t_{2}}{\frac {dt}{\gamma ({\dot {\mathbf {r} }})}}\,,\quad L={\frac {\varepsilon }{\gamma ({\dot {\mathbf {r} }})}}=\varepsilon {\sqrt {1-{\frac {{\dot {\mathbf {r} }}^{2}}{c^{2}}}}}\,,} where ε is a constant to be found, and after converting the proper time of the particle to the coordinate time as measured in the lab frame, the integrand is the Lagrangian by definition. The momentum must be the relativistic momentum, p = ∂ L ∂ r ˙ = ( − ε c 2 ) γ ( r ˙ ) r ˙ = m 0 γ ( r ˙ ) r ˙ , {\displaystyle \mathbf {p} ={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}=\left({\frac {-\varepsilon }{c^{2}}}\right)\gamma ({\dot {\mathbf {r} }}){\dot {\mathbf {r} }}=m_{0}\gamma ({\dot {\mathbf {r} }}){\dot {\mathbf {r} }}\,,} which requires ε = −m0c2, in agreement with the previously obtained Lagrangian. Either way, the position vector r is absent from the Lagrangian and therefore cyclic, so the Euler–Lagrange equations are consistent with the constancy of relativistic momentum, d d t ∂ L ∂ r ˙ = ∂ L ∂ r ⇒ d d t ( m 0 γ ( r ˙ ) r ˙ ) = 0 , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}={\frac {\partial L}{\partial \mathbf {r} }}\quad \Rightarrow \quad {\frac {d}{dt}}(m_{0}\gamma ({\dot {\mathbf {r} }}){\dot {\mathbf {r} }})=0\,,} which must be the case for a free particle. Also, expanding the relativistic free particle Lagrangian in a power series to first order in (v/c)2, L = − m 0 c 2 [ 1 + 1 2 ( − r ˙ 2 c 2 ) + ⋯ ] ≈ − m 0 c 2 + m 0 2 r ˙ 2 , {\displaystyle L=-m_{0}c^{2}\left[1+{\frac {1}{2}}\left(-{\frac {{\dot {\mathbf {r} }}^{2}}{c^{2}}}\right)+\cdots \right]\approx -m_{0}c^{2}+{\frac {m_{0}}{2}}{\dot {\mathbf {r} }}^{2}\,,} in the non-relativistic limit when v is small, the higher order terms not shown are negligible, and the Lagrangian is the non-relativistic kinetic energy as it should be. The remaining term is the negative of the particle's rest energy, a constant term which can be ignored in the Lagrangian. For the case of an interacting particle subject to a potential V, which may be non-conservative, it is possible for a number of interesting cases to simply subtract this potential from the free particle Lagrangian, L = − m 0 c 2 γ ( r ˙ ) − V ( r , r ˙ , t ) . {\displaystyle L=-{\frac {m_{0}c^{2}}{\gamma ({\dot {\mathbf {r} }})}}-V(\mathbf {r} ,{\dot {\mathbf {r} }},t)\,.} and the Euler–Lagrange equations lead to the relativistic version of Newton's second law. The derivative of relativistic momentum with respect to the time coordinate is equal to the force acting on the particle: F = d d t ∂ V ∂ r ˙ − ∂ V ∂ r = d d t ( m 0 γ ( r ˙ ) r ˙ ) , {\displaystyle \mathbf {F} ={\frac {d}{dt}}{\frac {\partial V}{\partial {\dot {\mathbf {r} }}}}-{\frac {\partial V}{\partial \mathbf {r} }}={\frac {d}{dt}}(m_{0}\gamma ({\dot {\mathbf {r} }}){\dot {\mathbf {r} }})\,,} assuming the potential V can generate the corresponding force F in this way. If the potential cannot obtain the force as shown, then the Lagrangian would need modification to obtain the correct equations of motion. 
Although this has been shown by taking Cartesian coordinates, it follows due to invariance of Euler Lagrange equations, that it is also satisfied in any arbitrary co-ordinate system as it physically corresponds to action minimization being independent of the co-ordinate system used to describe it. In a similar manner, several properties in Lagrangian mechanics are preserved whenever they are also independent of the specific form of the Lagrangian or the laws of motion governing the particles. For example, it is also true that if the Lagrangian is explicitly independent of time and the potential V(r) independent of velocities, then the total relativistic energy E = ∂ L ∂ r ˙ ⋅ r ˙ − L = γ ( r ˙ ) m 0 c 2 + V ( r ) {\displaystyle E={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}\cdot {\dot {\mathbf {r} }}-L=\gamma ({\dot {\mathbf {r} }})m_{0}c^{2}+V(\mathbf {r} )} is conserved, although the identification is less obvious since the first term is the relativistic energy of the particle which includes the rest mass of the particle, not merely the relativistic kinetic energy. Also, the argument for homogeneous functions does not apply to relativistic Lagrangians. The extension to N particles is straightforward, the relativistic Lagrangian is just a sum of the "free particle" terms, minus the potential energy of their interaction; L = − c 2 ∑ k = 1 N m 0 k γ ( r ˙ k ) − V ( r 1 , r 2 , … , r ˙ 1 , r ˙ 2 , … , t ) , {\displaystyle L=-c^{2}\sum _{k=1}^{N}{\frac {m_{0k}}{\gamma ({\dot {\mathbf {r} }}_{k})}}-V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,{\dot {\mathbf {r} }}_{1},{\dot {\mathbf {r} }}_{2},\ldots ,t)\,,} where all the positions and velocities are measured in the same lab frame, including the time. The advantage of this coordinate formulation is that it can be applied to a variety of systems, including multiparticle systems. The disadvantage is that some lab frame has been singled out as a preferred frame, and none of the equations are manifestly covariant (in other words, they do not take the same form in all frames of reference). For an observer moving relative to the lab frame, everything must be recalculated; the position r, the momentum p, total energy E, potential energy, etc. In particular, if this other observer moves with constant relative velocity then Lorentz transformations must be used. However, the action will remain the same since it is Lorentz invariant by construction. A seemingly different but completely equivalent form of the Lagrangian for a free massive particle, which will readily extend to general relativity as shown below, can be obtained by inserting d τ = 1 c η α β d x α d t d x β d t d t , {\displaystyle d\tau ={\frac {1}{c}}{\sqrt {\eta _{\alpha \beta }{\frac {dx^{\alpha }}{dt}}{\frac {dx^{\beta }}{dt}}}}dt\,,} into the Lorentz invariant action so that S = ε ∫ t 1 t 2 1 c η α β d x α d t d x β d t d t ⇒ L = ε c η α β d x α d t d x β d t {\displaystyle S=\varepsilon \int _{t_{1}}^{t_{2}}{\frac {1}{c}}{\sqrt {\eta _{\alpha \beta }{\frac {dx^{\alpha }}{dt}}{\frac {dx^{\beta }}{dt}}}}dt\quad \Rightarrow \quad L={\frac {\varepsilon }{c}}{\sqrt {\eta _{\alpha \beta }{\frac {dx^{\alpha }}{dt}}{\frac {dx^{\beta }}{dt}}}}} where ε = −m0c2 is retained for simplicity. Although the line element and action are Lorentz invariant, the Lagrangian is not, because it has explicit dependence on the lab coordinate time. Still, the equations of motion follow from Hamilton's principle δ S = 0 . 
{\displaystyle \delta S=0\,.} Since the action is proportional to the length of the particle's worldline (in other words its trajectory in spacetime), this route illustrates that finding the stationary action is asking to find the trajectory of shortest or largest length in spacetime. Correspondingly, the equations of motion of the particle are akin to the equations describing the trajectories of shortest or largest length in spacetime, geodesics. For the case of an interacting particle in a potential V, the Lagrangian is still L = ε c η α β d x α d t d x β d t − V , {\displaystyle L={\frac {\varepsilon }{c}}{\sqrt {\eta _{\alpha \beta }{\frac {dx^{\alpha }}{dt}}{\frac {dx^{\beta }}{dt}}}}-V,} which can also extend to many particles as shown above, each particle has its own set of position coordinates to define its position. === Covariant formulation === In the covariant formulation, time is placed on equal footing with space, so the coordinate time as measured in some frame is part of the configuration space alongside the spatial coordinates (and other generalized coordinates). For a particle, either massless or massive, the Lorentz invariant action is (abusing notation) S = ∫ σ 1 σ 2 Λ ( x ν ( σ ) , u ν ( σ ) , σ ) d σ , {\displaystyle S=\int _{\sigma _{1}}^{\sigma _{2}}\Lambda (x^{\nu }(\sigma ),u^{\nu }(\sigma ),\sigma )d\sigma ,} where lower and upper indices are used according to covariance and contravariance of vectors, σ is an affine parameter, and uμ = dxμ/dσ is the four-velocity of the particle. For massive particles, σ can be the arc length s, or proper time τ, along the particle's world line, d s 2 = c 2 d τ 2 = g α β d x α d x β . {\displaystyle ds^{2}=c^{2}d\tau ^{2}=g_{\alpha \beta }dx^{\alpha }dx^{\beta }.} For massless particles, it cannot because the proper time of a massless particle is always zero; g α β d x α d x β = 0 . {\displaystyle g_{\alpha \beta }dx^{\alpha }dx^{\beta }=0\,.} For a free particle, the Lagrangian has the form Λ = g α β d x α d σ d x β d σ {\displaystyle \Lambda =g_{\alpha \beta }{\frac {dx^{\alpha }}{d\sigma }}{\frac {dx^{\beta }}{d\sigma }}} where the irrelevant factor of 1/2 is allowed to be scaled away by the scaling property of Lagrangians. No inclusion of mass is necessary since this also applies to massless particles. The Euler–Lagrange equations in the spacetime coordinates are d d σ ∂ Λ ∂ u α − ∂ Λ ∂ x α = d 2 x α d σ 2 + Γ β γ α d x β d σ d x γ d σ = 0 , {\displaystyle {\frac {d}{d\sigma }}{\frac {\partial \Lambda }{\partial u^{\alpha }}}-{\frac {\partial \Lambda }{\partial x^{\alpha }}}={\frac {d^{2}x^{\alpha }}{d\sigma ^{2}}}+\Gamma _{\beta \gamma }^{\alpha }{\frac {dx^{\beta }}{d\sigma }}{\frac {dx^{\gamma }}{d\sigma }}=0\,,} which is the geodesic equation for affinely parameterized geodesics in spacetime. In other words, the free particle follows geodesics. Geodesics for massless particles are called "null geodesics", since they lie in a "light cone" or "null cone" of spacetime (the null comes about because their inner product via the metric is equal to 0), massive particles follow "timelike geodesics", and hypothetical particles that travel faster than light known as tachyons follow "spacelike geodesics". This manifestly covariant formulation does not extend to an N-particle system, since then the affine parameter of any one particle cannot be defined as a common parameter for all the other particles. 
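As a small illustration of the geodesic equation just quoted, the SymPy sketch below (not from the article; it uses flat 2+1-dimensional spacetime written in polar spatial coordinates purely as a toy metric) computes the nonzero Christoffel symbols directly from the metric components:

```python
import sympy as sp

c = sp.symbols('c', positive=True)
t, r, phi = sp.symbols('t r phi')
x = [t, r, phi]
g = sp.diag(c**2, -1, -r**2)          # ds^2 = c^2 dt^2 - dr^2 - r^2 dphi^2 (flat spacetime)
ginv = g.inv()
n = 3

def christoffel(a, b, cidx):
    """Gamma^a_{b cidx} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})."""
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[cidx])
                                         + sp.diff(g[d, cidx], x[b])
                                         - sp.diff(g[b, cidx], x[d])) for d in range(n)) / 2)

for a in range(n):
    for b in range(n):
        for k in range(b, n):         # symmetric in the lower indices
            G = christoffel(a, b, k)
            if G != 0:
                print(f"Gamma^{x[a]}_{x[b]}{x[k]} =", G)
```

For this flat metric the only nonzero symbols are Γ^r_φφ = −r and Γ^φ_rφ = 1/r, so the geodesic equations reduce to r̈ − rφ̇² = 0 and φ̈ + 2ṙφ̇/r = 0, i.e. straight-line motion expressed in polar coordinates; for a curved metric the same construction produces the gravitational terms discussed in the general-relativistic formulation below.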
== Examples in special relativity == === Special relativistic 1d free particle === For a 1d relativistic free particle, the Lagrangian is L = − m 0 c 2 1 − x ˙ 2 ( t ) c 2 . {\displaystyle L=-m_{0}c^{2}{\sqrt {1-{\frac {{\dot {x}}^{2}(t)}{c^{2}}}}}\,.} This results in the following equation of motion: m 0 x ¨ 1 ( 1 − x ˙ 2 c 2 ) 3 2 = 0 . {\displaystyle m_{0}{\ddot {x}}\,{\frac {1}{\left(1-{\frac {{\dot {x}}^{2}}{c^{2}}}\right)^{\frac {3}{2}}}}=0\,.} === Special relativistic 1d harmonic oscillator === For a 1d relativistic simple harmonic oscillator, the Lagrangian is L = − m c 2 1 − x ˙ 2 ( t ) c 2 − k 2 x 2 . {\displaystyle L=-mc^{2}{\sqrt {1-{\frac {{\dot {x}}^{2}(t)}{c^{2}}}}}-{\frac {k}{2}}x^{2}\,.} where k is the spring constant. === Special relativistic constant force === For a particle under a constant force, the Lagrangian is L = − m c 2 1 − x ˙ 2 ( t ) c 2 − m g x , {\displaystyle L=-mc^{2}{\sqrt {1-{\frac {{\dot {x}}^{2}(t)}{c^{2}}}}}-mgx\,,} where g is the force per unit mass. This results in the following equation of motion: 1 ( 1 − x ˙ 2 c 2 ) 3 2 x ¨ = − g . {\displaystyle {\frac {1}{(1-{\frac {{\dot {x}}^{2}}{c^{2}}})^{\frac {3}{2}}}}{\ddot {x}}=-g\,.} Which, given initial conditions of x ( t = 0 ) = x 0 x ˙ ( t = 0 ) = v 0 {\displaystyle {\begin{aligned}x(t=0)&=x_{0}\\{\dot {x}}(t=0)&=v_{0}\end{aligned}}} results in the position of the particle as a function of time being x ( t ) = x 0 + c 2 g [ 1 1 − v 0 2 c 2 − 1 1 − v 0 2 c 2 + g 2 t 2 c 2 − 2 v 0 g t c 2 1 − v 0 2 c 2 ] . {\displaystyle x(t)=x_{0}+{\frac {c^{2}}{g}}\left[{\frac {1}{\sqrt {1-{\frac {v_{0}^{2}}{c^{2}}}}}}-{\sqrt {{\frac {1}{1-{\frac {v_{0}^{2}}{c^{2}}}}}+{\frac {g^{2}t^{2}}{c^{2}}}-{\frac {2v_{0}gt}{c^{2}{\sqrt {1-{\frac {v_{0}^{2}}{c^{2}}}}}}}}}\right]\,.} === Special relativistic test particle in an electromagnetic field === In special relativity, the Lagrangian of a massive charged test particle in an electromagnetic field modifies to L = − m c 2 1 − v 2 c 2 − q ϕ + q r ˙ ⋅ A . {\displaystyle L=-mc^{2}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}-q\phi +q{\dot {\mathbf {r} }}\cdot \mathbf {A} \,.} The Lagrangian equations in r lead to the Lorentz force law, in terms of the relativistic momentum d d t ( m r ˙ 1 − v 2 c 2 ) = q E + q r ˙ × B . {\displaystyle {\frac {d}{dt}}\left({\frac {m{\dot {\mathbf {r} }}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\right)=q\mathbf {E} +q{\dot {\mathbf {r} }}\times \mathbf {B} \,.} In the language of four-vectors and tensor index notation, the Lagrangian takes the form L ( τ ) = 1 2 m u μ ( τ ) u μ ( τ ) + q u μ ( τ ) A μ ( x ) , {\displaystyle L(\tau )={\frac {1}{2}}mu^{\mu }(\tau )u_{\mu }(\tau )+qu^{\mu }(\tau )A_{\mu }(x),} where uμ = dxμ/dτ is the four-velocity of the test particle, and Aμ the electromagnetic four-potential. The Euler–Lagrange equations are (notice the total derivative with respect to proper time instead of coordinate time) ∂ L ∂ x ν − d d τ ∂ L ∂ u ν = 0 {\displaystyle {\frac {\partial L}{\partial x^{\nu }}}-{\frac {d}{d\tau }}{\frac {\partial L}{\partial u^{\nu }}}=0} obtains q u μ ∂ A μ ∂ x ν = d d τ ( m u ν + q A ν ) . 
{\displaystyle qu^{\mu }{\frac {\partial A_{\mu }}{\partial x^{\nu }}}={\frac {d}{d\tau }}(mu_{\nu }+qA_{\nu })\,.} Under the total derivative with respect to proper time, the first term is the relativistic momentum, the second term is d A ν d τ = ∂ A ν ∂ x μ d x μ d τ = ∂ A ν ∂ x μ u μ , {\displaystyle {\frac {dA_{\nu }}{d\tau }}={\frac {\partial A_{\nu }}{\partial x^{\mu }}}{\frac {dx^{\mu }}{d\tau }}={\frac {\partial A_{\nu }}{\partial x^{\mu }}}u^{\mu }\,,} then rearranging, and using the definition of the antisymmetric electromagnetic tensor, gives the covariant form of the Lorentz force law in the more familiar form, d d τ ( m u ν ) = q u μ F ν μ , F ν μ = ∂ A μ ∂ x ν − ∂ A ν ∂ x μ . {\displaystyle {\frac {d}{d\tau }}(mu_{\nu })=qu^{\mu }F_{\nu \mu }\,,\quad F_{\nu \mu }={\frac {\partial A_{\mu }}{\partial x^{\nu }}}-{\frac {\partial A_{\nu }}{\partial x^{\mu }}}\,.} == Lagrangian formulation in general relativity == The Lagrangian is that of a single particle plus an interaction term LI L = − m c 2 d τ d t + L I . {\displaystyle L=-mc^{2}{\frac {d\tau }{dt}}+L_{\text{I}}\,.} Varying this with respect to the position of the particle xα as a function of time t gives δ L = m d t 2 d τ δ ( g μ ν d x μ d t d x ν d t ) + δ L I = m d t 2 d τ ( g μ ν , α δ x α d x μ d t d x ν d t + 2 g α ν d δ x α d t d x ν d t ) + ∂ L I ∂ x α δ x α + ∂ L I ∂ d x α d t d δ x α d t = 1 2 m g μ ν , α δ x α d x μ d τ d x ν d t − d d t ( m g α ν d x ν d τ ) δ x α + ∂ L I ∂ x α δ x α − d d t ( ∂ L I ∂ d x α d t ) δ x α + d ( ⋯ ) d t . {\displaystyle {\begin{aligned}\delta L&=m{\frac {dt}{2d\tau }}\delta \left(g_{\mu \nu }{\frac {dx^{\mu }}{dt}}{\frac {dx^{\nu }}{dt}}\right)+\delta L_{\text{I}}\\&=m{\frac {dt}{2d\tau }}\left(g_{\mu \nu ,\alpha }\delta x^{\alpha }{\frac {dx^{\mu }}{dt}}{\frac {dx^{\nu }}{dt}}+2g_{\alpha \nu }{\frac {d\delta x^{\alpha }}{dt}}{\frac {dx^{\nu }}{dt}}\right)+{\frac {\partial L_{\text{I}}}{\partial x^{\alpha }}}\delta x^{\alpha }+{\frac {\partial L_{\text{I}}}{\partial {\frac {dx^{\alpha }}{dt}}}}{\frac {d\delta x^{\alpha }}{dt}}\\&={\frac {1}{2}}mg_{\mu \nu ,\alpha }\delta x^{\alpha }{\frac {dx^{\mu }}{d\tau }}{\frac {dx^{\nu }}{dt}}-{\frac {d}{dt}}\left(mg_{\alpha \nu }{\frac {dx^{\nu }}{d\tau }}\right)\delta x^{\alpha }+{\frac {\partial L_{\text{I}}}{\partial x^{\alpha }}}\delta x^{\alpha }-{\frac {d}{dt}}\left({\frac {\partial L_{\text{I}}}{\partial {\frac {dx^{\alpha }}{dt}}}}\right)\delta x^{\alpha }+{\frac {d(\cdots )}{dt}}\,.\end{aligned}}} This gives the equation of motion 0 = 1 2 m g μ ν , α d x μ d τ d x ν d t − d d t ( m g α ν d x ν d τ ) + f α {\displaystyle 0={\frac {1}{2}}mg_{\mu \nu ,\alpha }{\frac {dx^{\mu }}{d\tau }}{\frac {dx^{\nu }}{dt}}-{\frac {d}{dt}}\left(mg_{\alpha \nu }{\frac {dx^{\nu }}{d\tau }}\right)+f_{\alpha }} where f α = ∂ L I ∂ x α − d d t ( ∂ L I ∂ d x α d t ) {\displaystyle f_{\alpha }={\frac {\partial L_{\text{I}}}{\partial x^{\alpha }}}-{\frac {d}{dt}}\left({\frac {\partial L_{\text{I}}}{\partial {\frac {dx^{\alpha }}{dt}}}}\right)} is the non-gravitational force on the particle. (For m to be independent of time, we must have fαdxα/dt = 0.) Rearranging gets the force equation d d t ( m d x ν d τ ) = − m Γ μ σ ν d x μ d τ d x σ d t + g ν α f α {\displaystyle {\frac {d}{dt}}\left(m{\frac {dx^{\nu }}{d\tau }}\right)=-m\Gamma _{\mu \sigma }^{\nu }{\frac {dx^{\mu }}{d\tau }}{\frac {dx^{\sigma }}{dt}}+g^{\nu \alpha }f_{\alpha }} where Γ is the Christoffel symbol, which describes the gravitational field. 
If we let p ν = m d x ν d τ {\displaystyle p^{\nu }=m{\frac {dx^{\nu }}{d\tau }}} be the (kinetic) linear momentum for a particle with mass, then d p ν d t = − Γ μ σ ν p μ d x σ d t + g ν α f α {\displaystyle {\frac {dp^{\nu }}{dt}}=-\Gamma _{\mu \sigma }^{\nu }p^{\mu }{\frac {dx^{\sigma }}{dt}}+g^{\nu \alpha }f_{\alpha }} and d x ν d t = p ν p 0 {\displaystyle {\frac {dx^{\nu }}{dt}}={\frac {p^{\nu }}{p^{0}}}} hold even for a massless particle. == Examples in general relativity == === General relativistic test particle in an electromagnetic field === In general relativity, the first term generalizes (includes) both the classical kinetic energy and the interaction with the gravitational field. For a charged particle in an electromagnetic field, the Lagrangian is given by L ( x , x ˙ ) = − m c 2 − c − 2 g μ ν ( x ( τ ) ) d x μ ( τ ) d τ d x ν ( τ ) d τ + q d x μ ( τ ) d τ A μ ( x ( τ ) ) . {\displaystyle L(x,{\dot {x}})=-mc^{2}{\sqrt {-c^{-2}g_{\mu \nu }(x(\tau )){\frac {dx^{\mu }(\tau )}{d\tau }}{\frac {dx^{\nu }(\tau )}{d\tau }}}}+q{\frac {dx^{\mu }(\tau )}{d\tau }}A_{\mu }(x(\tau ))\,.} If the four spacetime coordinates xμ are given in arbitrary units (i.e. unitless), then gμν is the rank 2 symmetric metric tensor, which is also the gravitational potential. Also, Aμ is the electromagnetic 4-vector potential. There exists an equivalent formulation of the relativistic Lagrangian, which has two advantages: it allows for a generalization to massless particles and tachyons; it is based on an energy functional instead of a length functional, such that it does not contain a square root. In this alternative formulation, the Lagrangian is given by L ( x , x ˙ , e ) = 1 2 e ( λ ) g μ ν ( x ( λ ) ) d x μ ( λ ) d λ d x ν ( λ ) d λ − e ( λ ) m 2 c 2 2 + q A μ ( x ( λ ) ) d x μ ( λ ) d λ {\displaystyle L(x,{\dot {x}},e)={\frac {1}{2\,e(\lambda )}}g_{\mu \nu }(x(\lambda ))\,{\frac {dx^{\mu }(\lambda )}{d\lambda }}{\frac {dx^{\nu }(\lambda )}{d\lambda }}-{\frac {e(\lambda )\,m^{2}\,c^{2}}{2}}+q\,A_{\mu }(x(\lambda )){\frac {dx^{\mu }(\lambda )}{d\lambda }}} , where λ {\displaystyle \lambda } is an arbitrary affine parameter and e {\displaystyle e} is an auxiliary parameter that can be viewed as an einbein field along the worldline. In the original Lagrangian with the square root the energy-momentum relation appears as a primary constraint that is also a first class constraint. In this reformulation this is no longer the case. Instead, the energy-momentum relation appears as the equation of motion for the auxiliary field e {\displaystyle e} . Therefore, the constraint is now a secondary constraint that is still a first class constraint, reflecting the invariance of the action under reparameterization of the affine parameter λ {\displaystyle \lambda } . After the equation of motion has been derived, one must gauge fix the auxiliary field e {\displaystyle e} . The standard gauge choice is as follows: If m 2 > 0 {\displaystyle m^{2}>0} , one fixes e = | m | − 1 {\displaystyle e=|m|^{-1}} . This choice automatically fixes λ = τ {\displaystyle \lambda =\tau } , i.e. the affine parameter is fixed to be the proper time. If m 2 < 0 {\displaystyle m^{2}<0} , one fixes e = | m c | − 1 {\displaystyle e=|m\,c|^{-1}} . This choice automatically fixes λ = s {\displaystyle \lambda =s} , i.e. the affine parameter is fixed to be the proper length. If m = 0 {\displaystyle m=0} , there is no choice that fixes the affine parameter λ {\displaystyle \lambda } to a physical parameter. 
Consequently, there is some freedom in fixing the auxiliary field. The two common choices are: Fix e = 1 {\displaystyle e=1} . In this case, e {\displaystyle e} does not carry a dependence on the affine parameter λ {\displaystyle \lambda } , but the affine parameter is measured in units of time per unit of mass, i.e. [ λ ] = T / M {\displaystyle [\lambda ]=T/M} . Fix e = c 2 E ( λ ) {\displaystyle e=c^{2}E(\lambda )} , where E {\displaystyle E} is the energy of the particle. In this case, the affine parameter is measured in units of time, i.e. [ λ ] = T {\displaystyle [\lambda ]=T} , but e {\displaystyle e} retains a dependence on the affine parameter λ {\displaystyle \lambda } . == See also == Relativistic mechanics Fundamental lemma of the calculus of variations Canonical coordinates Functional derivative Generalized coordinates Hamiltonian mechanics Hamiltonian optics Lagrangian analysis (applications of Lagrangian mechanics) Lagrangian point Lagrangian system Non-autonomous mechanics Restricted three-body problem Plateau's problem == Footnotes == == Citations == == References == Penrose, Roger (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4. Landau, L. D.; Lifshitz, E. M. (15 January 1976). Mechanics (3rd ed.). Butterworth Heinemann. p. 134. ISBN 978-0-7506-2896-9. Landau, Lev; Lifshitz, Evgeny (1975). The Classical Theory of Fields. Elsevier Ltd. ISBN 978-0-7506-2768-9. Hand, L. N.; Finch, J. D. (13 November 1998). Analytical Mechanics (2nd ed.). Cambridge University Press. p. 23. ISBN 978-0-521-57572-0. Louis N. Hand; Janet D. Finch (1998). Analytical mechanics. Cambridge University Press. pp. 140–141. ISBN 0-521-57572-9. Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). San Francisco, CA: Addison Wesley. pp. 352–353. ISBN 0-201-02918-9. Goldstein, Herbert; Poole, Charles P. Jr.; Safko, John L. (2002). Classical Mechanics (3rd ed.). San Francisco, CA: Addison Wesley. pp. 347–349. ISBN 0-201-65702-3. Lanczos, Cornelius (1986). "II §5 Auxiliary conditions: the Lagrangian λ-method". The variational principles of mechanics (Reprint of University of Toronto 1970 4th ed.). Courier Dover. p. 43. ISBN 0-486-65067-7. Feynman, R. P.; Leighton, R. B.; Sands, M. (1964). The Feynman Lectures on Physics. Vol. 2. Addison Wesley. ISBN 0-201-02117-X. Foster, J; Nightingale, J.D. (1995). A Short Course in General Relativity (2nd ed.). Springer. ISBN 0-03-063366-4. Hobson, M. P.; Efstathiou, G. P.; Lasenby, A. N. (2006). General Relativity: An Introduction for Physicists. Cambridge University Press. pp. 79–80. ISBN 978-0-521-82951-9.
Wikipedia/Relativistic_Lagrangian_mechanics
In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). Mathematically, homogeneity has the connotation of invariance: all terms of an equation have the same degree, so the equation keeps its form when its components are rescaled, for example by a common multiplicative factor. In statistics, homogeneity similarly denotes the state of having identical cumulative distribution functions or values. == Context == The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as "constituents" of the material, but may be defined as a homogeneous material when assigned a function. For example, the asphalt that paves our roads is a composite material consisting of asphalt binder and mineral aggregate, laid down in layers and compacted, yet it is treated as a single homogeneous surface. However, homogeneity of materials does not necessarily mean isotropy. In the previous example, a composite material may not be isotropic. In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the scale of our everyday world, a pane of glass or a sheet of metal is simply described as glass or stainless steel. In other words, these are each described as a homogeneous material. A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of the same units on both sides; homogeneity in space implies conservation of momentum; and homogeneity in time implies conservation of energy. === Homogeneous alloy === In the context of composite metals, the relevant example is an alloy: a blend of a metal with one or more other metallic or nonmetallic materials. The components of an alloy do not combine chemically but, rather, are very finely mixed. An alloy might be homogeneous or might contain small particles of components that can be viewed with a microscope. Brass is an example of an alloy, being a homogeneous mixture of copper and zinc. Another example is steel, which is an alloy of iron with carbon and possibly other metals. The purpose of alloying is to produce desired properties in a metal that naturally lacks them. Brass, for example, is harder than copper and has a more gold-like color. Steel is harder than iron and can even be made rustproof (stainless steel). === Homogeneous cosmology === Homogeneity, in another context, plays a role in cosmology. From the perspective of 19th-century cosmology (and before), the universe was infinite, unchanging, homogeneous, and therefore filled with stars. However, German astronomer Heinrich Olbers asserted that if this were true, then the entire night sky would be filled with light and as bright as day; this is known as Olbers' paradox. Olbers presented a technical paper in 1826 that attempted to answer this conundrum. The premise of the paradox is faulty: unknown in Olbers' time, the universe is not in fact infinite, static, and homogeneous. The Big Bang cosmology replaced this model with an expanding, finite, and inhomogeneous universe. 
However, modern astronomers supply reasonable explanations to answer this question. One of at least several explanations is that distant stars and galaxies are red shifted, which weakens their apparent light and makes the night sky dark. However, the weakening is not sufficient to actually explain Olbers' paradox. Many cosmologists think that the fact that the Universe is finite in time, that is that the Universe has not been around forever, is the solution to the paradox. The fact that the night sky is dark is thus an indication for the Big Bang. == Translation invariance == By translation invariance, one means independence of (absolute) position, especially when referring to a law of physics, or to the evolution of a physical system. Fundamental laws of physics should not (explicitly) depend on position in space. That would make them quite useless. In some sense, this is also linked to the requirement that experiments should be reproducible. This principle is true for all laws of mechanics (Newton's laws, etc.), electrodynamics, quantum mechanics, etc. In practice, this principle is usually violated, since one studies only a small subsystem of the universe, which of course "feels" the influence of the rest of the universe. This situation gives rise to "external fields" (electric, magnetic, gravitational, etc.) which make the description of the evolution of the system depend upon its position (potential wells, etc.). This only stems from the fact that the objects creating these external fields are not considered as (a "dynamical") part of the system. Translational invariance as described above is equivalent to shift invariance in system analysis, although here it is most commonly used in linear systems, whereas in physics the distinction is not usually made. The notion of isotropy, for properties independent of direction, is not a consequence of homogeneity. For example, a uniform electric field (i.e., which has the same strength and the same direction at each point) would be compatible with homogeneity (at each point physics will be the same), but not with isotropy, since the field singles out one "preferred" direction. === Consequences === In the Lagrangian formalism, homogeneity in space implies conservation of momentum, and homogeneity in time implies conservation of energy. This is shown, using variational calculus, in standard textbooks like the classical reference text of Landau & Lifshitz. This is a particular application of Noether's theorem. == Dimensional homogeneity == As said in the introduction, dimensional homogeneity is the quality of an equation having quantities of same units on both sides. A valid equation in physics must be homogeneous, since equality cannot apply between quantities of different nature. This can be used to spot errors in formula or calculations. For example, if one is calculating a speed, units must always combine to [length]/[time]; if one is calculating an energy, units must always combine to [mass][length]2/[time]2, etc. For example, the following formulae could be valid expressions for some energy: E k = 1 2 m v 2 ; E = m c 2 ; E = p v ; E = h c / λ {\displaystyle E_{\text{k}}={\frac {1}{2}}mv^{2};~~E=mc^{2};~~E=pv;~~E=hc/\lambda } if m is a mass, v and c are velocities, p is a momentum, h is the Planck constant, λ a length. On the other hand, if the units of the right hand side do not combine to [mass][length]2/[time]2, it cannot be a valid expression for some energy. 
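In practice this check can be automated with a units library. The sketch below is illustrative only; it assumes the third-party pint package and uses rounded constant values. It confirms that ½mv², mc² and hc/λ all carry the dimensions of energy, while mv does not.

```python
# Illustrative sketch: dimensional-homogeneity check with the `pint` units library.
import pint

ureg = pint.UnitRegistry()

m   = 2.0 * ureg.kilogram
v   = 3.0 * ureg.meter / ureg.second
lam = 500e-9 * ureg.meter                      # a wavelength
h   = 6.626e-34 * ureg.joule * ureg.second     # Planck constant (rounded)
c   = 2.998e8 * ureg.meter / ureg.second       # speed of light (rounded)

energy_dim = (1 * ureg.joule).dimensionality   # [mass][length]^2/[time]^2

candidates = {
    "(1/2) m v^2": 0.5 * m * v**2,
    "m c^2": m * c**2,
    "h c / lambda": h * c / lam,
    "m v  (a momentum, not an energy)": m * v,
}

for name, quantity in candidates.items():
    print(f"{name:35s} has energy dimensions: {quantity.dimensionality == energy_dim}")
```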
Being homogeneous does not necessarily mean the equation will be true, since dimensional analysis does not take numerical factors into account. For example, E = mv2 may or may not be the correct formula for the energy of a particle of mass m traveling at speed v, and one cannot tell from the units alone whether hc/λ should be divided or multiplied by 2π. Nevertheless, this is a very powerful tool for finding the characteristic units of a given problem; see dimensional analysis. == See also == Translational invariance Miscibility Phase (matter) == References ==
Wikipedia/Homogeneity_(physics)
In particle physics, flavour or flavor refers to the species of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with flavour quantum numbers that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations. == Quantum numbers == In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature described by non-dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition. In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness or topness) can characterize the quantum state of quarks, by the degree to which it exhibits six distinct flavours (u, d, c, s, t, b). Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour. === Conservation laws === All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as generators of symmetries that commute with the Hamiltonian. Thus, the eigenvalues of the various charge operators are conserved. Absolutely conserved quantum numbers in the Standard Model are: electric charge (Q) weak isospin (T3) baryon number (B) lepton number (L) In some theories, such as the grand unified theory, the individual baryon and lepton number conservation can be violated, if the difference between them (B − L) is conserved (see Chiral anomaly). Strong interactions conserve all flavours, but all flavour quantum numbers are violated (changed, non-conserved) by electroweak interactions. == Flavour symmetry == If there are two or more particles which have identical interactions, then they may be interchanged without affecting the physics. All (complex) linear combinations of these two particles give the same physics, as long as the combinations are orthogonal, or perpendicular, to each other. In other words, the theory possesses symmetry transformations such as M ( u d ) {\displaystyle M\left({u \atop d}\right)} , where u and d are the two fields (representing the various generations of leptons and quarks, see below), and M is any 2×2 unitary matrix with a unit determinant. Such matrices form a Lie group called SU(2) (see special unitary group). This is an example of flavour symmetry. In quantum chromodynamics, flavour is a conserved global symmetry. In the electroweak theory, on the other hand, this symmetry is broken, and flavour changing processes exist, such as quark decay or neutrino oscillations. 
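As a purely illustrative numerical check of this statement, the sketch below builds a random SU(2) matrix as exp(−iθ n·σ/2) and applies it to a (u, d) doublet of field amplitudes; unitarity preserves the norm of any combination and the determinant is 1, as required of the flavour symmetry described above. All numbers and names here are assumptions made for the example.

```python
# Illustrative sketch: an SU(2) flavour rotation acting on a (u, d) doublet.
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices; U = exp(-i theta n.sigma/2) is unitary with det = 1.
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

theta = rng.uniform(0, 2 * np.pi)
n = rng.normal(size=3)
n /= np.linalg.norm(n)

H = sum(ni * si for ni, si in zip(n, sigma)) / 2
# Matrix exponential via eigen-decomposition of the Hermitian generator
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * theta * w)) @ V.conj().T

doublet = np.array([0.6, 0.8j])          # some u/d superposition, |u|^2 + |d|^2 = 1
rotated = U @ doublet

print("det U       =", np.round(np.linalg.det(U), 6))        # 1 (special)
print("U U^dagger  =\n", np.round(U @ U.conj().T, 6))         # identity (unitary)
print("norm before =", np.linalg.norm(doublet),
      " after =", np.linalg.norm(rotated))                    # unchanged
```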
== Flavour quantum numbers == === Leptons === All leptons carry a lepton number L = 1. In addition, leptons carry weak isospin, T3, which is −1/2 for the three charged leptons (i.e. electron, muon and tau) and +1/2 for the three associated neutrinos. Each doublet of a charged lepton and a neutrino with opposite T3 is said to constitute one generation of leptons. In addition, one defines a quantum number called weak hypercharge, YW, which is −1 for all left-handed leptons. Weak isospin and weak hypercharge are gauged in the Standard Model. Leptons may be assigned the six flavour quantum numbers: electron number, muon number, tau number, and corresponding numbers for the neutrinos (electron neutrino, muon neutrino and tau neutrino). These are conserved in strong and electromagnetic interactions, but violated by weak interactions. Therefore, such flavour quantum numbers are not of great use. A separate quantum number for each generation is more useful: electronic lepton number (+1 for electrons and electron neutrinos), muonic lepton number (+1 for muons and muon neutrinos), and tauonic lepton number (+1 for tau leptons and tau neutrinos). However, even these numbers are not absolutely conserved, as neutrinos of different generations can mix; that is, a neutrino of one flavour can transform into another flavour. The strength of such mixings is specified by a matrix called the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix). === Quarks === All quarks carry a baryon number B = +1/3, and all anti-quarks have B = −1/3. They also all carry weak isospin, T3 = ±1/2. The positively charged quarks (up, charm, and top quarks) are called up-type quarks and have T3 = +1/2; the negatively charged quarks (down, strange, and bottom quarks) are called down-type quarks and have T3 = −1/2. Each doublet of up- and down-type quarks constitutes one generation of quarks. For all the quark flavour quantum numbers listed below, the convention is that the flavour charge and the electric charge of a quark have the same sign. Thus any flavour carried by a charged meson has the same sign as its charge. Quarks have the following flavour quantum numbers: The third component of isospin (usually just "isospin") (I3), which has value I3 = +1/2 for the up quark and I3 = −1/2 for the down quark. Strangeness (S): Defined as S = −ns + ns̅, where ns represents the number of strange quarks (s) and ns̅ represents the number of strange antiquarks (s̅). This quantum number was introduced by Murray Gell-Mann. This definition gives the strange quark a strangeness of −1 for the above-mentioned reason. Charm (C): Defined as C = nc − nc̅, where nc represents the number of charm quarks (c) and nc̅ represents the number of charm antiquarks. The charm quark's value is +1. Bottomness (or beauty) (B′): Defined as B′ = −nb + nb̅, where nb represents the number of bottom quarks (b) and nb̅ represents the number of bottom antiquarks. Topness (or truth) (T): Defined as T = nt − nt̅, where nt represents the number of top quarks (t) and nt̅ represents the number of top antiquarks. However, because of the extremely short half-life of the top quark (predicted lifetime of only 5×10−25 s), by the time it can interact strongly it has already decayed to another flavour of quark (usually to a bottom quark). For that reason the top quark does not hadronize; that is, it never forms any meson or baryon. 
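The definitions above can be tallied mechanically from a hadron's valence-quark content. The following sketch is illustrative only; the quark assignments are as commonly quoted, and the function and variable names are made up for the example.

```python
# Illustrative sketch: tally flavour quantum numbers from valence-quark content.
# Per-quark values: (I3, S, C, B', T, electric charge); antiquarks flip every sign.
QUARKS = {
    'u': (+0.5, 0, 0, 0, 0, +2/3),
    'd': (-0.5, 0, 0, 0, 0, -1/3),
    's': ( 0.0, -1, 0, 0, 0, -1/3),
    'c': ( 0.0, 0, +1, 0, 0, +2/3),
    'b': ( 0.0, 0, 0, -1, 0, -1/3),
    't': ( 0.0, 0, 0, 0, +1, +2/3),
}

def flavour_numbers(quarks, antiquarks):
    """Sum quantum numbers over valence quarks minus antiquarks."""
    totals = [0.0] * 6
    for q in quarks:
        totals = [t + v for t, v in zip(totals, QUARKS[q])]
    for q in antiquarks:
        totals = [t - v for t, v in zip(totals, QUARKS[q])]
    return dict(zip(('I3', 'S', 'C', "B'", 'T', 'Q'), totals))

# K+ = u with an s antiquark (strangeness +1), D+ = c with a d antiquark (charm +1),
# proton = uud.
print('K+     ', flavour_numbers('u', 's'))
print('D+     ', flavour_numbers('c', 'd'))
print('proton ', flavour_numbers('uud', ''))
```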
These five quantum numbers, together with baryon number (which is not a flavour quantum number), completely specify numbers of all 6 quark flavours separately (as n q − n q̅ , i.e. an antiquark is counted with the minus sign). They are conserved by both the electromagnetic and strong interactions (but not the weak interaction). From them can be built the derived quantum numbers: Hypercharge (Y): Y = B + S + C + B′ + T Electric charge (Q): Q = I3 + ⁠1/2⁠Y (see Gell-Mann–Nishijima formula) The terms "strange" and "strangeness" predate the discovery of the quark, but continued to be used after its discovery for the sake of continuity (i.e. the strangeness of each type of hadron remained the same); strangeness of anti-particles being referred to as +1, and particles as −1 as per the original definition. Strangeness was introduced to explain the rate of decay of newly discovered particles, such as the kaon, and was used in the Eightfold Way classification of hadrons and in subsequent quark models. These quantum numbers are preserved under strong and electromagnetic interactions, but not under weak interactions. For first-order weak decays, that is processes involving only one quark decay, these quantum numbers (e.g. charm) can only vary by 1, that is, for a decay involving a charmed quark or antiquark either as the incident particle or as a decay byproduct, ΔC = ±1 ; likewise, for a decay involving a bottom quark or antiquark ΔB′ = ±1 . Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate "selection rule" for weak decays. A special mixture of quark flavours is an eigenstate of the weak interaction part of the Hamiltonian, so will interact in a particularly simple way with the W bosons (charged weak interactions violate flavour). On the other hand, a fermion of a fixed mass (an eigenstate of the kinetic and strong interaction parts of the Hamiltonian) is an eigenstate of flavour. The transformation from the former basis to the flavour-eigenstate/mass-eigenstate basis for quarks underlies the Cabibbo–Kobayashi–Maskawa matrix (CKM matrix). This matrix is analogous to the PMNS matrix for neutrinos, and quantifies flavour changes under charged weak interactions of quarks. The CKM matrix allows for CP violation if there are at least three generations. === Antiparticles and hadrons === Flavour quantum numbers are additive. Hence antiparticles have flavour equal in magnitude to the particle but opposite in sign. Hadrons inherit their flavour quantum number from their valence quarks: this is the basis of the classification in the quark model. The relations between the hypercharge, electric charge and other flavour quantum numbers hold for hadrons as well as quarks. == Flavour problem == The flavour problem (also known as the flavour puzzle) is the inability of current Standard Model flavour physics to explain why the free parameters of particles in the Standard Model have the values they have, and why there are specified values for mixing angles in the PMNS and CKM matrices. These free parameters - the fermion masses and their mixing angles - appear to be specifically tuned. Understanding the reason for such tuning would be the solution to the flavor puzzle. 
There are very fundamental questions involved in this puzzle such as why there are three generations of quarks (up-down, charm-strange, and top-bottom quarks) and leptons (electron, muon and tau neutrino), as well as how and why the mass and mixing hierarchy arises among different flavours of these fermions. == Quantum chromodynamics == Quantum chromodynamics (QCD) contains six flavours of quarks. However, their masses differ and as a result they are not strictly interchangeable with each other. The up and down flavours are close to having equal masses, and the theory of these two quarks possesses an approximate SU(2) symmetry (isospin symmetry). === Chiral symmetry description === Under some circumstances (for instance when the quark masses are much smaller than the chiral symmetry breaking scale of 250 MeV), the masses of quarks do not substantially contribute to the system's behavior, and to zeroth approximation the masses of the lightest quarks can be ignored for most purposes, as if they had zero mass. The simplified behavior of flavour transformations can then be successfully modeled as acting independently on the left- and right-handed parts of each quark field. This approximate description of the flavour symmetry is described by a chiral group SUL(Nf) × SUR(Nf). === Vector symmetry description === If all quarks had non-zero but equal masses, then this chiral symmetry is broken to the vector symmetry of the "diagonal flavour group" SU(Nf), which applies the same transformation to both helicities of the quarks. This reduction of symmetry is a form of explicit symmetry breaking. The strength of explicit symmetry breaking is controlled by the current quark masses in QCD. Even if quarks are massless, chiral flavour symmetry can be spontaneously broken if the vacuum of the theory contains a chiral condensate (as it does in low-energy QCD). This gives rise to an effective mass for the quarks, often identified with the valence quark mass in QCD. === Symmetries of QCD === Analysis of experiments indicate that the current quark masses of the lighter flavours of quarks are much smaller than the QCD scale, ΛQCD, hence chiral flavour symmetry is a good approximation to QCD for the up, down and strange quarks. The success of chiral perturbation theory and the even more naive chiral models spring from this fact. The valence quark masses extracted from the quark model are much larger than the current quark mass. This indicates that QCD has spontaneous chiral symmetry breaking with the formation of a chiral condensate. Other phases of QCD may break the chiral flavour symmetries in other ways. == History == === Isospin === Isospin, strangeness and hypercharge predate the quark model. The first of those quantum numbers, Isospin, was introduced as a concept in 1932 by Werner Heisenberg, to explain symmetries of the then newly discovered neutron (symbol n): The mass of the neutron and the proton (symbol p) are almost identical: They are nearly degenerate, and both are thus often referred to as “nucleons”, a term that ignores their intrinsic differences. Although the proton has a positive electric charge, and the neutron is neutral, they are almost identical in all other aspects, and their nuclear binding-force interactions (old name for the residual color force) are so strong compared to the electrical force between some, that there is very little point in paying much attention to their differences. 
The strength of the strong interaction between any pair of nucleons is the same, independent of whether they are interacting as protons or as neutrons. Protons and neutrons were grouped together as nucleons and treated as different states of the same particle, because they both have nearly the same mass and interact in nearly the same way, if the (much weaker) electromagnetic interaction is neglected. Heisenberg noted that the mathematical formulation of this symmetry was in certain respects similar to the mathematical formulation of non-relativistic spin, whence the name "isospin" derives. The neutron and the proton are assigned to the doublet (the spin-1⁄2, 2, or fundamental representation) of SU(2), with the proton and neutron being then associated with different isospin projections I3 = ++1⁄2 and −+1⁄2 respectively. The pions are assigned to the triplet (the spin-1, 3, or adjoint representation) of SU(2). Though there is a difference from the theory of spin: The group action does not preserve flavor (in fact, the group action is specifically an exchange of flavour). When constructing a physical theory of nuclear forces, one could simply assume that it does not depend on isospin, although the total isospin should be conserved. The concept of isospin proved useful in classifying hadrons discovered in the 1950s and 1960s (see particle zoo), where particles with similar mass are assigned an SU(2) isospin multiplet. === Strangeness and hypercharge === The discovery of strange particles like the kaon led to a new quantum number that was conserved by the strong interaction: strangeness (or equivalently hypercharge). The Gell-Mann–Nishijima formula was identified in 1953, which relates strangeness and hypercharge with isospin and electric charge. === The eightfold way and quark model === Once the kaons and their property of strangeness became better understood, it started to become clear that these, too, seemed to be a part of an enlarged symmetry that contained isospin as a subgroup. The larger symmetry was named the Eightfold Way by Murray Gell-Mann, and was promptly recognized to correspond to the adjoint representation of SU(3). To better understand the origin of this symmetry, Gell-Mann proposed the existence of up, down and strange quarks which would belong to the fundamental representation of the SU(3) flavor symmetry. === GIM-Mechanism and charm === To explain the observed absence of flavor-changing neutral currents, the GIM mechanism was proposed in 1970, which introduced the charm quark and predicted the J/psi meson. The J/psi meson was indeed found in 1974, which confirmed the existence of charm quarks. This discovery is known as the November Revolution. The flavor quantum number associated with the charm quark became known as charm. === Bottomness and topness === The bottom and top quarks were predicted in 1973 in order to explain CP violation, which also implied two new flavor quantum numbers: bottomness and topness. == See also == Standard Model (mathematical formulation) Cabibbo–Kobayashi–Maskawa matrix Strong CP problem and chirality (physics) Chiral symmetry breaking and quark matter Quark flavour tagging, such as B-tagging, is an example of particle identification in experimental particle physics. == References == == Further reading == Lessons in Particle Physics Luis Anchordoqui and Francis Halzen, University of Wisconsin, 18th Dec. 2009 == External links == The particle data group.
Wikipedia/Flavor_(physics)
The Philosophy of Science Association (PSA) is an international academic organization founded in 1933 that promotes research, teaching, and free discussion of issues in the philosophy of science from diverse standpoints. The PSA engages in activities such as publishing periodicals, essays, and monographs in the field of the philosophy of science; holding biennial conferences; awarding prizes for distinguished work in the field; supporting early-career scholars; and sponsoring public engagement events. == History of the Association == The PSA was founded in 1933 and incorporated in Michigan in 1975. The administrative offices of the PSA have been located at the University of Cincinnati College of Arts and Sciences since 2021. == Philosophy of Science == Philosophy of Science, the official journal of the Philosophy of Science Association (PSA), has been published continuously since 1934. Philosophy of Science publishes work in the philosophy of science, broadly construed, five times a year. Every January, April, July, and October (the regular issues) the journal publishes articles, book reviews, discussion notes, and essay reviews; every December it publishes proceedings from the most recent Biennial Meeting of the Philosophy of Science Association. == Biennial Conference == The PSA hosts a biennial conference in the fall with an attendance of around 700 scholars and other professionals from over 35 countries. The biennial meeting has grown significantly in recent years, from around 350 attendees in 2008 to close to 700 in 2018. The meeting provides an opportunity for scholars from around the world to present and get feedback on their work and to learn about the latest research in the field. The main program for the meeting consists of symposia, individual papers, and posters, as well as sessions sponsored by cognate societies. The meeting also provides opportunities to mentor and support early-career scholars, to award distinguished scholarship in the field, to hold a Public Forum on issues of broad public interest in the host city, to engage in dialogue with scientists about philosophical issues, and to provide training opportunities on such matters as applying for grants, publishing, and social engagement. == Awards == In 2012, the PSA began presenting the Hempel Award, named for the eminent 20th-century philosopher of science Carl Gustav Hempel, for lifetime achievement in the philosophy of science. The first recipient was Bas van Fraassen. A full list of recipients can be viewed on the association's website. == References == == External links == Philosophy of Science Association website Philosophy of Science journal website PSA newsletter Philosophy of Science archive
Wikipedia/Philosophy_of_Science_Association
In mathematical physics, inversion transformations are a natural extension of Poincaré transformations to include all conformal, one-to-one transformations on coordinate space-time. They are less studied in physics because, unlike the rotations and translations of Poincaré symmetry, an object cannot be physically transformed by the inversion symmetry. Some physical theories are invariant under this symmetry; in these cases it is what is known as a 'hidden symmetry'. Other hidden symmetries of physics include gauge symmetry and general covariance. == Early use == In 1831 the mathematician Ludwig Immanuel Magnus began to publish on transformations of the plane generated by inversion in a circle of radius R. His work initiated a large body of publications, now called inversive geometry. The most prominent name associated with the subject became that of August Ferdinand Möbius, once he reduced the planar transformations to complex-number arithmetic. Among the physicists employing the inversion transformation early on was Lord Kelvin, and the association with him leads it to be called the Kelvin transform. == Transformation on coordinates == In the following we shall use imaginary time ( t ′ = i t {\displaystyle t'=it} ) so that space-time is Euclidean and the equations are simpler. The Poincaré transformations are given by the coordinate transformation on space-time parametrized by the 4-vectors V V μ ′ = O μ ν V ν + P μ {\displaystyle V_{\mu }^{\prime }=O_{\mu }^{\nu }V_{\nu }+P_{\mu }\,} where O {\displaystyle O} is an orthogonal matrix and P {\displaystyle P} is a 4-vector. Applying this transformation twice on a 4-vector gives a third transformation of the same form. The basic invariant under this transformation is the space-time length given by the distance between two space-time points given by 4-vectors x and y: r = | x − y | . {\displaystyle r=|x-y|.\,} These transformations are subgroups of general 1-1 conformal transformations on space-time. It is possible to extend these transformations to include all 1-1 conformal transformations on space-time V μ ′ = ( A τ ν V ν + B τ ) ( C τ μ ν V ν + D τ μ ) − 1 . {\displaystyle V_{\mu }^{\prime }=\left(A_{\tau }^{\nu }V_{\nu }+B_{\tau }\right)\left(C_{\tau \mu }^{\nu }V_{\nu }+D_{\tau \mu }\right)^{-1}.} We must also impose a condition equivalent to the orthogonality condition of the Poincaré transformations: A A T + B C = D D T + C B {\displaystyle AA^{T}+BC=DD^{T}+CB\,} Because one can divide the top and bottom of the transformation by D , {\displaystyle D,} we lose no generality by setting D {\displaystyle D} to the unit matrix. We end up with V μ ′ = ( O μ ν V ν + P τ ) ( δ τ μ + Q τ μ ν V ν ) − 1 . {\displaystyle V_{\mu }^{\prime }=\left(O_{\mu }^{\nu }V_{\nu }+P_{\tau }\right)\left(\delta _{\tau \mu }+Q_{\tau \mu }^{\nu }V_{\nu }\right)^{-1}.\,} Applying this transformation twice on a 4-vector gives a transformation of the same form. The new symmetry of 'inversion' is given by the 3-tensor Q . {\displaystyle Q.} This symmetry becomes Poincaré symmetry if we set Q = 0. {\displaystyle Q=0.} When Q = 0 {\displaystyle Q=0} the second condition requires that O {\displaystyle O} is an orthogonal matrix. This transformation is one-to-one, meaning that each point is mapped to a unique point, but only if we formally include the points at infinity. == Invariants == The invariants of this symmetry in 4 dimensions are unknown; however, it is known that any invariant requires a minimum of 4 space-time points. 
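The claim that two successive transformations of this form again give a transformation of the same form is easiest to verify in one dimension, where the maps reduce to fractional-linear (Möbius) maps x → (ax + b)/(cx + d) and composition corresponds to multiplying the 2×2 coefficient matrices. The sketch below is a toy numerical check, not part of the article; all values are arbitrary.

```python
# Illustrative sketch: in 1D the maps are x -> (a x + b)/(c x + d); composing two of
# them gives another map of the same form, with coefficients from the matrix product.
import numpy as np

def mobius(M, x):
    a, b, c, d = M.ravel()
    return (a * x + b) / (c * x + d)

rng = np.random.default_rng(1)
M1 = rng.normal(size=(2, 2))
M2 = rng.normal(size=(2, 2))

xs = rng.normal(size=5)

# Apply M1 then M2 pointwise ...
two_steps = mobius(M2, mobius(M1, xs))
# ... and compare with the single map built from the matrix product.
one_step = mobius(M2 @ M1, xs)

print(np.allclose(two_steps, one_step))   # True: the composition has the same form
```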
In one dimension, the invariant is the well known cross-ratio from Möbius transformations: ( x − X ) ( y − Y ) ( x − Y ) ( y − X ) . {\displaystyle {\frac {(x-X)(y-Y)}{(x-Y)(y-X)}}.} Because the only invariants under this symmetry involve a minimum of 4 points, this symmetry cannot be a symmetry of point particle theory. Point particle theory relies on knowing the lengths of paths of particles through space-time (e.g., from x {\displaystyle x} to y {\displaystyle y} ). The symmetry can be a symmetry of a string theory in which the strings are uniquely determined by their endpoints. The propagator for this theory for a string starting at the endpoints ( x , X ) {\displaystyle (x,X)} and ending at the endpoints ( y , Y ) {\displaystyle (y,Y)} is a conformal function of the 4-dimensional invariant. A string field in endpoint-string theory is a function over the endpoints. ϕ ( x , X ) . {\displaystyle \phi (x,X).\,} == Physical evidence == Although it is natural to generalize the Poincaré transformations in order to find hidden symmetries in physics and thus narrow down the number of possible theories of high-energy physics, it is difficult to experimentally examine this symmetry as it is not possible to transform an object under this symmetry. The indirect evidence of this symmetry is given by how accurately fundamental theories of physics that are invariant under this symmetry make predictions. Other indirect evidence is whether theories that are invariant under this symmetry lead to contradictions such as giving probabilities greater than 1. So far there has been no direct evidence that the fundamental constituents of the Universe are strings. The symmetry could also be a broken symmetry meaning that although it is a symmetry of physics, the Universe has 'frozen out' in one particular direction so this symmetry is no longer evident. == See also == Rotation group SO(3) Coordinate rotations and reflections Spacetime symmetries CPT symmetry Field (physics) superstrings == References ==
Wikipedia/Inversion_transformations
Causal perturbation theory is a mathematically rigorous approach to renormalization theory, which makes it possible to put the theoretical setup of perturbative quantum field theory on a sound mathematical basis. It goes back to a 1973 work by Henri Epstein and Vladimir Jurko Glaser. == Overview == When developing quantum electrodynamics in the 1940s, Shin'ichiro Tomonaga, Julian Schwinger, Richard Feynman, and Freeman Dyson discovered that, in perturbative calculations, problems with divergent integrals abounded. The divergences appeared in calculations involving Feynman diagrams with closed loops of virtual particles. It is an important observation that in perturbative quantum field theory, time-ordered products of distributions arise in a natural way and may lead to ultraviolet divergences in the corresponding calculations. From the generalized functions point of view, the problem of divergences is rooted in the fact that the theory of distributions is a purely linear theory, in the sense that the product of two distributions cannot consistently be defined (in general), as was proved by Laurent Schwartz in the 1950s. Epstein and Glaser solved this problem for a special class of distributions that fulfill a causality condition, which itself is a basic requirement in axiomatic quantum field theory. In their original work, Epstein and Glaser studied only theories involving scalar (spinless) particles. Since then, the causal approach has been applied also to a wide range of gauge theories, which represent the most important quantum field theories in modern physics. == References == == Additional reading == Scharf, G (1995). Finite Quantum Electrodynamics : The Causal Approach (2nd ed.). Berlin New York: Springer. ISBN 978-3-540-60142-5. OCLC 32890905. Scharf, G (2001). Quantum gauge theories : a true ghost-story (1st ed.). New York: John Wiley & Sons. ISBN 978-0-471-41480-3. OCLC 45394191. Dütsch, Michael; Scharf, Günter (1999). "Perturbative gauge invariance: the electroweak theory". Annalen der Physik (in German). 8 (5). Wiley: 359–387. arXiv:hep-th/9612091. Bibcode:1999AnP...511..359D. doi:10.1002/(sici)1521-3889(199905)8:5<359::aid-andp359>3.0.co;2-m. ISSN 0003-3804. S2CID 122295550.
Wikipedia/Causal_perturbation_theory
A Van der Waals molecule is a weakly bound complex of atoms or molecules held together by intermolecular attractions such as Van der Waals forces or by hydrogen bonds. The name originated in the beginning of the 1970s when stable molecular clusters were regularly observed in molecular beam microwave spectroscopy. == Examples == Examples of well-studied vdW molecules are Ar2, H2-Ar, H2O-Ar, benzene-Ar, (H2O)2, and (HF)2. Others include the largest diatomic molecule He2, and LiHe. A notable example is the He-HCN complex, studied for its large amplitude motions and the applicability of the adiabatic approximation in separating its angular and radial motions. Research has shown that even in such 'floppy' systems, the adiabatic approximation can be effectively utilized to simplify quantum mechanical analyses. == Supersonic beam spectroscopy == In (supersonic) molecular beams temperatures are very low (usually less than 5 K). At these low temperatures Van der Waals (vdW) molecules are stable and can be investigated by microwave, far-infrared spectroscopy and other modes of spectroscopy. Also in cold equilibrium gases vdW molecules are formed, albeit in small, temperature dependent concentrations. Rotational and vibrational transitions in vdW molecules have been observed in gases, mainly by UV and IR spectroscopy. Van der Waals molecules are usually very non-rigid and different versions are separated by low energy barriers, so that tunneling splittings, observable in far-infrared spectra, are relatively large. Thus, in the far-infrared one may observe intermolecular vibrations, rotations, and tunneling motions of Van der Waals molecules. The VRT spectroscopic study of Van der Waals molecules is one of the most direct routes to the understanding of intermolecular forces. In study of helium-containing van der Waals complexes, the adiabatic or Born–Oppenheimer approximation has been adapted to separate angular and radial motions. Despite the challenges posed by the weak interactions leading to large amplitude motions, research demonstrates that this approximation can still be valid, offering a quicker computational method for Diffusion Monte Carlo studies of molecular rotation within ultra-cold helium droplets. The non-rigid nature of these complexes, especially those with helium, complicates traditional quantum mechanical approaches. However, recent studies have validated the use of the adiabatic approximation for separating different types of molecular motion, even in these 'floppy' systems. == See also == Van der Waals radius Van der Waals strain Van der Waals surface Category:Van der Waals molecules–articles about specific chemicals == References == == Further reading == So far three special issues of Chemical Reviews have been devoted to vdW molecules: I. Vol. 88(6) (1988). II. Vol. 94(7) (1994). III. Vol. 100(11) (2000). Early reviews of vdW molecules: G. E. Ewing, Accounts of Chemical Research, Vol. 8, pp. 185-192, (1975): Structure and Properties of Van der Waals molecules. B. L. Blaney and G. E. Ewing, Annual Review of Physical Chemistry, Vol. 27, pp. 553-586 (1976): Van der Waals Molecules. About VRT spectroscopy: G. A. Blake, et al., Review Scientific Instruments, Vol. 62, p. 1693, 1701 (1991). H. Linnartz, W.L. Meerts, and M. Havenith, Chemical Physics, Vol. 193, p. 327 (1995).
Wikipedia/Van_der_Waals_molecule
Classical mechanics is a physical theory describing the motion of objects such as projectiles, parts of machinery, spacecraft, planets, stars, and galaxies. The development of classical mechanics involved substantial change in the methods and philosophy of physics. The qualifier classical distinguishes this type of mechanics from physics developed after the revolutions in physics of the early 20th century, all of which revealed limitations in classical mechanics. The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts based on the 17th century foundational works of Sir Isaac Newton, and the mathematical methods invented by Newton, Gottfried Wilhelm Leibniz, Leonhard Euler and others to describe the motion of bodies under the influence of forces. Later, methods based on energy were developed by Euler, Joseph-Louis Lagrange, William Rowan Hamilton and others, leading to the development of analytical mechanics (which includes Lagrangian mechanics and Hamiltonian mechanics). These advances, made predominantly in the 18th and 19th centuries, extended beyond earlier works; they are, with some modification, used in all areas of modern physics. If the present state of an object that obeys the laws of classical mechanics is known, it is possible to determine how it will move in the future, and how it has moved in the past. Chaos theory shows that the long term predictions of classical mechanics are not reliable. Classical mechanics provides accurate results when studying objects that are not extremely massive and have speeds not approaching the speed of light. With objects about the size of an atom's diameter, it becomes necessary to use quantum mechanics. To describe velocities approaching the speed of light, special relativity is needed. In cases where objects become extremely massive, general relativity becomes applicable. Some modern sources include relativistic mechanics in classical physics, as representing the field in its most developed and accurate form. == Branches == === Traditional division === Classical mechanics was traditionally divided into three main branches. Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment. Kinematics describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of mathematics. Dynamics goes beyond merely describing objects' behavior and also considers the forces which explain it. Some authors (for example, Taylor (2005) and Greenwood (1997)) include special relativity within classical dynamics. === Forces vs. energy === Another division is based on the choice of mathematical formalism. Classical mechanics can be mathematically presented in multiple different ways. The physical content of these different formulations is the same, but they provide different insights and facilitate different types of calculations. While the term "Newtonian mechanics" is sometimes used as a synonym for non-relativistic classical physics, it can also refer to a particular formalism based on Newton's laws of motion. Newtonian mechanics in this sense emphasizes force as a vector quantity. 
In contrast, analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Two dominant branches of analytical mechanics are Lagrangian mechanics, which uses generalized coordinates and corresponding generalized velocities in tangent bundle space (the tangent bundle of the configuration space and sometimes called "state space"), and Hamiltonian mechanics, which uses coordinates and corresponding momenta in phase space (the cotangent bundle of the configuration space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. === By region of application === Alternatively, a division can be made by region of application: Celestial mechanics, relating to stars, planets and other celestial bodies Continuum mechanics, for materials modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases). Relativistic mechanics (i.e. including the special and general theories of relativity), for bodies whose speed is close to the speed of light. Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials. == Description of objects and their motion == For simplicity, classical mechanics often models real-world objects as point particles, that is, objects with negligible size. The motion of a point particle is determined by a small number of parameters: its position, mass, and the forces applied to it. Classical mechanics also describes the more complex motions of extended non-pointlike objects. Euler's laws provide extensions to Newton's laws in this area. The concepts of angular momentum rely on the same calculus used to describe one-dimensional motion. The rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object "losing mass". (These generalizations/extensions are derived from Newton's laws, say, by decomposing a solid body into a collection of points.) In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The behavior of very small particles, such as the electron, is more accurately described by quantum mechanics.) Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom, e.g., a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics assumes that matter and energy have definite, knowable attributes such as location in space and speed. 
Non-relativistic mechanics also assumes that forces act instantaneously (see also Action at a distance). === Kinematics === The position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector notated by an arrow labeled r that points from the origin O to point P. In general, the point particle does not need to be stationary relative to O. In cases where P is moving relative to O, r is defined as a function of t, time. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval that is observed to elapse between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space. ==== Velocity and speed ==== The velocity, or the rate of change of displacement with time, is defined as the derivative of the position with respect to time: v = d r d t {\displaystyle \mathbf {v} ={\mathrm {d} \mathbf {r} \over \mathrm {d} t}\,\!} . In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 60 − 50 = 10 km/h. However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, often denoted as −10 km/h where the sign implies opposite direction. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis. Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object respectively, then the velocity of the first object as seen by the second object is: u ′ = u − v . {\displaystyle \mathbf {u} '=\mathbf {u} -\mathbf {v} \,.} Similarly, the first object sees the velocity of the second object as: v ′ = v − u . {\displaystyle \mathbf {v'} =\mathbf {v} -\mathbf {u} \,.} When both objects are moving in the same direction, this equation can be simplified to: u ′ = ( u − v ) d . {\displaystyle \mathbf {u} '=(u-v)\mathbf {d} \,.} Or, by ignoring direction, the difference can be given in terms of speed only: u ′ = u − v . {\displaystyle u'=u-v\,.} ==== Acceleration ==== The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time): a = d v d t = d 2 r d t 2 . {\displaystyle \mathbf {a} ={\mathrm {d} \mathbf {v} \over \mathrm {d} t}={\mathrm {d^{2}} \mathbf {r} \over \mathrm {d} t^{2}}.} Acceleration represents the velocity's change over time. Velocity can change in magnitude, direction, or both. Occasionally, a decrease in the magnitude of velocity "v" is referred to as deceleration, but generally any change in the velocity over time, including deceleration, is referred to as acceleration. 
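As a small numerical illustration (not part of the article; the trajectory is an assumed example), the vector relations above can be checked directly: relative velocity is plain vector subtraction, and acceleration is the second time-derivative of a sampled position history.

```python
# Illustrative sketch: velocity subtraction and numerical differentiation with NumPy.
import numpy as np

# Relative velocity u' = u - v (components in km/h, east = +x)
u = np.array([60.0, 0.0])    # faster car
v = np.array([50.0, 0.0])    # slower car
print("faster car as seen from the slower car:", u - v, "km/h")   # [10. 0.]

# Acceleration from a sampled trajectory r(t) = (1.0*t, 4.9*t**2):
# constant 1 m/s along x, constant 9.8 m/s^2 acceleration along y.
t = np.linspace(0.0, 2.0, 2001)
r = np.stack([1.0 * t, 4.9 * t**2], axis=1)

vel = np.gradient(r, t, axis=0)     # dr/dt
acc = np.gradient(vel, t, axis=0)   # d^2 r / dt^2
print("acceleration at mid-trajectory:", acc[1000])   # approximately [0.  9.8]
```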
==== Frames of reference ==== While the position, velocity and acceleration of a particle can be described with respect to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. An inertial frame is an idealized frame of reference within which an object with zero net force acting upon it moves with a constant velocity; that is, it is either at rest or moving uniformly in a straight line. In an inertial frame Newton's law of motion, F = m a {\displaystyle F=ma} , is valid.: 185  Non-inertial reference frames accelerate in relation to another inertial frame. A body rotating with respect to an inertial frame is not an inertial frame. When viewed from an inertial frame, particles in the non-inertial frame appear to move in ways not explained by forces from existing fields in the reference frame. Hence, it appears that there are other forces that enter the equations of motion solely as a result of the relative acceleration. These forces are referred to as fictitious forces, inertia forces, or pseudo-forces. Consider two reference frames S and S'. For observers in each of the reference frames an event has space-time coordinates of (x,y,z,t) in frame S and (x',y',z',t') in frame S'. Assuming time is measured the same in all reference frames, if we require x = x' when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S' and S, which are moving at a relative velocity u in the x direction, is: x ′ = x − t u , y ′ = y , z ′ = z , t ′ = t . {\displaystyle {\begin{aligned}x'&=x-tu,\\y'&=y,\\z'&=z,\\t'&=t.\end{aligned}}} This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light. The transformations have the following consequences: v′ = v − u (the velocity v′ of a particle from the perspective of S′ is slower by u than its velocity v from the perspective of S) a′ = a (the acceleration of a particle is the same in any inertial reference frame) F′ = F (the force on a particle is the same in any inertial reference frame) the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics. For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious centrifugal force and Coriolis force. == Newtonian mechanics == A force in physics is any action that causes an object's velocity to change; that is, to accelerate. A force originates from within a field, such as an electro-static field (caused by static electrical charges), electro-magnetic field (caused by moving charges), or gravitational field (caused by mass), among others. Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. 
Either interpretation has the same mathematical consequences, historically known as "Newton's second law": F = d p d t = d ( m v ) d t . {\displaystyle \mathbf {F} ={\mathrm {d} \mathbf {p} \over \mathrm {d} t}={\mathrm {d} (m\mathbf {v} ) \over \mathrm {d} t}.} The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form: F = m a . {\displaystyle \mathbf {F} =m\mathbf {a} \,.} So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion. As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example: F R = − λ v , {\displaystyle \mathbf {F} _{\rm {R}}=-\lambda \mathbf {v} \,,} where λ is a positive constant, the negative sign states that the force is opposite the sense of the velocity. Then the equation of motion is − λ v = m a = m d v d t . {\displaystyle -\lambda \mathbf {v} =m\mathbf {a} =m{\mathrm {d} \mathbf {v} \over \mathrm {d} t}\,.} This can be integrated to obtain v = v 0 e − λ t / m {\displaystyle \mathbf {v} =\mathbf {v} _{0}e^{{-\lambda t}/{m}}} where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), and the particle is slowing down. This expression can be further integrated to obtain the position r of the particle as a function of time. Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces. === Work and energy === If a constant force F is applied to a particle that makes a displacement Δr, the work done by the force is defined as the scalar product of the force and displacement vectors: W = F ⋅ Δ r . {\displaystyle W=\mathbf {F} \cdot \Delta \mathbf {r} \,.} More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral W = ∫ C F ( r ) ⋅ d r . {\displaystyle W=\int _{C}\mathbf {F} (\mathbf {r} )\cdot \mathrm {d} \mathbf {r} \,.} If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative. The kinetic energy Ek of a particle of mass m travelling at speed v is given by E k = 1 2 m v 2 . 
{\displaystyle E_{\mathrm {k} }={\tfrac {1}{2}}mv^{2}\,.} For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles. The work–energy theorem states that for a particle of constant mass m, the total work W done on the particle as it moves from position r1 to r2 is equal to the change in kinetic energy Ek of the particle: W = Δ E k = E k 2 − E k 1 = 1 2 m ( v 2 2 − v 1 2 ) . {\displaystyle W=\Delta E_{\mathrm {k} }=E_{\mathrm {k_{2}} }-E_{\mathrm {k_{1}} }={\tfrac {1}{2}}m\left(v_{2}^{\,2}-v_{1}^{\,2}\right).} Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep: F = − ∇ E p . {\displaystyle \mathbf {F} =-\mathbf {\nabla } E_{\mathrm {p} }\,.} If all the forces acting on a particle are conservative, and Ep is the total potential energy (which is defined as a work of involved forces to rearrange mutual positions of bodies), obtained by summing the potential energies corresponding to each force F ⋅ Δ r = − ∇ E p ⋅ Δ r = − Δ E p . {\displaystyle \mathbf {F} \cdot \Delta \mathbf {r} =-\mathbf {\nabla } E_{\mathrm {p} }\cdot \Delta \mathbf {r} =-\Delta E_{\mathrm {p} }\,.} The decrease in the potential energy is equal to the increase in the kinetic energy − Δ E p = Δ E k ⇒ Δ ( E k + E p ) = 0 . {\displaystyle -\Delta E_{\mathrm {p} }=\Delta E_{\mathrm {k} }\Rightarrow \Delta (E_{\mathrm {k} }+E_{\mathrm {p} })=0\,.} This result is known as conservation of energy and states that the total energy, ∑ E = E k + E p , {\displaystyle \sum E=E_{\mathrm {k} }+E_{\mathrm {p} }\,,} is constant in time. It is often useful, because many commonly encountered forces are conservative. == Lagrangian mechanics == Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760 culminating in his 1788 grand opus, Mécanique analytique. Lagrangian mechanics describes a mechanical system as a pair ( M , L ) {\textstyle (M,L)} consisting of a configuration space M {\textstyle M} and a smooth function L {\textstyle L} within that space called a Lagrangian. For many systems, L = T − V , {\textstyle L=T-V,} where T {\textstyle T} and V {\displaystyle V} are the kinetic and potential energy of the system, respectively. The stationary action principle requires that the action functional of the system derived from L {\textstyle L} must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations. == Hamiltonian mechanics == Hamiltonian mechanics emerged in 1833 as a reformulation of Lagrangian mechanics. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities q ˙ i {\displaystyle {\dot {q}}^{i}} used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics. 
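To make the Lagrangian recipe above concrete before turning to Hamilton's equations, here is a minimal SymPy sketch; the one-dimensional harmonic oscillator, with an illustrative mass m and spring constant k, is chosen purely as an example and is not taken from the text.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Illustrative system: a one-dimensional harmonic oscillator with symbolic
# mass m and spring constant k (any other L = T - V would work the same way).
t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Lagrangian L = T - V
L = sp.Rational(1, 2) * m * x(t).diff(t)**2 - sp.Rational(1, 2) * k * x(t)**2

# Lagrange's equations recover the Newtonian equation of motion m*x'' + k*x = 0
print(euler_equations(L, x(t), t))
```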
In this formalism, the dynamics of a system are governed by Hamilton's equations, which express the time derivatives of position and momentum variables in terms of partial derivatives of a function called the Hamiltonian: d q d t = ∂ H ∂ p , d p d t = − ∂ H ∂ q . {\displaystyle {\frac {\mathrm {d} {\boldsymbol {q}}}{\mathrm {d} t}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}},\quad {\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}.} The Hamiltonian is the Legendre transform of the Lagrangian, and in many situations of physical interest it is equal to the total energy of the system. == Limits of validity == Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate being general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior "classical" form. When both quantum mechanics and classical mechanics cannot apply, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) is of use. QFT deals with small distances, and large speeds with many degrees of freedom as well as the possibility of any change in the number of particles throughout the interaction. When treating large degrees of freedom at the macroscopic level, statistical mechanics becomes useful. Statistical mechanics describes the behavior of large (but countable) numbers of particles and their interactions as a whole at the macroscopic level. Statistical mechanics is mainly used in thermodynamics for systems that lie outside the bounds of the assumptions of classical thermodynamics. In the case of high velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. In case that objects become extremely heavy (i.e., their Schwarzschild radius is not negligibly small for a given application), deviations from Newtonian mechanics become apparent and can be quantified by using the parameterized post-Newtonian formalism. In that case, general relativity (GR) becomes applicable. However, until now there is no theory of quantum gravity unifying GR and QFT in the sense that it could be used when objects become extremely small and heavy.[4][5] === Newtonian approximation to special relativity === In special relativity, the momentum of a particle is given by p = m v 1 − v 2 c 2 , {\displaystyle \mathbf {p} ={\frac {m\mathbf {v} }{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\,,} where m is the particle's rest mass, v its velocity, v is the modulus of v, and c is the speed of light. If v is very small compared to c, v2/c2 is approximately zero, and so p ≈ m v . {\displaystyle \mathbf {p} \approx m\mathbf {v} \,.} Thus the Newtonian equation p = mv is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light. For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by f = f c m 0 m 0 + T c 2 , {\displaystyle f=f_{\mathrm {c} }{\frac {m_{0}}{m_{0}+{\frac {T}{c^{2}}}}}\,,} where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The (rest) mass of an electron is 511 keV. So the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage. 
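As a quick numerical illustration of this low-speed limit (a minimal sketch in Python; the chosen speeds are arbitrary, while the 511 keV rest energy and 5.11 keV kinetic energy are the values quoted above):

```python
import math

c = 299_792_458.0        # speed of light, m/s
m0c2_keV = 511.0         # electron rest energy, keV (value quoted above)

def momentum_ratio(v):
    """Relativistic momentum divided by the Newtonian value m*v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

print(momentum_ratio(300.0))      # about 1 + 5e-13: p = mv is essentially exact
print(momentum_ratio(0.5 * c))    # about 1.155: the Newtonian form is off by ~15%

# Cyclotron-frequency correction f/fc = m0 / (m0 + T/c^2) for T = 5.11 keV
T_keV = 5.11
print(m0c2_keV / (m0c2_keV + T_keV))   # ~0.990, i.e. about a 1% correction
```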
=== Classical approximation to quantum mechanics === The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is λ = h p {\displaystyle \lambda ={\frac {h}{p}}} where h is the Planck constant and p is the momentum. Again, this happens with electrons before it happens with heavier particles. For example, the electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 V, had a wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when reflecting from the face of a nickel crystal with atomic spacing of 0.215 nm. With a larger vacuum chamber, it would seem relatively easy to increase the angular resolution from around a radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit computer memory. More practical examples of the failure of classical mechanics on an engineering scale are conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated circuits. Classical mechanics is the same extreme high frequency approximation as geometric optics. It is more often accurate because it describes particles and bodies with rest mass. These have more momentum and therefore shorter de Broglie wavelengths than massless particles, such as light, with the same kinetic energies. == History == The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering, and technology. The development of classical mechanics led to the development of many areas of mathematics.: 54  Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics, may have been the first to maintain the idea that "everything happens for a reason" and that theoretical principles can assist in the understanding of nature. While to a modern reader many of these preserved ideas come forth as eminently reasonable, there is a conspicuous lack of both mathematical theory and controlled experiment as we know them. These later became decisive factors in forming modern science, and their early application came to be known as classical mechanics. In his Elementa super demonstrationem ponderum, medieval mathematician Jordanus de Nemore introduced the concept of "positional gravity" and the use of component forces. The first published causal explanation of the motions of planets was Johannes Kepler's Astronomia nova, published in 1609. He concluded, based on Tycho Brahe's observations on the orbit of Mars, that the planet's orbit was an ellipse. This break with ancient thought was happening around the same time that Galileo was proposing abstract mathematical laws for the motion of objects. He may (or may not) have performed the famous experiment of dropping two cannonballs of different weights from the tower of Pisa, showing that they both hit the ground at the same time. The reality of that particular experiment is disputed, but he did carry out quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion was derived from the results of such experiments and forms a cornerstone of classical mechanics. In 1673 Christiaan Huygens described in his Horologium Oscillatorium the first two laws of motion.
The work is also the first modern treatise in which a physical problem (the accelerated motion of a falling body) is idealized by a set of parameters and then analyzed mathematically, and it constitutes one of the seminal works of applied mathematics. Newton founded his principles of natural philosophy on three proposed laws of motion: the law of inertia, his second law of acceleration (mentioned above), and the law of action and reaction; and hence laid the foundations for classical mechanics. Both Newton's second and third laws were given the proper scientific and mathematical treatment in Newton's Philosophiæ Naturalis Principia Mathematica. Here they are distinguished from earlier attempts at explaining similar phenomena, which were either incomplete, incorrect, or given little accurate mathematical expression. Newton also enunciated the principles of conservation of momentum and angular momentum. In mechanics, Newton was also the first to provide a correct scientific and mathematical formulation of gravity, in Newton's law of universal gravitation. The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics. He demonstrated that these laws apply to everyday objects as well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of motion of the planets. Newton had previously invented the calculus; however, the Principia was formulated entirely in terms of long-established geometric methods in emulation of Euclid. Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the assumption that classical mechanics would be able to explain all phenomena, including light, in the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon) he maintained his own corpuscular theory of light. After Newton, classical mechanics became a principal field of study in mathematics as well as physics. Mathematical formulations progressively allowed finding solutions to a far greater number of problems. The first notable mathematical treatment was in 1788 by Joseph Louis Lagrange. Lagrangian mechanics was in turn re-formulated in 1833 by William Rowan Hamilton. Some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. Some of these difficulties related to compatibility with electromagnetic theory, and the famous Michelson–Morley experiment. The resolution of these problems led to the special theory of relativity, often still considered a part of classical mechanics. A second set of difficulties was related to thermodynamics. When combined with thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical mechanics, in which entropy is not a well-defined quantity. Black-body radiation could not be explained without the introduction of quanta. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photo-electric effect. The effort at resolving these problems led to the development of quantum mechanics. Since the end of the 20th century, classical mechanics in physics has no longer been an independent theory. Instead, classical mechanics is now considered an approximate theory to the more general quantum mechanics.
Emphasis has shifted to understanding the fundamental forces of nature as in the Standard Model and its more modern extensions into a unified theory of everything. Classical mechanics is a theory useful for the study of the motion of non-quantum mechanical, low-energy particles in weak gravitational fields. == See also == == Notes == == References == == Further reading == Alonso, M.; Finn, J. (1992). Fundamental University Physics. Addison-Wesley. Feynman, Richard (1999). The Feynman Lectures on Physics. Perseus Publishing. ISBN 978-0-7382-0092-7. Feynman, Richard; Phillips, Richard (1998). Six Easy Pieces. Perseus Publishing. ISBN 978-0-201-32841-7. Goldstein, Herbert; Charles P. Poole; John L. Safko (2002). Classical Mechanics (3rd ed.). Addison Wesley. ISBN 978-0-201-65702-9. Kibble, Tom W.B.; Berkshire, Frank H. (2004). Classical Mechanics (5th ed.). Imperial College Press. ISBN 978-1-86094-424-6. Kleppner, D.; Kolenkow, R.J. (1973). An Introduction to Mechanics. McGraw-Hill. ISBN 978-0-07-035048-9. Landau, L.D.; Lifshitz, E.M. (1972). Course of Theoretical Physics, Vol. 1 – Mechanics. Franklin Book Company. ISBN 978-0-08-016739-8. Morin, David (2008). Introduction to Classical Mechanics: With Problems and Solutions (1st ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-87622-3. Gerald Jay Sussman; Jack Wisdom (2001). Structure and Interpretation of Classical Mechanics. MIT Press. ISBN 978-0-262-19455-6. O'Donnell, Peter J. (2015). Essential Dynamics and Relativity. CRC Press. ISBN 978-1-4665-8839-4. Thornton, Stephen T.; Marion, Jerry B. (2003). Classical Dynamics of Particles and Systems (5th ed.). Brooks Cole. ISBN 978-0-534-40896-1. == External links == Crowell, Benjamin. Light and Matter (an introductory text, uses algebra with optional sections involving calculus) Fitzpatrick, Richard. Classical Mechanics (uses calculus) Hoiland, Paul (2004). Preferred Frames of Reference & Relativity Horbatsch, Marko, "Classical Mechanics Course Notes". Rosu, Haret C., "Classical Mechanics". Physics Education. 1999. [arxiv.org : physics/9909035] Shapiro, Joel A. (2003). Classical Mechanics Sussman, Gerald Jay & Wisdom, Jack & Mayer, Meinhard E. (2001). Structure and Interpretation of Classical Mechanics Tong, David. Classical Dynamics (Cambridge lecture notes on Lagrangian and Hamiltonian formalism) Kinematic Models for Design Digital Library (KMODDL) Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering. MIT OpenCourseWare 8.01: Classical Mechanics Free videos of actual course lectures with links to lecture notes, assignments and exams. Alejandro A. Torassa, On Classical Mechanics
Wikipedia/Kinetics_(dynamics)
An intermolecular force (IMF; also secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles (e.g. atoms or ions). Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics. The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell, Boltzmann and Pauling. Attractive intermolecular forces are categorized into the following types: hydrogen bonding; ion–dipole forces and ion–induced dipole forces; cation–π, σ–π and π–π bonding; van der Waals forces (Keesom force, Debye force, and London dispersion force); cation–cation bonding; and salt bridges (protein and supramolecular). Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity and pressure–volume–temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials, such as the Mie potential, Buckingham potential or Lennard-Jones potential. In the broadest sense, intermolecular interaction can be understood as any interaction between particles (molecules, atoms, ions and molecular ions) in which the formation of chemical (that is, ionic, covalent or metallic) bonds does not occur. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true. For example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme, or between a molecule and a catalyst, but several such weak interactions with the required spatial configuration of the active center of the enzyme lead to a significant restructuring that changes the energy state of the molecules or the substrate, which ultimately leads to the breaking of some covalent chemical bonds and the formation of others. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme; therefore the importance of these interactions is especially great in biochemistry and molecular biology, and is the basis of enzymology.) == Hydrogen bonding == A hydrogen bond refers to the attraction between a hydrogen atom that is covalently bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine, and another highly electronegative atom. The hydrogen bond is often described as a strong electrostatic interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs.
The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in the hydrogen bond is termed the acceptor molecule. The number of active pairs is limited by whichever is smaller: the number of hydrogens the donor has or the number of lone pairs the acceptor has. Water molecules, for example, have four active bonds: the oxygen atom's two lone pairs each interact with a hydrogen of a neighbouring molecule, and each of the molecule's two hydrogen atoms interacts with a lone pair of a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural. == Salt bridge == The attraction between cationic and anionic sites is a noncovalent, or intermolecular, interaction which is usually referred to as ion pairing or a salt bridge. It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and is often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and, in the solid state, usually show contacts determined only by the van der Waals radii of the ions. Inorganic as well as organic ions display, in water at moderate ionic strength I, similar salt-bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, for example, a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.
The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces". === Ion–dipole and ion–induced dipole forces === Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding. An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is hydration of ions in water which give rise to hydration enthalpy. The polar water molecules surround themselves around ions in water and the energy released during the process is known as hydration enthalpy. The interaction has its immense importance in justifying the stability of various ions (like Cu2+) in water. An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule. == Van der Waals forces == The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies. === Keesom force (permanent dipole – permanent dipole) === The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent. They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle averaged interaction is given by the following equation: − d 1 2 d 2 2 24 π 2 ε 0 2 ε r 2 k B T r 6 = V , {\displaystyle {\frac {-d_{1}^{2}d_{2}^{2}}{24\pi ^{2}\varepsilon _{0}^{2}\varepsilon _{r}^{2}k_{\text{B}}Tr^{6}}}=V,} where d = electric dipole moment, ε 0 {\displaystyle \varepsilon _{0}} = permittivity of free space, ε r {\displaystyle \varepsilon _{r}} = dielectric constant of surrounding material, T = temperature, k B {\displaystyle k_{\text{B}}} = Boltzmann constant, and r = distance between molecules. 
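As a rough numerical illustration of the angle-averaged formula above (a minimal Python sketch; the water-like dipole moment of 1.85 D, the 0.4 nm separation, the 298 K temperature and εr = 1 are illustrative assumptions, not values taken from the text):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB = 1.380649e-23         # Boltzmann constant, J/K
NA = 6.02214076e23        # Avogadro constant, 1/mol
DEBYE = 3.33564e-30       # 1 debye in C*m

def keesom_energy(d1, d2, r, T, eps_r=1.0):
    """Angle-averaged Keesom energy V = -d1^2 d2^2 / (24 pi^2 eps0^2 eps_r^2 kB T r^6)."""
    return -(d1**2 * d2**2) / (24 * math.pi**2 * EPS0**2 * eps_r**2 * KB * T * r**6)

# Two water-like dipoles of 1.85 D, 0.4 nm apart, at 298 K
V = keesom_energy(1.85 * DEBYE, 1.85 * DEBYE, 0.4e-9, 298.0)
print(V * NA / 1000.0)    # roughly -3 kJ/mol: far weaker than a covalent bond
```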
=== Debye force (permanent dipoles–induced dipoles) === The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with a permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions. The induced dipole forces arise from induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and a multipole induced by it on another. This interaction is called the Debye force, named after Peter J. W. Debye. One example of an induction interaction between a permanent dipole and an induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle averaged interaction is given by the following equation: − d 1 2 α 2 16 π 2 ε 0 2 ε r 2 r 6 = V , {\displaystyle {\frac {-d_{1}^{2}\alpha _{2}}{16\pi ^{2}\varepsilon _{0}^{2}\varepsilon _{r}^{2}r^{6}}}=V,} where α 2 {\displaystyle \alpha _{2}} = polarizability. This kind of interaction can be expected between any polar molecule and any non-polar/symmetrical molecule. The induction-interaction force is far weaker than the dipole–dipole interaction, but stronger than the London dispersion force. === London dispersion force (fluctuating dipole–induced dipole interaction) === The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom–atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range. == Relative strength of forces == This comparison is approximate. The actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. We may consider that for static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance.
But it is not so for big moving systems like enzyme molecules interacting with substrate molecules. Here the numerous intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which these bonds cause some covalent bonds to be broken while others are formed, in this way enabling the thousands of enzymatic reactions that are so important for living organisms. == Effect on the behavior of gases == Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor). In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is a measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature. When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces. == Quantum mechanical theories == Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals force and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful methods in quantum chemistry for visualizing this kind of intermolecular interaction is the non-covalent interaction index, which is based on the electron density of the system; London dispersion forces play a large role in it. Concerning electron density topology, methods based on electron density gradients have emerged recently, notably the IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology. == See also == == References ==
Wikipedia/Intermolecular_force
Physical Chemistry Chemical Physics is a weekly peer-reviewed scientific journal publishing research and review articles on any aspect of physical chemistry, chemical physics, and biophysical chemistry. It is published by the Royal Society of Chemistry on behalf of eighteen participating societies. The editor-in-chief is Anouk Rijs (Vrije Universiteit Amsterdam). The journal was established in 1999 as the result of a merger between Faraday Transactions and a number of other physical chemistry journals published by different societies. == Owner societies == The journal is run by an Ownership Board, on which all the member societies have equal representation. The eighteen participating societies are: == Article types == The journal publishes the following types of articles: Research Papers, original scientific work that has not been published previously; Communications, original scientific work that has not been published previously and is of an urgent nature; Perspectives, review articles of interest to a broad readership which are commissioned by the editorial board; and Comments, a medium for the discussion and exchange of scientific opinions, normally concerning material previously published in the journal. == Abstracting and indexing == The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.945. == See also == List of scientific journals in chemistry Annual Reports on the Progress of Chemistry Section C == References == == External links == Official website
Wikipedia/Physical_Chemistry_Chemical_Physics
In physics, helicity is the projection of the spin onto the direction of momentum. Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: "left" is negative, "right" is positive. == Overview == The angular momentum J is the sum of an orbital angular momentum L and a spin S. The relationship between orbital angular momentum L, the position operator r and the linear momentum (orbit part) p is L = r × p {\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} } so L's component in the direction of p is zero. Thus, helicity is just the projection of the spin onto the direction of linear momentum. The helicity of a particle is positive (" right-handed") if the direction of its spin is the same as the direction of its motion and negative ("left-handed") if opposite. Helicity is conserved. That is, the helicity commutes with the Hamiltonian, and thus, in the absence of external forces, is time-invariant. It is also rotationally invariant, in that a rotation applied to the system leaves the helicity unchanged. Helicity, however, is not Lorentz invariant; under the action of a Lorentz boost, the helicity may change sign. Consider, for example, a baseball, pitched as a gyroball, so that its spin axis is aligned with the direction of the pitch. It will have one helicity with respect to the point of view of the players on the field, but would appear to have a flipped helicity in any frame moving faster than the ball. === Comparison with chirality === In this sense, helicity can be contrasted to chirality, which is Lorentz invariant, but is not a constant of motion for massive particles. For massless particles, the two coincide: The helicity is equal to the chirality, both are Lorentz invariant, and both are constants of motion. In quantum mechanics, angular momentum is quantized, and thus helicity is quantized as well. Because the eigenvalues of spin with respect to an axis have discrete values, the eigenvalues of helicity are also discrete. For a massive particle of spin S, the eigenvalues of helicity are S, S − 1, S − 2, ..., −S.: 12  For massless particles, not all of spin eigenvalues correspond to physically meaningful degrees of freedom: For example, the photon is a massless spin 1 particle with helicity eigenvalues −1 and +1, but the eigenvalue 0 is not physically present. All known spin ⁠1/2⁠ particles have non-zero mass; however, for hypothetical massless spin ⁠1/2⁠ particles (the Weyl spinors), helicity is equivalent to the chirality operator multiplied by ⁠1/2⁠ħ. By contrast, for massive particles, distinct chirality states (e.g., as occur in the weak interaction charges) have both positive and negative helicity components, in ratios proportional to the mass of the particle. A treatment of the helicity of gravitational waves can be found in Weinberg. In summary, they come in only two forms: +2 and −2, while the +1, 0 and −1 helicities are "non-dynamical" (they can be removed by a gauge transformation). == Little group == In 3 + 1 dimensions, the little group for a massless particle is the double cover of SE(2). This has unitary representations which are invariant under the SE(2) "translations" and transform as eihθ under a SE(2) rotation by θ. This is the helicity h representation. There is also another unitary representation which transforms non-trivially under the SE(2) translations. This is the continuous spin representation. 
In d + 1 dimensions, the little group is the double cover of SE(d − 1) (the case where d ≤ 2 is more complicated because of anyons, etc.). As before, there are unitary representations which don't transform under the SE(d − 1) "translations" (the "standard" representations) and "continuous spin" representations. == See also == Chirality (physics) Helicity basis Gyroball, a macroscopic object (specifically a baseball) exhibiting an analogous phenomenon Wigner's classification Pauli–Lubanski pseudovector == References == == Other sources ==
Wikipedia/Helicity_(particle_physics)
The Schrödinger equation is a partial differential equation that governs the wave function of a non-relativistic quantum-mechanical system.: 1–2  Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, an Austrian physicist, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933. Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.: II:268  The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics". The equation given by Schrödinger is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space and time are not on equal footing. Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. Another partial differential equation, the Klein–Gordon equation, led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square root of the Klein–Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein–Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles. == Definition == === Preliminaries === Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension: i ℏ ∂ ∂ t Ψ ( x , t ) = [ − ℏ 2 2 m ∂ 2 ∂ x 2 + V ( x , t ) ] Ψ ( x , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (x,t)=\left[-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x,t)\right]\Psi (x,t).} Here, Ψ ( x , t ) {\displaystyle \Psi (x,t)} is a wave function, a function that assigns a complex number to each point x {\displaystyle x} at each time t {\displaystyle t} . 
The parameter m {\displaystyle m} is the mass of the particle, and V ( x , t ) {\displaystyle V(x,t)} is the potential that represents the environment in which the particle exists.: 74  The constant i {\displaystyle i} is the imaginary unit, and ℏ {\displaystyle \hbar } is the reduced Planck constant, which has units of action (energy multiplied by time).: 10  Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector | ψ ⟩ {\displaystyle |\psi \rangle } belonging to a separable complex Hilbert space H {\displaystyle {\mathcal {H}}} . This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys ⟨ ψ | ψ ⟩ = 1 {\displaystyle \langle \psi |\psi \rangle =1} . The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable functions L 2 {\displaystyle L^{2}} , while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space C 2 {\displaystyle \mathbb {C} ^{2}} with the usual inner product.: 322  Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ {\displaystyle \lambda } is non-degenerate and the probability is given by | ⟨ λ | ψ ⟩ | 2 {\displaystyle |\langle \lambda |\psi \rangle |^{2}} , where | λ ⟩ {\displaystyle |\lambda \rangle } is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ ψ | P λ | ψ ⟩ {\displaystyle \langle \psi |P_{\lambda }|\psi \rangle } , where P λ {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace. A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates, composed of elements outside the Hilbert space, as "generalized eigenvectors". These are used for calculational convenience and do not represent physical states.: 100–105  Thus, a position-space wave function Ψ ( x , t ) {\displaystyle \Psi (x,t)} as used above can be written as the inner product of a time-dependent state vector | Ψ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle } with unphysical but convenient "position eigenstates" | x ⟩ {\displaystyle |x\rangle } : Ψ ( x , t ) = ⟨ x | Ψ ( t ) ⟩ . {\displaystyle \Psi (x,t)=\langle x|\Psi (t)\rangle .} === Time-dependent equation === The form of the Schrödinger equation depends on the physical situation. 
The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:: 143  where t {\displaystyle t} is time, | Ψ ( t ) ⟩ {\displaystyle \vert \Psi (t)\rangle } is the state vector of the quantum system ( Ψ {\displaystyle \Psi } being the Greek letter psi), and H ^ {\displaystyle {\hat {H}}} is an observable, the Hamiltonian operator. The term "Schrödinger equation" can refer to both the general equation, or the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory). To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function.: 78  For example, given a wave function in position space Ψ ( x , t ) {\displaystyle \Psi (x,t)} as above, we have Pr ( x , t ) = | Ψ ( x , t ) | 2 . {\displaystyle \Pr(x,t)=|\Psi (x,t)|^{2}.} === Time-independent equation === The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation. where E {\displaystyle E} is the energy of the system.: 134  This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) E {\displaystyle E} . == Properties == === Linearity === The Schrödinger equation is a linear differential equation, meaning that if two state vectors | ψ 1 ⟩ {\displaystyle |\psi _{1}\rangle } and | ψ 2 ⟩ {\displaystyle |\psi _{2}\rangle } are solutions, then so is any linear combination | ψ ⟩ = a | ψ 1 ⟩ + b | ψ 2 ⟩ {\displaystyle |\psi \rangle =a|\psi _{1}\rangle +b|\psi _{2}\rangle } of the two state vectors where a and b are any complex numbers.: 25  Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. 
In this basis, a time-dependent state vector | Ψ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle } can be written as the linear combination | Ψ ( t ) ⟩ = ∑ n A n e − i E n t / ℏ | ψ E n ⟩ , {\displaystyle |\Psi (t)\rangle =\sum _{n}A_{n}e^{{-iE_{n}t}/\hbar }|\psi _{E_{n}}\rangle ,} where A n {\displaystyle A_{n}} are complex numbers and the vectors | ψ E n ⟩ {\displaystyle |\psi _{E_{n}}\rangle } are solutions of the time-independent equation H ^ | ψ E n ⟩ = E n | ψ E n ⟩ {\displaystyle {\hat {H}}|\psi _{E_{n}}\rangle =E_{n}|\psi _{E_{n}}\rangle } . === Unitarity === Holding the Hamiltonian H ^ {\displaystyle {\hat {H}}} constant, the Schrödinger equation has the solution | Ψ ( t ) ⟩ = e − i H ^ t / ℏ | Ψ ( 0 ) ⟩ . {\displaystyle |\Psi (t)\rangle =e^{-i{\hat {H}}t/\hbar }|\Psi (0)\rangle .} The operator U ^ ( t ) = e − i H ^ t / ℏ {\displaystyle {\hat {U}}(t)=e^{-i{\hat {H}}t/\hbar }} is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is | Ψ ( 0 ) ⟩ {\displaystyle |\Psi (0)\rangle } , then the state at a later time t {\displaystyle t} will be given by | Ψ ( t ) ⟩ = U ^ ( t ) | Ψ ( 0 ) ⟩ {\displaystyle |\Psi (t)\rangle ={\hat {U}}(t)|\Psi (0)\rangle } for some unitary operator U ^ ( t ) {\displaystyle {\hat {U}}(t)} . Conversely, suppose that U ^ ( t ) {\displaystyle {\hat {U}}(t)} is a continuous family of unitary operators parameterized by t {\displaystyle t} . Without loss of generality, the parameterization can be chosen so that U ^ ( 0 ) {\displaystyle {\hat {U}}(0)} is the identity operator and that U ^ ( t / N ) N = U ^ ( t ) {\displaystyle {\hat {U}}(t/N)^{N}={\hat {U}}(t)} for any N > 0 {\displaystyle N>0} . Then U ^ ( t ) {\displaystyle {\hat {U}}(t)} depends upon the parameter t {\displaystyle t} in such a way that U ^ ( t ) = e − i G ^ t {\displaystyle {\hat {U}}(t)=e^{-i{\hat {G}}t}} for some self-adjoint operator G ^ {\displaystyle {\hat {G}}} , called the generator of the family U ^ ( t ) {\displaystyle {\hat {U}}(t)} . A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units). To see that the generator is Hermitian, note that with U ^ ( δ t ) ≈ U ^ ( 0 ) − i G ^ δ t {\displaystyle {\hat {U}}(\delta t)\approx {\hat {U}}(0)-i{\hat {G}}\delta t} , we have U ^ ( δ t ) † U ^ ( δ t ) ≈ ( U ^ ( 0 ) † + i G ^ † δ t ) ( U ^ ( 0 ) − i G ^ δ t ) = I + i δ t ( G ^ † − G ^ ) + O ( δ t 2 ) , {\displaystyle {\hat {U}}(\delta t)^{\dagger }{\hat {U}}(\delta t)\approx ({\hat {U}}(0)^{\dagger }+i{\hat {G}}^{\dagger }\delta t)({\hat {U}}(0)-i{\hat {G}}\delta t)=I+i\delta t({\hat {G}}^{\dagger }-{\hat {G}})+O(\delta t^{2}),} so U ^ ( t ) {\displaystyle {\hat {U}}(t)} is unitary only if, to first order, its derivative is Hermitian. === Changes of basis === The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. 
This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle.: 182  The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term: i ℏ d d t | Ψ ( t ) ⟩ = ( 1 2 m p ^ 2 + V ^ ) | Ψ ( t ) ⟩ . {\displaystyle i\hbar {\frac {d}{dt}}|\Psi (t)\rangle =\left({\frac {1}{2m}}{\hat {p}}^{2}+{\hat {V}}\right)|\Psi (t)\rangle .} Writing r {\displaystyle \mathbf {r} } for a three-dimensional position vector and p {\displaystyle \mathbf {p} } for a three-dimensional momentum vector, the position-space Schrödinger equation is i ℏ ∂ ∂ t Ψ ( r , t ) = − ℏ 2 2 m ∇ 2 Ψ ( r , t ) + V ( r ) Ψ ( r , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).} The momentum-space counterpart involves the Fourier transforms of the wave function and the potential: i ℏ ∂ ∂ t Ψ ~ ( p , t ) = p 2 2 m Ψ ~ ( p , t ) + ( 2 π ℏ ) − 3 / 2 ∫ d 3 p ′ V ~ ( p − p ′ ) Ψ ~ ( p ′ , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}{\tilde {\Psi }}(\mathbf {p} ,t)={\frac {\mathbf {p} ^{2}}{2m}}{\tilde {\Psi }}(\mathbf {p} ,t)+(2\pi \hbar )^{-3/2}\int d^{3}\mathbf {p} '\,{\tilde {V}}(\mathbf {p} -\mathbf {p} '){\tilde {\Psi }}(\mathbf {p} ',t).} The functions Ψ ( r , t ) {\displaystyle \Psi (\mathbf {r} ,t)} and Ψ ~ ( p , t ) {\displaystyle {\tilde {\Psi }}(\mathbf {p} ,t)} are derived from | Ψ ( t ) ⟩ {\displaystyle |\Psi (t)\rangle } by Ψ ( r , t ) = ⟨ r | Ψ ( t ) ⟩ , {\displaystyle \Psi (\mathbf {r} ,t)=\langle \mathbf {r} |\Psi (t)\rangle ,} Ψ ~ ( p , t ) = ⟨ p | Ψ ( t ) ⟩ , {\displaystyle {\tilde {\Psi }}(\mathbf {p} ,t)=\langle \mathbf {p} |\Psi (t)\rangle ,} where | r ⟩ {\displaystyle |\mathbf {r} \rangle } and | p ⟩ {\displaystyle |\mathbf {p} \rangle } do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space. When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables x {\displaystyle x} and p {\displaystyle p} are promoted to self-adjoint operators x ^ {\displaystyle {\hat {x}}} and p ^ {\displaystyle {\hat {p}}} that satisfy the canonical commutation relation [ x ^ , p ^ ] = i ℏ . {\displaystyle [{\hat {x}},{\hat {p}}]=i\hbar .} This implies that: 190  ⟨ x | p ^ | Ψ ⟩ = − i ℏ d d x Ψ ( x ) , {\displaystyle \langle x|{\hat {p}}|\Psi \rangle =-i\hbar {\frac {d}{dx}}\Psi (x),} so the action of the momentum operator p ^ {\displaystyle {\hat {p}}} in the position-space representation is − i ℏ d d x {\textstyle -i\hbar {\frac {d}{dx}}} . Thus, p ^ 2 {\displaystyle {\hat {p}}^{2}} becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian ∇ 2 {\displaystyle \nabla ^{2}} . The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. 
Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform.: 103–104  In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples Ψ ~ ( p ) {\displaystyle {\tilde {\Psi }}(p)} with Ψ ~ ( p + ℏ K ) {\displaystyle {\tilde {\Psi }}(p+\hbar K)} for only discrete reciprocal lattice vectors K {\displaystyle K} . This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.: 138  === Probability current === The Schrödinger equation is consistent with local probability conservation.: 238  It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. In contrast to, for example, the Klein Gordon equation, although a redefined inner product of a wavefunction can be time independent, the total volume integral of modulus square of the wavefunction need not be time independent. The continuity equation for probability in non relativistic quantum mechanics is stated as: ∂ ∂ t ρ ( r , t ) + ∇ ⋅ j = 0 , {\displaystyle {\frac {\partial }{\partial t}}\rho \left(\mathbf {r} ,t\right)+\nabla \cdot \mathbf {j} =0,} where j = 1 2 m ( Ψ ∗ p ^ Ψ − Ψ p ^ Ψ ∗ ) = − i ℏ 2 m ( ψ ∗ ∇ ψ − ψ ∇ ψ ∗ ) = ℏ m Im ⁡ ( ψ ∗ ∇ ψ ) {\displaystyle \mathbf {j} ={\frac {1}{2m}}\left(\Psi ^{*}{\hat {\mathbf {p} }}\Psi -\Psi {\hat {\mathbf {p} }}\Psi ^{*}\right)=-{\frac {i\hbar }{2m}}(\psi ^{*}\nabla \psi -\psi \nabla \psi ^{*})={\frac {\hbar }{m}}\operatorname {Im} (\psi ^{*}\nabla \psi )} is the probability current or probability flux (flow per unit area). If the wavefunction is represented as ψ ( x , t ) = ρ ( x , t ) exp ⁡ ( i S ( x , t ) ℏ ) , {\textstyle \psi ({\bf {x}},t)={\sqrt {\rho ({\bf {x}},t)}}\exp \left({\frac {iS({\bf {x}},t)}{\hbar }}\right),} where S ( x , t ) {\displaystyle S(\mathbf {x} ,t)} is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as: j = ρ ∇ S m {\displaystyle \mathbf {j} ={\frac {\rho \nabla S}{m}}} Hence, the spatial variation of the phase of a wavefunction is said to characterize the probability flux of the wavefunction. Although the ∇ S m {\textstyle {\frac {\nabla S}{m}}} term appears to play the role of velocity, it does not represent velocity at a point since simultaneous measurement of position and velocity violates uncertainty principle. === Separation of variables === If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads: i ℏ ∂ ∂ t Ψ ( r , t ) = [ − ℏ 2 2 m ∇ 2 + V ( r ) ] Ψ ( r , t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=\left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} )\right]\Psi (\mathbf {r} ,t).} The operator on the left side depends only on time; the one on the right side depends only on space. Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts Ψ ( r , t ) = ψ ( r ) τ ( t ) , {\displaystyle \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )\tau (t),} where ψ ( r ) {\displaystyle \psi (\mathbf {r} )} is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and τ ( t ) {\displaystyle \tau (t)} is a function of time only. 
Substituting this expression for Ψ {\displaystyle \Psi } into the time dependent left hand side shows that τ ( t ) {\displaystyle \tau (t)} is a phase factor: Ψ ( r , t ) = ψ ( r ) e − i E t / ℏ . {\displaystyle \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )e^{-i{Et/\hbar }}.} A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.: 143ff  The spatial part of the full wave function solves the equation ∇ 2 ψ ( r ) + 2 m ℏ 2 [ E − V ( r ) ] ψ ( r ) = 0 , {\displaystyle \nabla ^{2}\psi (\mathbf {r} )+{\frac {2m}{\hbar ^{2}}}\left[E-V(\mathbf {r} )\right]\psi (\mathbf {r} )=0,} where the energy E {\displaystyle E} appears in the phase factor. This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is an example of the spectral theorem, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix. Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated, as in ψ ( r ) = ψ x ( x ) ψ y ( y ) ψ z ( z ) , {\displaystyle \psi (\mathbf {r} )=\psi _{x}(x)\psi _{y}(y)\psi _{z}(z),} or radial and angular coordinates might be separated: ψ ( r ) = ψ r ( r ) ψ θ ( θ ) ψ ϕ ( ϕ ) . {\displaystyle \psi (\mathbf {r} )=\psi _{r}(r)\psi _{\theta }(\theta )\psi _{\phi }(\phi ).} == Examples == === Particle in a box === The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside.: 77–78  For the one-dimensional case in the x {\displaystyle x} direction, the time-independent Schrödinger equation may be written − ℏ 2 2 m d 2 ψ d x 2 = E ψ . {\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .} With the differential operator defined by p ^ x = − i ℏ d d x {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}} the previous equation is evocative of the classic kinetic energy analogue, 1 2 m p ^ x 2 = E , {\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,} with state ψ {\displaystyle \psi } in this case having energy E {\displaystyle E} coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are ψ ( x ) = A e i k x + B e − i k x E = ℏ 2 k 2 2 m {\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}} or, from Euler's formula, ψ ( x ) = C sin ⁡ ( k x ) + D cos ⁡ ( k x ) . 
{\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).} The infinite potential walls of the box determine the values of C , D , {\displaystyle C,D,} and k {\displaystyle k} at x = 0 {\displaystyle x=0} and x = L {\displaystyle x=L} where ψ {\displaystyle \psi } must be zero. Thus, at x = 0 {\displaystyle x=0} , ψ ( 0 ) = 0 = C sin ⁡ ( 0 ) + D cos ⁡ ( 0 ) = D {\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D} and D = 0 {\displaystyle D=0} . At x = L {\displaystyle x=L} , ψ ( L ) = 0 = C sin ⁡ ( k L ) , {\displaystyle \psi (L)=0=C\sin(kL),} in which C {\displaystyle C} cannot be zero as this would conflict with the postulate that ψ {\displaystyle \psi } has norm 1. Therefore, since sin ⁡ ( k L ) = 0 {\displaystyle \sin(kL)=0} , k L {\displaystyle kL} must be an integer multiple of π {\displaystyle \pi } , k = n π L n = 1 , 2 , 3 , … . {\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .} This constraint on k {\displaystyle k} implies a constraint on the energy levels, yielding E n = ℏ 2 π 2 n 2 2 m L 2 = n 2 h 2 8 m L 2 . {\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.} A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. === Harmonic oscillator === The Schrödinger equation for this situation is E ψ = − ℏ 2 2 m d 2 d x 2 ψ + 1 2 m ω 2 x 2 ψ , {\displaystyle E\psi =-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\psi +{\frac {1}{2}}m\omega ^{2}x^{2}\psi ,} where x {\displaystyle x} is the displacement and ω {\displaystyle \omega } the angular frequency. Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics. The solutions in position space are ψ n ( x ) = 1 2 n n ! ( m ω π ℏ ) 1 / 4 e − m ω x 2 2 ℏ H n ( m ω ℏ x ) , {\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\ \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\ e^{-{\frac {m\omega x^{2}}{2\hbar }}}\ {\mathcal {H}}_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),} where n ∈ { 0 , 1 , 2 , … } {\displaystyle n\in \{0,1,2,\ldots \}} , and the functions H n {\displaystyle {\mathcal {H}}_{n}} are the Hermite polynomials of order n {\displaystyle n} . The solution set may be generated by ψ n ( x ) = 1 n ! ( m ω 2 ℏ ) n ( x − ℏ m ω d d x ) n ( m ω π ℏ ) 1 4 e − m ω x 2 2 ℏ . {\displaystyle \psi _{n}(x)={\frac {1}{\sqrt {n!}}}\left({\sqrt {\frac {m\omega }{2\hbar }}}\right)^{n}\left(x-{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)^{n}\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}e^{\frac {-m\omega x^{2}}{2\hbar }}.} The eigenvalues are E n = ( n + 1 2 ) ℏ ω . 
{\displaystyle E_{n}=\left(n+{\frac {1}{2}}\right)\hbar \omega .} The case n = 0 {\displaystyle n=0} is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian. The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.: 352  === Hydrogen atom === The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is E ψ = − ℏ 2 2 μ ∇ 2 ψ − q 2 4 π ε 0 r ψ {\displaystyle E\psi =-{\frac {\hbar ^{2}}{2\mu }}\nabla ^{2}\psi -{\frac {q^{2}}{4\pi \varepsilon _{0}r}}\psi } where q {\displaystyle q} is the electron charge, r {\displaystyle \mathbf {r} } is the position of the electron relative to the nucleus, r = | r | {\displaystyle r=|\mathbf {r} |} is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein ε 0 {\displaystyle \varepsilon _{0}} is the permittivity of free space and μ = m q m p m q + m p {\displaystyle \mu ={\frac {m_{q}m_{p}}{m_{q}+m_{p}}}} is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass m p {\displaystyle m_{p}} and the electron of mass m q {\displaystyle m_{q}} . The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass. The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus, ψ ( r , θ , φ ) = R ( r ) Y ℓ m ( θ , φ ) = R ( r ) Θ ( θ ) Φ ( φ ) , {\displaystyle \psi (r,\theta ,\varphi )=R(r)Y_{\ell }^{m}(\theta ,\varphi )=R(r)\Theta (\theta )\Phi (\varphi ),} where R are radial functions and Y l m ( θ , φ ) {\displaystyle Y_{l}^{m}(\theta ,\varphi )} are spherical harmonics of degree ℓ {\displaystyle \ell } and order m {\displaystyle m} . This is the only atom for which the Schrödinger equation has been solved for exactly. Multi-electron atoms require approximate methods. The family of solutions are: ψ n ℓ m ( r , θ , φ ) = ( 2 n a 0 ) 3 ( n − ℓ − 1 ) ! 2 n [ ( n + ℓ ) ! ] e − r / n a 0 ( 2 r n a 0 ) ℓ L n − ℓ − 1 2 ℓ + 1 ( 2 r n a 0 ) ⋅ Y ℓ m ( θ , φ ) {\displaystyle \psi _{n\ell m}(r,\theta ,\varphi )={\sqrt {\left({\frac {2}{na_{0}}}\right)^{3}{\frac {(n-\ell -1)!}{2n[(n+\ell )!]}}}}e^{-r/na_{0}}\left({\frac {2r}{na_{0}}}\right)^{\ell }L_{n-\ell -1}^{2\ell +1}\left({\frac {2r}{na_{0}}}\right)\cdot Y_{\ell }^{m}(\theta ,\varphi )} where a 0 = 4 π ε 0 ℏ 2 m q q 2 {\displaystyle a_{0}={\frac {4\pi \varepsilon _{0}\hbar ^{2}}{m_{q}q^{2}}}} is the Bohr radius, L n − ℓ − 1 2 ℓ + 1 ( ⋯ ) {\displaystyle L_{n-\ell -1}^{2\ell +1}(\cdots )} are the generalized Laguerre polynomials of degree n − ℓ − 1 {\displaystyle n-\ell -1} , n , ℓ , m {\displaystyle n,\ell ,m} are the principal, azimuthal, and magnetic quantum numbers respectively, which take the values n = 1 , 2 , 3 , … , {\displaystyle n=1,2,3,\dots ,} ℓ = 0 , 1 , 2 , … , n − 1 , {\displaystyle \ell =0,1,2,\dots ,n-1,} m = − ℓ , … , ℓ . {\displaystyle m=-\ell ,\dots ,\ell .} === Approximate solutions === It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. 
Accordingly, approximate solutions are obtained using techniques like variational methods and WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory. == Semiclassical limit == One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics.: 302  The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential V {\displaystyle V} , the Ehrenfest theorem says m d d t ⟨ x ⟩ = ⟨ p ⟩ ; d d t ⟨ p ⟩ = − ⟨ V ′ ( X ) ⟩ . {\displaystyle m{\frac {d}{dt}}\langle x\rangle =\langle p\rangle ;\quad {\frac {d}{dt}}\langle p\rangle =-\left\langle V'(X)\right\rangle .} Although the first of these equations is consistent with the classical behavior, the second is not: If the pair ( ⟨ X ⟩ , ⟨ P ⟩ ) {\displaystyle (\langle X\rangle ,\langle P\rangle )} were to satisfy Newton's second law, the right-hand side of the second equation would have to be − V ′ ( ⟨ X ⟩ ) {\displaystyle -V'\left(\left\langle X\right\rangle \right)} which is typically not the same as − ⟨ V ′ ( X ) ⟩ {\displaystyle -\left\langle V'(X)\right\rangle } . For a general V ′ {\displaystyle V'} , therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however, V ′ {\displaystyle V'} is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories. For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point x 0 {\displaystyle x_{0}} , then V ′ ( ⟨ X ⟩ ) {\displaystyle V'\left(\left\langle X\right\rangle \right)} and ⟨ V ′ ( X ) ⟩ {\displaystyle \left\langle V'(X)\right\rangle } will be almost the same, since both will be approximately equal to V ′ ( x 0 ) {\displaystyle V'(x_{0})} . In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position. The Schrödinger equation in its general form i ℏ ∂ ∂ t Ψ ( r , t ) = H ^ Ψ ( r , t ) {\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi \left(\mathbf {r} ,t\right)={\hat {H}}\Psi \left(\mathbf {r} ,t\right)} is closely related to the Hamilton–Jacobi equation (HJE) − ∂ ∂ t S ( q i , t ) = H ( q i , ∂ S ∂ q i , t ) {\displaystyle -{\frac {\partial }{\partial t}}S(q_{i},t)=H\left(q_{i},{\frac {\partial S}{\partial q_{i}}},t\right)} where S {\displaystyle S} is the classical action and H {\displaystyle H} is the Hamiltonian function (not operator).: 308  Here the generalized coordinates q i {\displaystyle q_{i}} for i = 1 , 2 , 3 {\displaystyle i=1,2,3} (used in the context of the HJE) can be set to the position in Cartesian coordinates as r = ( q 1 , q 2 , q 3 ) = ( x , y , z ) {\displaystyle \mathbf {r} =(q_{1},q_{2},q_{3})=(x,y,z)} . 
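The difference between ⟨V′(X)⟩ and V′(⟨X⟩) discussed above is easy to see numerically. The sketch below is only illustrative (the quartic potential V(x) = x⁴ and the Gaussian probability densities are assumptions, not taken from any particular treatment): for a broad packet the two quantities differ markedly, while for a narrow packet they nearly coincide.

```python
# Illustrative check (assumed quartic potential V(x) = x^4 and Gaussian probability densities,
# not taken from any particular treatment): for a nonlinear force, <V'(X)> generally differs
# from V'(<X>), and the two approach each other as the packet becomes narrowly localized.
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def V_prime(y):
    return 4 * y**3          # V(x) = x^4, so V'(x) = 4 x^3

def compare(x0, width):
    rho = np.exp(-((x - x0) ** 2) / (2 * width**2))  # Gaussian density centred at x0 with std width
    rho /= rho.sum() * dx                            # normalize
    mean_x = (x * rho).sum() * dx
    return V_prime(mean_x), (V_prime(x) * rho).sum() * dx

print(compare(1.0, 1.0))    # roughly (4.0, 16.0): clearly different for a broad packet
print(compare(1.0, 0.05))   # nearly equal for a tightly localized packet
```

Returning to the correspondence with the Hamilton–Jacobi equation: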
Substituting Ψ = ρ ( r , t ) e i S ( r , t ) / ℏ {\displaystyle \Psi ={\sqrt {\rho (\mathbf {r} ,t)}}e^{iS(\mathbf {r} ,t)/\hbar }} where ρ {\displaystyle \rho } is the probability density, into the Schrödinger equation and then taking the limit ℏ → 0 {\displaystyle \hbar \to 0} in the resulting equation yield the Hamilton–Jacobi equation. == Density matrices == Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead.: 74  A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written ρ ^ = | Ψ ⟩ ⟨ Ψ | . {\displaystyle {\hat {\rho }}=|\Psi \rangle \langle \Psi |.} The density-matrix analogue of the Schrödinger equation for wave functions is i ℏ ∂ ρ ^ ∂ t = [ H ^ , ρ ^ ] , {\displaystyle i\hbar {\frac {\partial {\hat {\rho }}}{\partial t}}=[{\hat {H}},{\hat {\rho }}],} where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices.: 312  If the Hamiltonian is time-independent, this equation can be easily solved to yield ρ ^ ( t ) = e − i H ^ t / ℏ ρ ^ ( 0 ) e i H ^ t / ℏ . {\displaystyle {\hat {\rho }}(t)=e^{-i{\hat {H}}t/\hbar }{\hat {\rho }}(0)e^{i{\hat {H}}t/\hbar }.} More generally, if the unitary operator U ^ ( t ) {\displaystyle {\hat {U}}(t)} describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by ρ ^ ( t ) = U ^ ( t ) ρ ^ ( 0 ) U ^ ( t ) † . {\displaystyle {\hat {\rho }}(t)={\hat {U}}(t){\hat {\rho }}(0){\hat {U}}(t)^{\dagger }.} Unitary evolution of a density matrix conserves its von Neumann entropy.: 267  == Relativistic quantum physics and quantum field theory == The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. For one reason, it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method. === Klein–Gordon and Dirac equations === Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation E 2 = ( p c ) 2 + ( m 0 c 2 ) 2 , {\displaystyle E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2},} instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. 
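The connection between this relativistic starting point and the nonrelativistic kinetic term in the Schrödinger Hamiltonian can be made concrete with a short symbolic check (an illustrative SymPy sketch; the symbol names are arbitrary): expanding E = √((pc)² + (m₀c²)²) for small momentum gives the rest energy plus p²/2m₀.

```python
# Short symbolic sketch (SymPy; symbol names are arbitrary): expanding the relativistic
# energy for small momentum recovers the rest energy plus the nonrelativistic kinetic
# term p^2/(2 m0) that appears in the Schrödinger Hamiltonian.
import sympy as sp

p, m0, c = sp.symbols("p m0 c", positive=True)
E = sp.sqrt((p * c) ** 2 + (m0 * c**2) ** 2)
print(sp.series(E, p, 0, 4))   # c**2*m0 + p**2/(2*m0) + O(p**4)
```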
The Klein–Gordon equation, − 1 c 2 ∂ 2 ∂ t 2 ψ + ∇ 2 ψ = m 2 c 2 ℏ 2 ψ , {\displaystyle -{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi +\nabla ^{2}\psi ={\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi ,} was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices α 1 , α 2 , α 3 , β {\displaystyle \alpha _{1},\alpha _{2},\alpha _{3},\beta } . Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read ( β m c 2 + c ( ∑ n = ⁡ 1 3 α n p n ) ) ψ = i ℏ ∂ ψ ∂ t . {\displaystyle \left(\beta mc^{2}+c\left(\sum _{n\mathop {=} 1}^{3}\alpha _{n}p_{n}\right)\right)\psi =i\hbar {\frac {\partial \psi }{\partial t}}.} This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is: H ^ Dirac = γ 0 [ c γ ⋅ ( p ^ − q A ) + m c 2 + γ 0 q φ ] , {\displaystyle {\hat {H}}_{\text{Dirac}}=\gamma ^{0}\left[c{\boldsymbol {\gamma }}\cdot \left({\hat {\mathbf {p} }}-q\mathbf {A} \right)+mc^{2}+\gamma ^{0}q\varphi \right],} in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1⁄2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle. For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass). In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields. === Fock space === As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function Ψ ( x , t ) {\displaystyle \Psi (x,t)} . This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. 
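The matrix algebra that makes the Dirac construction work can be verified directly. The sketch below is a hypothetical check using the standard Dirac representation (one common convention; the article does not fix a representation): the matrices must anticommute appropriately and square to the identity so that the Hamiltonian squared reproduces E² = (pc)² + (mc²)².

```python
# Hypothetical consistency check using the standard Dirac representation (one common
# convention; the article does not fix a representation): the 4x4 matrices alpha_1..3 and
# beta in the free Dirac equation above must satisfy {alpha_i, alpha_j} = 2*delta_ij*I,
# {alpha_i, beta} = 0 and beta^2 = I, so that squaring the Hamiltonian reproduces
# E^2 = (p c)^2 + (m c^2)^2.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
zero2 = np.zeros((2, 2), dtype=complex)
eye2 = np.eye(2, dtype=complex)

beta = np.block([[eye2, zero2], [zero2, -eye2]])
alphas = [np.block([[zero2, s], [s, zero2]]) for s in (sx, sy, sz)]

def anti(a, b):
    return a @ b + b @ a

ok = all(np.allclose(anti(alphas[i], alphas[j]), 2 * (i == j) * np.eye(4))
         for i in range(3) for j in range(3))
ok = ok and all(np.allclose(anti(a, beta), 0) for a in alphas)
ok = ok and np.allclose(beta @ beta, np.eye(4))
print(ok)  # True
```

Returning to the problem of variable particle number: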
A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways. == History == Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum p {\displaystyle p} of a photon is inversely proportional to its wavelength λ {\displaystyle \lambda } , or proportional to its wave number k {\displaystyle k} : p = h λ = ℏ k , {\displaystyle p={\frac {h}{\lambda }}=\hbar k,} where h {\displaystyle h} is the Planck constant and ℏ = h / 2 π {\displaystyle \hbar ={h}/{2\pi }} is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed. These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum L {\displaystyle L} according to L = n h 2 π = n ℏ . {\displaystyle L=n{\frac {h}{2\pi }}=n\hbar .} According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit: n λ = 2 π r . {\displaystyle n\lambda =2\pi r.} This approach essentially confined the electron wave in one dimension, along a circular orbit of radius r {\displaystyle r} . In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the Physical Review, according to Kamen. Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action. The equation he found is i ℏ ∂ ∂ t Ψ ( r , t ) = − ℏ 2 2 m ∇ 2 Ψ ( r , t ) + V ( r ) Ψ ( r , t ) . 
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).} By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units): ( E + e 2 r ) 2 ψ ( x ) = − ∇ 2 ψ ( x ) + m 2 ψ ( x ) . {\displaystyle \left(E+{\frac {e^{2}}{r}}\right)^{2}\psi (x)=-\nabla ^{2}\psi (x)+m^{2}\psi (x).} He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925. While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl: 3 ) Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926.: 1  Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ ( x , t ) {\displaystyle \Psi (\mathbf {x} ,t)} , moving in a potential well V {\displaystyle V} , created by the proton. This computation accurately reproduced the energy levels of the Bohr model. The Schrödinger equation details the behavior of Ψ {\displaystyle \Psi } but says nothing of its nature. Schrödinger tried to interpret the real part of Ψ ∂ Ψ ∗ ∂ t {\displaystyle \Psi {\frac {\partial \Psi ^{*}}{\partial t}}} as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of Ψ {\displaystyle \Psi } is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted Ψ {\displaystyle \Psi } as the probability amplitude, whose modulus squared is equal to probability density.: 220  Later, Schrödinger himself explained this interpretation as follows: The already ... mentioned psi-function.... is now the means for predicting probability of measurement results. In it is embodied the momentarily attained sum of theoretically based future expectation, somewhat as laid down in a catalog. == Interpretation == The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts. In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. 
The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort. Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful. Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation. == See also == == Notes == == References == == External links == "Schrödinger equation". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. Quantum Cook Book (PDF) and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware The Modern Revolution in Physics – an online textbook. Quantum Physics I at MIT OpenCourseWare
Wikipedia/Schrödinger_Equation
The plum pudding model is an obsolete scientific model of the atom. It was first proposed by J. J. Thomson in 1904 following his discovery of the electron in 1897, and was rendered obsolete by Ernest Rutherford's discovery of the atomic nucleus in 1911. The model tried to account for two properties of atoms then known: that there are electrons, and that atoms have no net electric charge. Logically there had to be an equal amount of positive charge to balance out the negative charge of the electrons. As Thomson had no idea as to the source of this positive charge, he tentatively proposed that it was everywhere in the atom, and that the atom was spherical. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. In such a sphere, the negatively charged electrons would distribute themselves in a more or less even manner throughout the volume, simultaneously repelling each other while being attracted to the positive sphere's center. Despite Thomson's efforts, his model couldn't account for emission spectra and valencies. Based on experimental studies of alpha particle scattering (in the gold foil experiment), Ernest Rutherford developed an alternative model for the atom featuring a compact nucleus where the positive charge is concentrated. Thomson's model is popularly referred to as the "plum pudding model" with the notion that the electrons are distributed uniformly like raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been coined by popular science writers to make the model easier to understand for the layman. The analogy is perhaps misleading because Thomson likened the positive sphere to a liquid rather than a solid since he thought the electrons moved around in it. == Significance == Thomson's model was the first atomic model to describe an internal structure. Before this, atoms were simply the basic units of weight by which the chemical elements combined, and their only properties were valency and relative weight to hydrogen. The model had no properties which concerned physicists, such as electric charge, magnetic moment, volume, or absolute mass, and because of this some physicists had doubted atoms even existed. Thomson hypothesized that the quantity, arrangement, and motions of electrons in the atom could explain its physical and chemical properties, such as emission spectra, valencies, reactivity, and ionization. He was on the right track, though his approach was based on classical mechanics and he did not have the insight to incorporate quantized energy into it. == Background == Throughout the 19th century evidence from chemistry and statistical mechanics accumulated that matter was composed of atoms. The structure of the atom was discussed, and by the end of the century the leading model: 175  was the vortex theory of the atom, proposed by William Thomson (later Lord Kelvin) in 1867. By 1890, J.J. Thomson had his own version called the "nebular atom" hypothesis, in which atoms were composed of immaterial vortices and suggested similarities between the arrangement of vortices and periodic regularity found among the chemical elements. Thomson's discovery of the electron in 1897 changed his views. Thomson called them "corpuscles" (particles), but they were more commonly called "electrons", the name G. J. Stoney had coined for the "fundamental unit quantity of electricity" in 1891. 
However even late in 1899, few scientists believed in subatomic particles.: I:365  Another emerging scientific theme of the 19th century was the discovery and study of radioactivity. Thomson discovered the electron by studying cathode rays, and in 1900 Henri Becquerel determined that the radiation from uranium, now called beta particles, had the same charge/mass ratio as cathode rays.: II:3  These beta particles were believed to be electrons travelling at high speed. The particles were used by Thomson to probe atoms to find evidence for his atomic theory. The other form of radiation critical to this era of atomic models was alpha particles. Heavier and slower than beta particles, these were the key tool used by Rutherford to find evidence against Thomson's model. In addition to the emerging atomic theory, the electron, and radiation, the last element of history was the many studies of atomic spectra published in the late 19th century. Part of the attraction of the vortex model was its possible role in describing the spectral data as vibrational responses to electromagnetic radiation.: 177  Neither Thomson's model nor its successor, Rutherford's model, made progress towards understanding atomic spectra. That would have to wait until Niels Bohr built the first quantum-based atom model. == Development == Thomson's model was the first to assign a specific inner structure to an atom,: 9  though his earliest descriptions did not include mathematical formulas. From 1897 through 1913, Thomson proposed a series of increasingly detailed polyelectron models for the atom.: 178  His first versions were qualitative culminating in his 1906 paper and follow on summaries. Thomson's model changed over the course of its initial publication, finally becoming a model with much more mobility containing electrons revolving in the dense field of positive charge rather than a static structure. Thomson attempted unsuccessfully to reshape his model to account for some of the major spectral lines experimentally known for several elements. === 1897 Corpuscles inside atoms === In a paper titled Cathode Rays, Thomson demonstrated that cathode rays are not light but made of negatively charged particles which he called corpuscles. He observed that cathode rays can be deflected by electric and magnetic fields, which does not happen with light rays. In a few paragraphs near the end of this long paper Thomson discusses the possibility that atoms were made of these corpuscles, calling them primordial atoms. Thomson believed that the intense electric field around the cathode caused the surrounding gas molecules to split up into their component corpuscles, thereby generating cathode rays. Thomson thus showed evidence that atoms were divisible, though he did not attempt to describe their structure at this point. Thomson notes that he was not the first scientist to propose that atoms are divisible, making reference to William Prout who in 1815 found that the atomic weights of various elements were multiples of hydrogen's atomic weight and hypothesised that all atoms were made of hydrogen atoms fused together. Prout's hypothesis was dismissed by chemists when by the 1830s it was found that some elements seemed to have a non-integer atomic weight—e.g. chlorine has an atomic weight of about 35.45. But the idea continued to intrigue scientists. The discrepancies were eventually explained with the discovery of isotopes in 1912. 
A few months after Thomson's paper appeared, George FitzGerald suggested that the corpuscle identified by Thomson from cathode rays and proposed as parts of an atom was a "free electron", as described by physicist Joseph Larmor and Hendrik Lorentz. While Thomson did not adopt the terminology, the connection convinced other scientists that cathode rays were particles, an important step in their eventual acceptance of an atomic model based on sub-atomic particles. In 1899 Thomson reiterated his atomic model in a paper that showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light.: 86  He estimated that the electron's mass was 0.0014 times that of the hydrogen ion (as a fraction: ⁠1/714⁠). In the conclusion of this paper he writes: I regard the atom as containing a large number of smaller bodies which I shall call corpuscles; these corpuscles are equal to each other; the mass of a corpuscle is the mass of the negative ion in a gas at low pressure, i.e. about 3 × 10−26 of a gramme. In the normal atom, this assemblage of corpuscles forms a system which is electrically neutral. The negative effect is balanced by something which causes the space through which the corpuscles are spread to act as if it had a charge of positive electricity equal in amount to the sum of the negative charges on the corpuscles. === 1904 Mechanical model of the atom === Thomson provided his first detailed description of the atom in his 1904 paper On the Structure of the Atom. Thomson starts with a short description of his model ... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ... Primarily focused on the electrons, Thomson adopted the positive sphere from Kelvin's atom model proposed a year earlier. He then gives a detailed mechanical analysis of such a system, distributing the electrons uniformly around a ring. The attraction of the positive electrification is balanced by the mutual repulsion of the electrons. His analysis focuses on stability, looking for cases where small changes in position are countered by restoring forces. After discussing his many formulae for stability he turned to analysing patterns in the number of electrons in various concentric rings of stable configurations. These regular patterns Thomson argued are analogous to the periodic law of chemistry behind the structure of the periodic table. This concept, that a model based on subatomic particles could account for chemical trends, encouraged interest in Thomson's model and influenced future work even if the details Thomson's electron assignments turned out to be incorrect.: 135  Thomson at this point believed that all the mass of the atom was carried by the electrons. This would mean that even a small atom would have to contain thousands of electrons, and the positive electrification that encapsulated them was without mass. === 1905 lecture on electron arrangements === In a lecture delivered to the Royal Institution of Great Britain in 1905, Thomson explained that it was too computationally difficult for him to calculate the movements of large numbers of electrons in the positive sphere, so he proposed a practical experiment. 
This involved magnetised pins pushed into cork discs and set afloat in a basin of water. The pins were oriented such that they repelled each other. Above the centre of the basin was suspended an electromagnet that attracted the pins. The equilibrium arrangement the pins took informed Thomson on what arrangements the electrons in an atom might take. For instance, he observed that while five pins would arrange themselves in a stable pentagon around the centre, six pins could not form a stable hexagon. Instead, one pin would move to the centre and the other five would form a pentagon around the centre pin, and this arrangement was stable. As he added more pins, they would arrange themselves in concentric rings around the centre. The experiment functioned in two dimensions instead of three, but Thomson inferred that the electrons in the atom arranged themselves in concentric shells and that they could move within these shells but did not move from one shell to another except when electrons were added to or subtracted from the atom. === 1906 Estimating electrons per atom === Before 1906 Thomson considered the atomic weight to be due to the mass of the electrons (which he continued to call "corpuscles"). Based on his own estimates of the electron mass, an atom would need tens of thousands of electrons to account for the mass. In 1906 he used three different methods (X-ray scattering, beta ray absorption, and the optical properties of gases) to estimate that the "number of corpuscles is not greatly different from the atomic weight". This reduced the number of electrons to tens or at most a couple of hundred, and that in turn meant that the positive sphere in Thomson's model contained most of the mass of the atom. This meant that Thomson's mechanical stability work from 1904 and the comparison to the periodic table were no longer valid.: 186  Moreover, the alpha particle, so important to the next advance in atomic theory by Rutherford, would no longer be viewed as an atom containing thousands of electrons.: 269  In 1907, Thomson published The Corpuscular Theory of Matter, which reviewed his ideas on the atom's structure and proposed further avenues of research. In Chapter 6, he further elaborated on his experiment using magnetised pins in water, providing an expanded table. For instance, if 59 pins were placed in the pool, they would arrange themselves in concentric rings of the order 20-16-13-8-2 (from outermost to innermost). In Chapter 7, Thomson summarised his 1906 results on the number of electrons in an atom. He included one important correction: he replaced the beta-particle analysis with one based on the cathode ray experiments of August Becker, giving a result in better agreement with other approaches to the problem.: 273  Experiments by other scientists in this field had shown that atoms contain far fewer electrons than Thomson previously thought. Thomson now believed the number of electrons in an atom was a small multiple of its atomic weight: "the number of corpuscles in an atom of any element is proportional to the atomic weight of the element — it is a multiple, and not a large one, of the atomic weight of the element." This meant that almost all of the atom's mass had to be carried by the positive sphere, whatever it was made of. Thomson in this book estimated that a hydrogen atom is 1,700 times heavier than an electron (the current measurement is 1,837). Thomson noted that no scientist had yet found a positively charged particle smaller than a hydrogen ion.
He also wrote that the positive charge of an atom is a multiple of a basic unit of positive charge, equal to the negative charge of an electron. Thomson refused to jump to the conclusion that the basic unit of positive charge has a mass equal to that of the hydrogen ion, arguing that scientists first had to know how many electrons an atom contains. For all he could tell, a hydrogen ion might still contain a few electrons—perhaps two electrons and three units of positive charge. === 1910 Multiple scattering === Thomson's difficulty with beta scattering in 1906 led him to renewed interest in the topic. He encouraged J. Arnold Crowther to experiment with beta scattering through thin foils and, in 1910, Thomson produced a new theory of beta scattering. The two innovations in this paper were the introduction of scattering from the positive sphere of the atom and the analysis showing that multiple or compound scattering was critical to the final results.: 273  This theory and Crowther's experimental results would be confronted by Rutherford's theory and by Geiger and Marsden's new experiments with alpha particles. Another innovation in Thomson's 1910 paper was that he modelled how an atom might deflect an incoming beta particle if the positive charge of the atom existed in discrete units of equal but arbitrary size, spread evenly throughout the atom, separated by empty space, with each unit having a positive charge equal to the electron's negative charge. Thomson therefore came close to deducing the existence of the proton, which was something Rutherford eventually did. In Rutherford's model of the atom, the protons are clustered in a very small nucleus, but in Thomson's alternative model, the positive units were spread throughout the atom. == Thomson's 1910 beta scattering model == In his 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented equations that modelled how beta particles scatter in a collision with an atom.: 277  His work was based on beta scattering studies by James Crowther. === Partial deflection by the positive sphere === Thomson typically assumed the positive charge in the atom was uniformly distributed throughout its volume, encapsulating the electrons. In his 1910 paper, Thomson presented the following equation, which isolates the effect of this positive sphere:: 278  θ ¯ 2 = π 4 ⋅ k q e q g m v 2 R {\displaystyle {\bar {\theta }}_{2}={\frac {\pi }{4}}\cdot {\frac {kq_{\text{e}}q_{\text{g}}}{mv^{2}R}}} where k is the Coulomb constant, qe is the charge of the beta particle, qg is the charge of the positive sphere, m is the mass of the beta particle, and R is the radius of the sphere. Because the atom is many thousands of times heavier than the beta particle, no correction for recoil is needed. Thomson did not explain how this equation was developed, but the historian John L. Heilbron provided an educated guess he called a "straight-line" approximation. Consider a beta particle passing through the positive sphere with its initial trajectory at a lateral distance b from the centre. The path is assumed to have a very small deflection and therefore is treated here as a straight line.
Inside a sphere of uniformly distributed positive charge the force exerted on the beta particle at any point along its path through the sphere would be directed along the radius r with magnitude:: 106  F = k q e q g r 2 ⋅ r 3 R 3 {\displaystyle F={\frac {kq_{\text{e}}q_{\text{g}}}{r^{2}}}\cdot {\frac {r^{3}}{R^{3}}}} The component of force perpendicular to the trajectory and thus deflecting the path of the particle would be: F y = k q e q g r 2 ⋅ r 3 R 3 ⋅ cos ⁡ φ = b k q e q g R 3 {\displaystyle F_{\text{y}}={\frac {kq_{\text{e}}q_{\text{g}}}{r^{2}}}\cdot {\frac {r^{3}}{R^{3}}}\cdot \cos \varphi ={\frac {bkq_{\text{e}}q_{\text{g}}}{R^{3}}}} The lateral change in momentum py is therefore Δ p y = F y t = b k q e q g R 3 ⋅ L v {\displaystyle \Delta p_{\text{y}}=F_{\text{y}}t={\frac {bkq_{\text{e}}q_{\text{g}}}{R^{3}}}\cdot {\frac {L}{v}}} The resulting angular deflection, θ 2 {\displaystyle \theta _{2}} , is given by tan ⁡ θ 2 = Δ p y p x = b k q e q g R 3 ⋅ L v ⋅ 1 m v {\displaystyle \tan \theta _{2}={\frac {\Delta p_{\text{y}}}{p_{\text{x}}}}={\frac {bkq_{\text{e}}q_{\text{g}}}{R^{3}}}\cdot {\frac {L}{v}}\cdot {\frac {1}{mv}}} where px is the average horizontal momentum taken to be equal to the incoming momentum. Since we already know the deflection is very small, we can treat tan ⁡ θ 2 {\displaystyle \tan \theta _{2}} as being equal to θ 2 {\displaystyle \theta _{2}} . To find the average deflection angle θ ¯ 2 {\displaystyle {\bar {\theta }}_{2}} , the angle for each value of b and the corresponding L are added across the face sphere, then divided by the cross-section area. L = 2 R 2 − b 2 {\displaystyle L=2{\sqrt {R^{2}-b^{2}}}} per Pythagorean theorem.: 278  θ ¯ 2 = 1 π R 2 ∫ 0 R b k q e q g R 3 ⋅ 2 R 2 − b 2 v ⋅ 1 m v ⋅ 2 π b ⋅ d b {\displaystyle {\bar {\theta }}_{2}={\frac {1}{\pi R^{2}}}\int _{0}^{R}{\frac {bkq_{\text{e}}q_{\text{g}}}{R^{3}}}\cdot {\frac {2{\sqrt {R^{2}-b^{2}}}}{v}}\cdot {\frac {1}{mv}}\cdot 2\pi b\cdot \mathrm {d} b} = π 4 ⋅ k q e q g m v 2 R {\displaystyle ={\frac {\pi }{4}}\cdot {\frac {kq_{\text{e}}q_{\text{g}}}{mv^{2}R}}} This matches Thomson's formula in his 1910 paper. === Partial deflection by the electrons === Thomson modelled the collisions between a beta particle and the electrons of an atom by calculating the deflection of one collision then multiplying by a factor for the number of collisions as the particle crosses the atom. For the electrons within an arbitrary distance s of the beta particle's path, their mean distance will be ⁠s/2⁠. Therefore, the average deflection per electron will be 2 arctan ⁡ k q e q e m v 2 s 2 ≈ 4 k q e q e m v 2 s {\displaystyle 2\arctan {\frac {kq_{\text{e}}q_{\text{e}}}{mv^{2}{\tfrac {s}{2}}}}\approx {\frac {4kq_{\text{e}}q_{\text{e}}}{mv^{2}s}}} where qe is the elementary charge, k is the Coulomb constant, m and v are the mass and velocity of the beta particle. The factor for the number of collisions was known to be the square root of the number of possible electrons along path. The number of electrons depends upon the density of electrons along the particle path times the path length L. The net deflection caused by all the electrons within this arbitrary cylinder of effect around the beta particle's path is θ 1 = 4 k q e q e m v 2 s ⋅ N 0 π s 2 L {\displaystyle \theta _{1}={\frac {4kq_{\text{e}}q_{\text{e}}}{mv^{2}s}}\cdot {\sqrt {N_{0}\pi s^{2}L}}} where N0 is the number of electrons per unit volume and π s 2 L {\displaystyle \pi s^{2}L} is the volume of this cylinder. 
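The averaging integral for the positive sphere above can be checked by straightforward quadrature. The following sketch (with the assumed normalization k·q_e·q_g = m = v = R = 1, chosen purely for illustration) confirms that the average over impact parameters reproduces the π/4 prefactor of Thomson's formula.

```python
# Quadrature sketch with the assumed normalization k*q_e*q_g = m = v = R = 1 (illustration
# only): averaging the straight-line deflection over impact parameters b reproduces the
# pi/4 prefactor in Thomson's positive-sphere formula.
import numpy as np
from scipy.integrate import quad

R = 1.0
integrand = lambda b: (b / R**3) * (2.0 * np.sqrt(R**2 - b**2)) * 2.0 * np.pi * b
avg, _ = quad(integrand, 0.0, R)
avg /= np.pi * R**2

print(avg, np.pi / 4)   # both ~0.785398...
```

Returning to the electron term: evaluating θ1 still requires the average of √L over impact parameters, which is worked out next.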
Since Thomson calculated the deflection would be very small, he treats L as a straight line. Therefore L = 2 R 2 − b 2 {\displaystyle L=2{\sqrt {R^{2}-b^{2}}}} where b is the distance of this chord from the centre. The mean of L {\displaystyle {\sqrt {L}}} is given by the integral 1 π R 2 ∫ 0 R 2 R 2 − b 2 ⋅ 2 π b ⋅ d b = 4 5 2 R {\displaystyle {\frac {1}{\pi R^{2}}}\int _{0}^{R}{\sqrt {2{\sqrt {R^{2}-b^{2}}}}}\cdot 2\pi b\cdot \mathrm {d} b={\frac {4}{5}}{\sqrt {2R}}} We can now replace L {\displaystyle {\sqrt {L}}} in the equation for θ 1 {\displaystyle \theta _{1}} to obtain the mean deflection θ ¯ 1 {\displaystyle {\bar {\theta }}_{1}} : θ ¯ 1 = 4 k q e q e m v 2 s ⋅ N 0 π s 2 ⋅ 4 5 2 R {\displaystyle {\bar {\theta }}_{1}={\frac {4kq_{\text{e}}q_{\text{e}}}{mv^{2}s}}\cdot {\sqrt {N_{0}\pi s^{2}}}\cdot {\frac {4}{5}}{\sqrt {2R}}} = 16 5 ⋅ k q e q e m v 2 ⋅ 1 R ⋅ 3 N 2 {\displaystyle ={\frac {16}{5}}\cdot {\frac {kq_{\text{e}}q_{\text{e}}}{mv^{2}}}\cdot {\frac {1}{R}}\cdot {\sqrt {\frac {3N}{2}}}} where N is the number of electrons in the atom, equal to N 0 4 3 π R 3 {\displaystyle N_{0}{\tfrac {4}{3}}\pi R^{3}} . === Deflection by the positive charge in discrete units === In his 1910 paper, Thomson proposed an alternative model in which the positive charge exists in discrete units separated by empty space, with those units being evenly distributed throughout the atom's volume. In this concept, the average scattering angle of the beta particle is given by: θ ¯ 2 = 16 5 ⋅ k q e q e m v 2 ⋅ 1 R ⋅ 3 N 2 1 − ( 1 − π 8 ) σ {\displaystyle {\bar {\theta }}_{2}={\frac {16}{5}}\cdot {\frac {kq_{\text{e}}q_{\text{e}}}{mv^{2}}}\cdot {\frac {1}{R}}\cdot {\sqrt {\frac {3N}{2}}}{\sqrt {1-\left(1-{\frac {\pi }{8}}\right){\sqrt {\sigma }}}}} where σ is the ratio of the volume occupied by the positive charge to the volume of the whole atom. Thomson did not explain how he arrived at this equation. === Net deflection === To find the combined effect of the positive charge and the electrons on the beta particle's path, Thomson provided the following equation: θ ¯ = θ ¯ 1 2 + θ ¯ 2 2 {\displaystyle {\bar {\theta }}={\sqrt {{\bar {\theta }}_{1}^{2}+{\bar {\theta }}_{2}^{2}}}} == Demise of the plum pudding model == Thomson probed the structure of atoms through beta particle scattering, whereas his former student Ernest Rutherford was interested in alpha particle scattering. Beta particles are electrons emitted by radioactive decay, whereas alpha particles are essentially helium atoms, also emitted in process of decay. Alpha particles have considerably more momentum than beta particles and Rutherford found that matter scatters alpha particles in ways that Thomson's plum pudding model could not predict. Between 1908 and 1913, Ernest Rutherford, Hans Geiger, and Ernest Marsden collaborated on a series of experiments in which they bombarded thin metal foils with a beam of alpha particles and measured the intensity versus scattering angle of the particles. They found that the metal foil could scatter alpha particles by more than 90°.: 4  This should not have been possible according to the Thomson model: the scattering into large angles should have been negligible. The odds of a beta particle being scattered by more than 90° under such circumstances is astronomically small, and since alpha particles typically have much more momentum than beta particles, their deflection should be smaller still. The Thomson models simply could not produce electrostatic forces of sufficient strength to cause such large deflection. 
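To put rough numbers on the preceding claim, the sketch below evaluates Thomson's per-atom averages for a beta particle crossing a single gold atom. Every input is an assumption chosen only for the order of magnitude (atomic radius R ≈ 1.4×10⁻¹⁰ m, N = 79 electrons, a balancing positive charge of 79e, speed 0.7c, and the nonrelativistic formulas quoted above).

```python
# Rough order-of-magnitude sketch; every input is an assumption, not a value from the
# article: a beta particle (speed taken as 0.7c, nonrelativistic formulas as quoted above)
# crossing a single gold atom modelled with radius R ~ 1.4e-10 m, N = 79 electrons and a
# balancing positive charge of 79e.
import math

k = 8.988e9          # Coulomb constant, N m^2 / C^2
e = 1.602e-19        # elementary charge, C
m = 9.109e-31        # beta particle (electron) mass, kg
c = 2.998e8          # speed of light, m/s
v = 0.7 * c          # assumed beta particle speed
R = 1.4e-10          # assumed atomic radius, m
N = 79               # assumed number of electrons in a gold atom
q_g = N * e          # charge of the positive sphere

theta_sphere = (math.pi / 4) * k * e * q_g / (m * v**2 * R)                       # positive sphere
theta_electrons = (16 / 5) * (k * e * e / (m * v**2)) / R * math.sqrt(3 * N / 2)  # electrons
theta_net = math.hypot(theta_sphere, theta_electrons)

print(math.degrees(theta_net))   # on the order of 0.2 degrees per atom, nowhere near 90 degrees
```

Even allowing for compound scattering across the many atoms of a thin foil, deflections of this size fall far short of the large angles Geiger and Marsden observed, and alpha particles, which carry much more momentum, would be deflected even less.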
The charges in the Thomson model were too diffuse. This led Rutherford to discard the Thomson model in favour of one in which the positive charge of the atom is concentrated in a tiny nucleus. Rutherford went on to make more compelling discoveries. In Thomson's model, the positive charge sphere was just an abstract component, but Rutherford found something concrete to attribute the positive charge to: particles he dubbed "protons". Whereas Thomson believed that the electron count was roughly correlated to the atomic weight, Rutherford showed that (in a neutral atom) it is exactly equal to the atomic number. Thomson hypothesised that the arrangement of the electrons in the atom somehow determined the spectral lines of a chemical element. He was on the right track, but the explanation had nothing to do with how electrons circulated in a sphere of positive charge. Scientists eventually discovered that it had to do with how electrons absorb and release energy in discrete quantities, moving through energy levels which correspond to emission and absorption spectra. Thomson had not incorporated quantum mechanics, at the time a very new field of physics, into his atomic model. Niels Bohr and Erwin Schrödinger later incorporated quantum mechanics into the atomic model. === Rutherford's nuclear model === Rutherford's 1911 paper on alpha particle scattering showed that Thomson's scattering model could not explain the large-angle scattering, and it showed that multiple scattering was not necessary to explain the data. However, in the years immediately following its publication, few scientists took note. The scattering model predictions were not considered definitive evidence against Thomson's plum pudding model. Thomson and Rutherford had pioneered scattering as a technique to probe atoms; its reliability and value were unproven. Before Rutherford's paper the alpha particle was considered an atom, not a compact mass, and it was not clear why it should be a good probe. Moreover, Rutherford's paper did not discuss the atomic electrons vital to practical problems like chemistry or atomic spectroscopy.: 300  Rutherford's nuclear model would only become widely accepted after the work of Niels Bohr. == Mathematical Thomson problem == The Thomson problem in mathematics seeks the optimal distribution of equal point charges on the surface of a sphere. Unlike the original Thomson atomic model, the sphere in this purely mathematical model does not have a charge, and this causes all the point charges to move to the surface of the sphere by their mutual repulsion. There is still no general solution to Thomson's original problem of how electrons arrange themselves within a sphere of positive charge. == Origin of the nickname == The first known writer to compare Thomson's model to a plum pudding was an anonymous reporter in an article for the British pharmaceutical magazine The Chemist and Druggist in August 1906. While the negative electricity is concentrated on the extremely small corpuscle, the positive electricity is distributed throughout a considerable volume. An atom would thus consist of minute specks, the negative corpuscles, swimming about in a sphere of positive electrification, like raisins in a parsimonious plum-pudding, units of negative electricity being attracted toward the centre, while at the same time repelling each other. The analogy was never used by Thomson or his colleagues. It seems to have been coined by popular science writers to make the model easier to understand for the layman.
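As an aside on the mathematical Thomson problem mentioned above, a minimal numerical sketch is given below (the spherical-angle parametrization, the optimizer, and N = 5 are illustrative choices, and the optimizer is not guaranteed to find the global minimum): N equal point charges are constrained to the unit sphere and their pairwise 1/r energy is minimized.

```python
# Minimal sketch of the mathematical Thomson problem described above (the spherical-angle
# parametrization, the optimizer, and N = 5 are illustrative choices; the optimizer is not
# guaranteed to land on the global minimum): minimize the pairwise 1/r energy of N equal
# point charges constrained to the unit sphere.
import numpy as np
from scipy.optimize import minimize

N = 5

def energy(angles):
    theta, phi = angles[:N], angles[N:]
    pts = np.stack([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)], axis=1)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(N, k=1)
    return np.sum(1.0 / d[iu])          # Coulomb energy in arbitrary units

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, np.pi, 2 * N)
res = minimize(energy, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-12})

print(res.fun)   # ~6.4747 for N = 5 if the global minimum (a triangular bipyramid) is found
```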
== References == == Bibliography == Davis, E. A.; Falconer, I. J. (1997). J. J. Thomson and the Discovery of the Electron. Taylor & Francis. ISBN 0-203-79233-5. Thomson, J. J. (1897). "Cathode rays" (PDF). Philosophical Magazine. 44 (269): 293–316. doi:10.1080/14786449708621070. Thomson, J. J. (1899). "On the Masses of the Ions in Gases at Low Pressures". Philosophical Magazine. Series 5. 48 (295): 547–567. Thomson, J. J. (March 1904). "On the Structure of the Atom: an Investigation of the Stability and Periods of Oscillation of a number of Corpuscles arranged at equal intervals around the Circumference of a Circle; with Application of the Results to the Theory of Atomic Structure". Philosophical Magazine. Series 6. 7 (39): 237–265. doi:10.1080/14786440409463107. Archived (PDF) from the original on 2022-10-09. Thomson, J. J. (1906). "On the Number of Corpuscles in an Atom" (PDF). Philosophical Magazine. Series 6. 11 (66): 769–781. doi:10.1080/14786440609463496. Thomson, J. J. (1907). The Corpuscular Theory of Matter. Charles Scribner's Sons. Thomson, J. J. (1910). "On the Scattering of rapidly moving Electrified Particles". Proceedings of the Cambridge Philosophical Society. 15: 465–471. Rutherford, Ernest (1911). "The Scattering of α and β Particles by Matter and the Structure of the Atom" (PDF). Philosophical Magazine. Series 6. 21 (125): 669–688. doi:10.1080/14786440508637080.
Wikipedia/Plum_pudding_model
The chronology protection conjecture is a hypothesis first proposed by Stephen Hawking that laws of physics beyond those of standard general relativity prevent time travel—even when the latter theory states that it should be possible (such as in scenarios where faster than light travel is allowed). The permissibility of time travel is represented mathematically by the existence of closed timelike curves in some solutions to the field equations of general relativity. The chronology protection conjecture should be distinguished from chronological censorship under which every closed timelike curve passes through an event horizon, which might prevent an observer from detecting the causal violation (also known as chronology violation). == Etymology == In a 1992 paper, Hawking uses the metaphorical device of a "Chronology Protection Agency" as a personification of the aspects of physics that make time travel impossible at macroscopic scales, thus apparently preventing temporal paradoxes. He says: It seems that there is a Chronology Protection Agency which prevents the appearance of closed timelike curves and so makes the universe safe for historians. The idea of the Chronology Protection Agency appears to be drawn playfully from the Time Patrol or Time Police concept, which has been used in many works of science fiction such as Poul Anderson's series of Time Patrol stories or Isaac Asimov's novel The End of Eternity, or in the television series Doctor Who. "The Chronology Protection Case" by Paul Levinson, published after Hawking's paper, posits a universe that goes so far as to murder any scientists who are close to inventing any means of time travel. Larry Niven, in his short story ‘Rotating Cylinders and the possibility of Global Causality Violation’ expands this concept so that the universe causes environmental catastrophe, or global civil war, or the local sun going nova, to any civilisation which shows any sign of successful construction. == General relativity and quantum corrections == Many attempts to generate scenarios for closed timelike curves have been suggested, and the theory of general relativity does allow them in certain circumstances. Some theoretical solutions in general relativity that contain closed timelike curves would require an infinite universe with certain features that our universe does not appear to have, such as the universal rotation of the Gödel metric or the rotating cylinder of infinite length known as a Tipler cylinder. However, some solutions allow for the creation of closed timelike curves in a bounded region of spacetime, with the Cauchy horizon being the boundary between the region of spacetime where closed timelike curves can exist and the rest of spacetime where they cannot. One of the first such bounded time travel solutions found was constructed from a traversable wormhole, based on the idea of taking one of the two "mouths" of the wormhole on a round-trip journey at relativistic speed to create a time difference between it and the other mouth (see the discussion at Wormhole#Time travel). General relativity does not include quantum effects on its own, and a full integration of general relativity and quantum mechanics would require a theory of quantum gravity, but there is an approximate method for modeling quantum fields in the curved spacetime of general relativity, known as semiclassical gravity. 
Initial attempts to apply semiclassical gravity to the traversable wormhole time machine indicated that at exactly the moment that wormhole would first allow for closed timelike curves, quantum vacuum fluctuations build up and drive the energy density to infinity in the region of the wormholes. This occurs when the two wormhole mouths, call them A and B, have been moved in such a way that it becomes possible for a particle or wave moving at the speed of light to enter mouth B at some time T2 and exit through mouth A at an earlier time T1, then travel back towards mouth B through ordinary space, and arrive at mouth B at the same time T2 that it entered B on the previous loop; in this way the same particle or wave can make a potentially infinite number of loops through the same regions of spacetime, piling up on itself. Calculations showed that this effect would not occur for an ordinary beam of radiation, because it would be "defocused" by the wormhole so that most of a beam emerging from mouth A would spread out and miss mouth B. But when the calculation was done for vacuum fluctuations, it was found that they would spontaneously refocus on the trip between the mouths, indicating that the pileup effect might become large enough to destroy the wormhole in this case. Uncertainty about this conclusion remained, because the semiclassical calculations indicated that the pileup would only drive the energy density to infinity for an infinitesimal moment of time, after which the energy density would die down. But semiclassical gravity is considered unreliable for large energy densities or short time periods that reach the Planck scale; at these scales, a complete theory of quantum gravity is needed for accurate predictions. So, it remains uncertain whether quantum-gravitational effects might prevent the energy density from growing large enough to destroy the wormhole. Stephen Hawking conjectured that not only would the pileup of vacuum fluctuations still succeed in destroying the wormhole in quantum gravity, but also that the laws of physics would ultimately prevent any type of time machine from forming; this is the chronology protection conjecture. Subsequent works in semiclassical gravity provided examples of spacetimes with closed timelike curves where the energy density due to vacuum fluctuations does not approach infinity in the region of spacetime outside the Cauchy horizon. However, in 1997 a general proof was found demonstrating that according to semiclassical gravity, the energy of the quantum field (more precisely, the expectation value of the quantum stress-energy tensor) must always be either infinite or undefined on the horizon itself. Both cases indicate that semiclassical methods become unreliable at the horizon and quantum gravity effects would be important there, consistent with the possibility that such effects would always intervene to prevent time machines from forming. A definite theoretical decision on the status of the chronology protection conjecture would require a full theory of quantum gravity as opposed to semiclassical methods. There are also some arguments from string theory that seem to support chronology protection, but string theory is not yet a complete theory of quantum gravity. 
Experimental observation of closed timelike curves would of course demonstrate this conjecture to be false, but short of that, if physicists had a theory of quantum gravity whose predictions had been well-confirmed in other areas, this would give them a significant degree of confidence in the theory's predictions about the possibility or impossibility of time travel. Other proposals that allow for backwards time travel but prevent time paradoxes, such as the Novikov self-consistency principle, which would ensure the timeline stays consistent, or the idea that a time traveler is taken to a parallel universe while their original timeline remains intact, do not qualify as "chronology protection". == See also == Causality Cosmic censorship hypothesis Novikov self-consistency principle Time travel Wormhole == Notes == == References == Hawking, S.W., (1992) The chronology protection conjecture. Phys. Rev. D46, 603–611. Matt Visser, "The quantum physics of chronology protection" in The Future of Theoretical Physics and Cosmology: Celebrating Stephen Hawking's 60th Birthday by G. W. Gibbons (Editor), E. P. S. Shellard (Editor), S. J. Rankin (Editor) Li, Li-Xin (1996). "Must Time Machine Be Unstable against Vacuum Fluctuations?". Class. Quantum Grav. 13 (9): 2563–2568. arXiv:gr-qc/9703024. Bibcode:1996CQGra..13.2563L. doi:10.1088/0264-9381/13/9/019. S2CID 250909592. == External links == https://web.archive.org/web/20101125122824/http://hawking.org.uk/index.php/lectures/63 https://plus.maths.org/content/time-travel-allowed — Kip Thorne discusses time travel in general relativity, and the basis in quantum physics for the chronology protection conjecture
Wikipedia/Chronology_protection_conjecture
What is now often called Lorentz ether theory (LET) has its roots in Hendrik Lorentz's "theory of electrons", which marked the end of the development of the classical aether theories at the end of the 19th and at the beginning of the 20th century. Lorentz's initial theory was created between 1892 and 1895 and was based on removing assumptions about aether motion. It explained the failure of the negative aether drift experiments to first order in v/c by introducing an auxiliary variable called "local time" for connecting systems at rest and in motion in the aether. In addition, the negative result of the Michelson–Morley experiment led to the introduction of the hypothesis of length contraction in 1892. However, other experiments also produced negative results and (guided by Henri Poincaré's principle of relativity) Lorentz tried in 1899 and 1904 to expand his theory to all orders in v/c by introducing the Lorentz transformation. In addition, he assumed that non-electromagnetic forces (if they exist) transform like electric forces. However, Lorentz's expression for charge density and current were incorrect, so his theory did not fully exclude the possibility of detecting the aether. Eventually, it was Henri Poincaré who in 1905 corrected the errors in Lorentz's paper and actually incorporated non-electromagnetic forces (including gravitation) within the theory, which he called "The New Mechanics". Many aspects of Lorentz's theory were incorporated into special relativity (SR) with the works of Albert Einstein and Hermann Minkowski. Today LET is often treated as some sort of "Lorentzian" or "neo-Lorentzian" interpretation of special relativity. The introduction of length contraction and time dilation for all phenomena in a "preferred" frame of reference, which plays the role of Lorentz's immobile aether, leads to the complete Lorentz transformation (see the Robertson–Mansouri–Sexl test theory as an example), so Lorentz covariance doesn't provide any experimentally verifiable distinctions between LET and SR. The absolute simultaneity in the Mansouri–Sexl test theory formulation of LET implies that a one-way speed of light experiment could in principle distinguish between LET and SR, but it is now widely held that it is impossible to perform such a test. In the absence of any way to experimentally distinguish between LET and SR, SR is widely preferred over LET, due to the superfluous assumption of an undetectable aether in LET, and the validity of the relativity principle in LET seeming ad hoc or coincidental. == Historical development == === Basic concept === The Lorentz ether theory, which was developed mainly between 1892 and 1906 by Lorentz and Poincaré, was based on the aether theory of Augustin-Jean Fresnel, Maxwell's equations and the electron theory of Rudolf Clausius. Lorentz's 1895 paper rejected the aether drift theories, and refused to express assumptions about the nature of the aether. It said: That we cannot speak about an absolute rest of the aether, is self-evident; this expression would not even make sense. When I say for the sake of brevity, that the aether would be at rest, then this only means that one part of this medium does not move against the other one and that all perceptible motions are relative motions of the celestial bodies in relation to the aether. As Max Born later said, it was natural (though not logically necessary) for scientists of that time to identify the rest frame of the Lorentz aether with the absolute space of Isaac Newton. 
The condition of this aether can be described by the electric field E and the magnetic field H, where these fields represent the "states" of the aether (with no further specification), related to the charges of the electrons. Thus an abstract electromagnetic aether replaces the older mechanistic aether models. Contrary to Clausius, who accepted that the electrons operate by actions at a distance, the electromagnetic field of the aether appears as a mediator between the electrons, and changes in this field can propagate not faster than the speed of light. Lorentz theoretically explained the Zeeman effect on the basis of his theory, for which he received the Nobel Prize in Physics in 1902. Joseph Larmor found a similar theory simultaneously, but his concept was based on a mechanical aether. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that a moving observer with respect to the aether can use the same electrodynamic equations as an observer in the stationary aether system, thus they are making the same observations. === Length contraction === A big challenge for the Lorentz ether theory was the Michelson–Morley experiment in 1887. According to the theories of Fresnel and Lorentz, a relative motion to an immobile aether had to be determined by this experiment; however, the result was negative. Michelson himself thought that the result confirmed the aether drag hypothesis, in which the aether is fully dragged by matter. However, other experiments like the Fizeau experiment and the effect of aberration disproved that model. A possible solution came in sight, when in 1889 Oliver Heaviside derived from Maxwell's equations that the magnetic vector potential field around a moving body is altered by a factor of 1 − v 2 / c 2 {\textstyle {\sqrt {1-v^{2}/c^{2}}}} . Based on that result, and to bring the hypothesis of an immobile aether into accordance with the Michelson–Morley experiment, George FitzGerald in 1889 (qualitatively) and, independently of him, Lorentz in 1892 (already quantitatively), suggested that not only the electrostatic fields, but also the molecular forces, are affected in such a way that the dimension of a body in the line of motion is less by the value v 2 / ( 2 c 2 ) {\displaystyle v^{2}/(2c^{2})} than the dimension perpendicularly to the line of motion. However, an observer co-moving with the earth would not notice this contraction because all other instruments contract at the same ratio. In 1895 Lorentz proposed three possible explanations for this relative contraction: The body contracts in the line of motion and preserves its dimension perpendicularly to it. The dimension of the body remains the same in the line of motion, but it expands perpendicularly to it. The body contracts in the line of motion and expands at the same time perpendicularly to it. Although the possible connection between electrostatic and intermolecular forces was used by Lorentz as a plausibility argument, the contraction hypothesis was soon considered as purely ad hoc. It is also important that this contraction would only affect the space between the electrons but not the electrons themselves; therefore the name "intermolecular hypothesis" was sometimes used for this effect. 
The so-called Length contraction without expansion perpendicularly to the line of motion and by the precise value l = l 0 ⋅ 1 − v 2 / c 2 {\textstyle l=l_{0}\cdot {\sqrt {1-v^{2}/c^{2}}}} (where l0 is the length at rest in the aether) was given by Larmor in 1897 and by Lorentz in 1904. In the same year, Lorentz also argued that electrons themselves are also affected by this contraction. For further development of this concept, see the section § Lorentz transformation. === Local time === An important part of the theorem of corresponding states in 1892 and 1895 was the local time t ′ = t − v x / c 2 {\displaystyle t'=t-vx/c^{2}} , where t is the time coordinate for an observer resting in the aether, and t' is the time coordinate for an observer moving in the aether. (Woldemar Voigt had previously used the same expression for local time in 1887 in connection with the Doppler effect and an incompressible medium.) With the help of this concept Lorentz could explain the aberration of light, the Doppler effect and the Fizeau experiment (i.e. measurements of the Fresnel drag coefficient) by Hippolyte Fizeau in moving and also resting liquids. While for Lorentz length contraction was a real physical effect, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation to simplify the calculation from the resting to a "fictitious" moving system. Contrary to Lorentz, Poincaré saw more than a mathematical trick in the definition of local time, which he called Lorentz's "most ingenious idea". In The Measure of Time he wrote in 1898: We do not have a direct intuition for simultaneity, just as little as for the equality of two periods. If we believe to have this intuition, it is an illusion. We helped ourselves with certain rules, which we usually use without giving us account over it [...] We choose these rules therefore, not because they are true, but because they are the most convenient, and we could summarize them while saying: „The simultaneity of two events, or the order of their succession, the equality of two durations, are to be so defined that the enunciation of the natural laws may be as simple as possible. In other words, all these rules, all these definitions are only the fruit of an unconscious opportunism.“ In 1900 Poincaré interpreted local time as the result of a synchronization procedure based on light signals. He assumed that two observers, A and B, who are moving in the aether, synchronize their clocks by optical signals. Since they treat themselves as being at rest, they must consider only the transmission time of the signals and then crossing their observations to examine whether their clocks are synchronous. However, from the point of view of an observer at rest in the aether the clocks are not synchronous and indicate the local time t ′ = t − v x / c 2 {\displaystyle t'=t-vx/c^{2}} . But because the moving observers don't know anything about their movement, they don't recognize this. In 1904, he illustrated the same procedure in the following way: A sends a signal at time 0 to B, which arrives at time t. B also sends a signal at time 0 to A, which arrives at time t. If in both cases t has the same value, the clocks are synchronous, but only in the system in which the clocks are at rest in the aether. So, according to Darrigol, Poincaré understood local time as a physical effect just like length contraction – in contrast to Lorentz, who did not use the same interpretation before 1906. 
However, contrary to Einstein, who later used a similar synchronization procedure which was called Einstein synchronisation, Darrigol says that Poincaré had the opinion that clocks resting in the aether are showing the true time. However, at the beginning it was unknown that local time includes what is now known as time dilation. This effect was first noticed by Larmor (1897), who wrote that "individual electrons describe corresponding parts of their orbits in times shorter for the [aether] system in the ratio ε − 1 / 2 {\displaystyle \varepsilon ^{-1/2}} or ( 1 − ( 1 / 2 ) v 2 / c 2 ) {\displaystyle \left(1-(1/2)v^{2}/c^{2}\right)} ". And in 1899 also Lorentz noted for the frequency of oscillating electrons "that in S the time of vibrations be k ε {\displaystyle k\varepsilon } times as great as in S0", where S0 is the aether frame, S the mathematical-fictitious frame of the moving observer, k is 1 − v 2 / c 2 {\textstyle {\sqrt {1-v^{2}/c^{2}}}} , and ε {\displaystyle \varepsilon } is an undetermined factor. === Lorentz transformation === While local time could explain the negative aether drift experiments to first order to v/c, it was necessary – due to other unsuccessful aether drift experiments like the Trouton–Noble experiment – to modify the hypothesis to include second-order effects. The mathematical tool for that is the so-called Lorentz transformation. Voigt in 1887 had already derived a similar set of equations (although with a different scale factor). Afterwards, Larmor in 1897 and Lorentz in 1899 derived equations in a form algebraically equivalent to those which are used up to this day, although Lorentz used an undetermined factor l in his transformation. In his paper Electromagnetic phenomena in a system moving with any velocity smaller than that of light (1904) Lorentz attempted to create such a theory, according to which all forces between the molecules are affected by the Lorentz transformation (in which Lorentz set the factor l to unity) in the same manner as electrostatic forces. In other words, Lorentz attempted to create a theory in which the relative motion of earth and aether is (nearly or fully) undetectable. Therefore, he generalized the contraction hypothesis and argued that not only the forces between the electrons, but also the electrons themselves are contracted in the line of motion. However, Max Abraham (1904) quickly noted a defect of that theory: Within a purely electromagnetic theory the contracted electron-configuration is unstable and one has to introduce non-electromagnetic force to stabilize the electrons – Abraham himself questioned the possibility of including such forces within the theory of Lorentz. So it was Poincaré, on 5 June 1905, who introduced the so-called "Poincaré stresses" to solve that problem. Those stresses were interpreted by him as an external, non-electromagnetic pressure, which stabilize the electrons and also served as an explanation for length contraction. Although he argued that Lorentz succeeded in creating a theory which complies to the postulate of relativity, he showed that Lorentz's equations of electrodynamics were not fully Lorentz covariant. So by pointing out the group characteristics of the transformation, Poincaré demonstrated the Lorentz covariance of the Maxwell–Lorentz equations and corrected Lorentz's transformation formulae for charge density and current density. He went on to sketch a model of gravitation (incl. gravitational waves) which might be compatible with the transformations. 
It was Poincaré who, for the first time, used the term "Lorentz transformation", and he gave them a form which is used up to this day. (Where ℓ {\displaystyle \ell } is an arbitrary function of ε {\displaystyle \varepsilon } , which must be set to unity to conserve the group characteristics. He also set the speed of light to unity.) x ′ = k ℓ ( x + ε t ) , y ′ = ℓ y , z ′ = ℓ z , t ′ = k ℓ ( t + ε x ) {\displaystyle x^{\prime }=k\ell \left(x+\varepsilon t\right),\qquad y^{\prime }=\ell y,\qquad z^{\prime }=\ell z,\qquad t^{\prime }=k\ell \left(t+\varepsilon x\right)} k = 1 1 − ε 2 {\displaystyle k={\frac {1}{\sqrt {1-\varepsilon ^{2}}}}} A substantially extended work (the so-called "Palermo paper") was submitted by Poincaré on 23 July 1905, but was published in January 1906 because the journal appeared only twice a year. He spoke literally of "the postulate of relativity", he showed that the transformations are a consequence of the principle of least action; he demonstrated in more detail the group characteristics of the transformation, which he called Lorentz group, and he showed that the combination x 2 + y 2 + z 2 − c 2 t 2 {\displaystyle x^{2}+y^{2}+z^{2}-c^{2}t^{2}} is invariant. While elaborating his gravitational theory, he noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing c t − 1 {\textstyle ct{\sqrt {-1}}} as a fourth, imaginary, coordinate, and he used an early form of four-vectors. However, Poincaré later said the translation of physics into the language of four-dimensional geometry would entail too much effort for limited profit, and therefore he refused to work out the consequences of this notion. This was later done, however, by Minkowski; see "The shift to relativity". === Electromagnetic mass === J. J. Thomson (1881) and others noticed that electromagnetic energy contributes to the mass of charged bodies by the amount m = ( 4 / 3 ) E / c 2 {\displaystyle m=(4/3)E/c^{2}} , which was called electromagnetic or "apparent mass". Another derivation of some sort of electromagnetic mass was conducted by Poincaré (1900). By using the momentum of electromagnetic fields, he concluded that these fields contribute a mass of E e m / c 2 {\displaystyle E_{em}/c^{2}} to all bodies, which is necessary to save the center of mass theorem. As noted by Thomson and others, this mass increases also with velocity. Thus in 1899, Lorentz calculated that the ratio of the electron's mass in the moving frame and that of the aether frame is k 3 ε {\displaystyle k^{3}\varepsilon } parallel to the direction of motion, and k ε {\displaystyle k\varepsilon } perpendicular to the direction of motion, where k = 1 − v 2 / c 2 {\textstyle k={\sqrt {1-v^{2}/c^{2}}}} and ε {\displaystyle \varepsilon } is an undetermined factor. And in 1904, he set ε = 1 {\displaystyle \varepsilon =1} , arriving at the expressions for the masses in different directions (longitudinal and transverse): m L = m 0 ( 1 − v 2 c 2 ) 3 , m T = m 0 1 − v 2 c 2 , {\displaystyle m_{L}={\frac {m_{0}}{\left({\sqrt {1-{\frac {v^{2}}{c^{2}}}}}\right)^{3}}},\quad m_{T}={\frac {m_{0}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},} where m 0 = 4 3 E e m c 2 {\displaystyle m_{0}={\frac {4}{3}}{\frac {E_{em}}{c^{2}}}} Many scientists now believed that the entire mass and all forms of forces were electromagnetic in nature. This idea had to be given up, however, in the course of the development of relativistic mechanics. 
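As a small worked illustration of the longitudinal and transverse masses just quoted, the following sketch (assuming Python with NumPy, m0 set to 1 and a few arbitrarily chosen speeds) simply tabulates the two ratios m_L/m_0 = (1 − v²/c²)^(−3/2) and m_T/m_0 = (1 − v²/c²)^(−1/2):

```python
# Illustrative evaluation of the longitudinal and transverse electromagnetic
# masses quoted above, with m0 set to 1. The chosen speeds and the use of
# NumPy are assumptions of this example, not part of the historical papers.
import numpy as np

def mass_ratios(beta):
    """Return (m_L/m0, m_T/m0) for beta = v/c, following the 1904 expressions."""
    gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
    return gamma ** 3, gamma   # m_L/m0 = gamma^3, m_T/m0 = gamma

for beta in (0.1, 0.5, 0.9, 0.99):
    m_l, m_t = mass_ratios(beta)
    print(f"v/c = {beta:4.2f}:  m_L/m0 = {m_l:8.3f}   m_T/m0 = {m_t:6.3f}")
# Both ratios grow without bound as v approaches c, the longitudinal one faster.
```

Both ratios diverge as v approaches c, the longitudinal one faster; this is the velocity dependence of inertia that Poincaré alludes to further below when he speaks of the velocity of light as an impassable limit.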
Abraham (1904) argued (as described in the preceding section #Lorentz transformation), that non-electrical binding forces were necessary within Lorentz's electrons model. But Abraham also noted that different results occurred, dependent on whether the em-mass is calculated from the energy or from the momentum. To solve those problems, Poincaré in 1905 and 1906 introduced some sort of pressure of non-electrical nature, which contributes the amount − ( 1 / 3 ) E / c 2 {\displaystyle -(1/3)E/c^{2}} to the energy of the bodies, and therefore explains the 4/3-factor in the expression for the electromagnetic mass-energy relation. However, while Poincaré's expression for the energy of the electrons was correct, he erroneously stated that only the em-energy contributes to the mass of the bodies. The concept of electromagnetic mass is not considered anymore as the cause of mass per se, because the entire mass (not only the electromagnetic part) is proportional to energy, and can be converted into different forms of energy, which is explained by Einstein's mass–energy equivalence. === Gravitation === ==== Lorentz's theories ==== In 1900 Lorentz tried to explain gravity on the basis of the Maxwell equations. He first considered a Le Sage type model and argued that there possibly exists a universal radiation field, consisting of very penetrating em-radiation, and exerting a uniform pressure on every body. Lorentz showed that an attractive force between charged particles would indeed arise, if it is assumed that the incident energy is entirely absorbed. This was the same fundamental problem which had afflicted the other Le Sage models, because the radiation must vanish somehow and any absorption must lead to an enormous heating. Therefore, Lorentz abandoned this model. In the same paper, he assumed like Ottaviano Fabrizio Mossotti and Johann Karl Friedrich Zöllner that the attraction of opposite charged particles is stronger than the repulsion of equal charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. This leads to a conflict with the law of gravitation by Isaac Newton, in which it was shown by Pierre Simon Laplace that a finite speed of gravity leads to some sort of aberration and therefore makes the orbits unstable. However, Lorentz showed that the theory is not concerned by Laplace's critique, because due to the structure of the Maxwell equations only effects in the order v2/c2 arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too low. He wrote: The special form of these terms may perhaps be modified. Yet, what has been said is sufficient to show that gravitation may be attributed to actions which are propagated with no greater velocity than that of light. In 1908 Poincaré examined the gravitational theory of Lorentz and classified it as compatible with the relativity principle, but (like Lorentz) he criticized the inaccurate indication of the perihelion advance of Mercury. Contrary to Poincaré, Lorentz in 1914 considered his own theory as incompatible with the relativity principle and rejected it. ==== Lorentz-invariant gravitational law ==== Poincaré argued in 1904 that a propagation speed of gravity which is greater than c is contradicting the concept of local time and the relativity principle. He wrote: What would happen if we could communicate by signals other than those of light, the velocity of propagation of which differed from that of light? 
If, after having regulated our watches by the optimal method, we wished to verify the result by means of these new signals, we should observe discrepancies due to the common translatory motion of the two stations. And are such signals inconceivable, if we take the view of Laplace, that universal gravitation is transmitted with a velocity a million times as great as that of light? However, in 1905 and 1906 Poincaré pointed out the possibility of a gravitational theory, in which changes propagate with the speed of light and which is Lorentz covariant. He pointed out that in such a theory the gravitational force not only depends on the masses and their mutual distance, but also on their velocities and their position due to the finite propagation time of interaction. On that occasion Poincaré introduced four-vectors. Following Poincaré, also Minkowski (1908) and Arnold Sommerfeld (1910) tried to establish a Lorentz-invariant gravitational law. However, these attempts were superseded because of Einstein's theory of general relativity, see "The shift to relativity". The non-existence of a generalization of the Lorentz ether to gravity was a major reason for the preference for the spacetime interpretation. A viable generalization to gravity has been proposed only 2012 by Schmelzer. The preferred frame is defined by the harmonic coordinate condition. The gravitational field is defined by density, velocity and stress tensor of the Lorentz ether, so that the harmonic conditions become continuity and Euler equations. The Einstein Equivalence Principle is derived. The Strong Equivalence Principle is violated, but is recovered in a limit, which gives the Einstein equations of general relativity in harmonic coordinates. == Principles and conventions == === Constancy of the speed of light === Already in his philosophical writing on time measurements (1898), Poincaré wrote that astronomers like Ole Rømer, in determining the speed of light, simply assume that light has a constant speed, and that this speed is the same in all directions. Without this postulate it would not be possible to infer the speed of light from astronomical observations, as Rømer did based on observations of the moons of Jupiter. Poincaré went on to note that Rømer also had to assume that Jupiter's moons obey Newton's laws, including the law of gravitation, whereas it would be possible to reconcile a different speed of light with the same observations if we assumed some different (probably more complicated) laws of motion. According to Poincaré, this illustrates that we adopt for the speed of light a value that makes the laws of mechanics as simple as possible. (This is an example of Poincaré's conventionalist philosophy.) Poincaré also noted that the propagation speed of light can be (and in practice often is) used to define simultaneity between spatially separate events. However, in that paper he did not go on to discuss the consequences of applying these "conventions" to multiple relatively moving systems of reference. This next step was done by Poincaré in 1900, when he recognized that synchronization by light signals in earth's reference frame leads to Lorentz's local time. (See the section on "local time" above). And in 1904 Poincaré wrote: From all these results, if they were to be confirmed, would issue a wholly new mechanics which would be characterized above all by this fact, that there could be no velocity greater than that of light, any more than a temperature below that of absolute zero. 
For an observer, participating himself in a motion of translation of which he has no suspicion, no apparent velocity could surpass that of light, and this would be a contradiction, unless one recalls the fact that this observer does not use the same sort of timepiece as that used by a stationary observer, but rather a watch giving the “local time.[..] Perhaps, too, we shall have to construct an entirely new mechanics that we only succeed in catching a glimpse of, where, inertia increasing with the velocity, the velocity of light would become an impassable limit. The ordinary mechanics, more simple, would remain a first approximation, since it would be true for velocities not too great, so that the old dynamics would still be found under the new. We should not have to regret having believed in the principles, and even, since velocities too great for the old formulas would always be only exceptional, the surest way in practise would be still to act as if we continued to believe in them. They are so useful, it would be necessary to keep a place for them. To determine to exclude them altogether would be to deprive oneself of a precious weapon. I hasten to say in conclusion that we are not yet there, and as yet nothing proves that the principles will not come forth from out the fray victorious and intact.” === Principle of relativity === In 1895 Poincaré argued that experiments like that of Michelson–Morley show that it seems to be impossible to detect the absolute motion of matter or the relative motion of matter in relation to the aether. And although most physicists had other views, Poincaré in 1900 stood to his opinion and alternately used the expressions "principle of relative motion" and "relativity of space". He criticized Lorentz by saying, that it would be better to create a more fundamental theory, which explains the absence of any aether drift, than to create one hypothesis after the other. In 1902 he used for the first time the expression "principle of relativity". In 1904 he appreciated the work of the mathematicians, who saved what he now called the "principle of relativity" with the help of hypotheses like local time, but he confessed that this venture was possible only by an accumulation of hypotheses. And he defined the principle in this way (according to Miller based on Lorentz's theorem of corresponding states): "The principle of relativity, according to which the laws of physical phenomena must be the same for a stationary observer as for one carried along in a uniform motion of translation, so that we have no means, and can have none, of determining whether or not we are being carried along in such a motion." Referring to the critique of Poincaré from 1900, Lorentz wrote in his famous paper in 1904, where he extended his theorem of corresponding states: "Surely, the course of inventing special hypotheses for each new experimental result is somewhat artificial. It would be more satisfactory, if it were possible to show, by means of certain fundamental assumptions, and without neglecting terms of one order of magnitude or another, that many electromagnetic actions are entirely independent of the motion of the system." One of the first assessments of Lorentz's paper was by Paul Langevin in May 1905. According to him, this extension of the electron theories of Lorentz and Larmor led to "the physical impossibility to demonstrate the translational motion of the earth". 
However, Poincaré noticed in 1905 that Lorentz's theory of 1904 was not perfectly "Lorentz invariant" in a few equations such as Lorentz's expression for current density (Lorentz admitted in 1921 that these were defects). As this required just minor modifications of Lorentz's work, also Poincaré asserted that Lorentz had succeeded in harmonizing his theory with the principle of relativity: "It appears that this impossibility of demonstrating the absolute motion of the earth is a general law of nature. [...] Lorentz tried to complete and modify his hypothesis in order to harmonize it with the postulate of complete impossibility of determining absolute motion. It is what he has succeeded in doing in his article entitled Electromagnetic phenomena in a system moving with any velocity smaller than that of light [Lorentz, 1904b]." In his Palermo paper (1906), Poincaré called this "the postulate of relativity“, and although he stated that it was possible this principle might be disproved at some point (and in fact he mentioned at the paper's end that the discovery of magneto-cathode rays by Paul Ulrich Villard (1904) seems to threaten it), he believed it was interesting to consider the consequences if we were to assume the postulate of relativity was valid without restriction. This would imply that all forces of nature (not just electromagnetism) must be invariant under the Lorentz transformation. In 1921 Lorentz credited Poincaré for establishing the principle and postulate of relativity and wrote: "I have not established the principle of relativity as rigorously and universally true. Poincaré, on the other hand, has obtained a perfect invariance of the electro-magnetic equations, and he has formulated 'the postulate of relativity', terms which he was the first to employ." === Aether === Poincaré wrote in the sense of his conventionalist philosophy in 1889: "Whether the aether exists or not matters little – let us leave that to the metaphysicians; what is essential for us is, that everything happens as if it existed, and that this hypothesis is found to be suitable for the explanation of phenomena. After all, have we any other reason for believing in the existence of material objects? That, too, is only a convenient hypothesis; only, it will never cease to be so, while some day, no doubt, the aether will be thrown aside as useless." He also denied the existence of absolute space and time by saying in 1901: "1. There is no absolute space, and we only conceive of relative motion; and yet in most cases mechanical facts are enunciated as if there is an absolute space to which they can be referred. 2. There is no absolute time. When we say that two periods are equal, the statement has no meaning, and can only acquire a meaning by a convention. 3. Not only have we no direct intuition of the equality of two periods, but we have not even direct intuition of the simultaneity of two events occurring in two different places. I have explained this in an article entitled "Mesure du Temps" [1898]. 4. Finally, is not our Euclidean geometry in itself only a kind of convention of language?" However, Poincaré himself never abandoned the aether hypothesis and stated in 1900: "Does our aether actually exist ? We know the origin of our belief in the aether. If light takes several years to reach us from a distant star, it is no longer on the star, nor is it on the earth. It must be somewhere, and supported, so to speak, by some material agency." 
And referring to the Fizeau experiment, he even wrote: "The aether is all but in our grasp." He also said the aether is necessary to harmonize Lorentz's theory with Newton's third law. Even in 1912, in a paper called "The Quantum Theory", Poincaré used the word "aether" ten times and described light as "luminous vibrations of the aether". And although he admitted the relative and conventional character of space and time, he believed that the classical convention is more "convenient" and continued to distinguish between "true" time in the aether and "apparent" time in moving systems. Addressing the question of whether a new convention of space and time is needed, he wrote in 1912: "Shall we be obliged to modify our conclusions? Certainly not; we had adopted a convention because it seemed convenient and we had said that nothing could constrain us to abandon it. Today some physicists want to adopt a new convention. It is not that they are constrained to do so; they consider this new convention more convenient; that is all. And those who are not of this opinion can legitimately retain the old one in order not to disturb their old habits, I believe, just between us, that this is what they shall do for a long time to come." Lorentz likewise argued throughout his lifetime that, of all frames of reference, the one in which the aether is at rest is to be preferred. Clocks in this frame show the "real" time and simultaneity is not relative. However, if the correctness of the relativity principle is accepted, it is impossible to find this system by experiment. == The shift to relativity == === Special relativity === In 1905, Albert Einstein published his paper on what is now called special relativity. In this paper, by examining the fundamental meanings of the space and time coordinates used in physical theories, Einstein showed that the "effective" coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. From this followed all of the physically observable consequences of LET, along with others, all without the need to postulate an unobservable entity (the aether). Einstein identified two fundamental principles, each founded on experience, from which all of Lorentz's electrodynamics follows: the principle of relativity and the principle of the constancy of the speed of light. Taken together (along with a few other tacit assumptions such as isotropy and homogeneity of space), these two postulates lead uniquely to the mathematics of special relativity. Lorentz and Poincaré had also adopted these same principles, as necessary to achieve their final results, but didn't recognize that they were also sufficient, and hence that they obviated all the other assumptions underlying Lorentz's initial derivations (many of which later turned out to be incorrect). Therefore, special relativity very quickly gained wide acceptance among physicists, and the 19th-century concept of a luminiferous aether was no longer considered useful. Poincaré (1905) and Hermann Minkowski (1908) showed that special relativity had a very natural interpretation in terms of a unified four-dimensional "spacetime" in which absolute intervals are seen to be given by an extension of the Pythagorean theorem. The utility and naturalness of the spacetime representation contributed to the rapid acceptance of special relativity, and to the corresponding loss of interest in Lorentz's aether theory. In 1909 and 1912 Einstein explained: ...it is impossible to base a theory of the transformation laws of space and time on the principle of relativity alone. 
As we know, this is connected with the relativity of the concepts of "simultaneity" and "shape of moving bodies." To fill this gap, I introduced the principle of the constancy of the velocity of light, which I borrowed from H. A. Lorentz’s theory of the stationary luminiferous aether, and which, like the principle of relativity, contains a physical assumption that seemed to be justified only by the relevant experiments (experiments by Fizeau, Rowland, etc.) In 1907 Einstein criticized the "ad hoc" character of Lorentz's contraction hypothesis in his theory of electrons, because according to him it was an artificial assumption to make the Michelson–Morley experiment conform to Lorentz's stationary aether and the relativity principle. Einstein argued that Lorentz's "local time" can simply be called "time", and he stated that the immobile aether as the theoretical foundation of electrodynamics was unsatisfactory. He wrote in 1920: As to the mechanical nature of the Lorentzian aether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H. A. Lorentz. It may be added that the whole change in the conception of the aether which the special theory of relativity brought about, consisted in taking away from the aether its last mechanical quality, namely, its immobility. [...] More careful reflection teaches us, however, that the special theory of relativity does not compel us to deny aether. We may assume the existence of an aether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. Minkowski argued that Lorentz's introduction of the contraction hypothesis "sounds rather fantastical", since it is not the product of resistance in the aether but a "gift from above". He said that this hypothesis is "completely equivalent with the new concept of space and time", though it becomes much more comprehensible in the framework of the new spacetime geometry. However, Lorentz disagreed that it was "ad-hoc" and he argued in 1913 that there is little difference between his theory and the negation of a preferred reference frame, as in the theory of Einstein and Minkowski, so that it is a matter of taste which theory one prefers. === Mass–energy equivalence === It was derived by Einstein (1905) as a consequence of the relativity principle, that inertia of energy is actually represented by E / c 2 {\displaystyle E/c^{2}} , but in contrast to Poincaré's 1900-paper, Einstein recognized that matter itself loses or gains mass during the emission or absorption. So the mass of any form of matter is equal to a certain amount of energy, which can be converted into and re-converted from other forms of energy. This is the mass–energy equivalence, represented by E = m c 2 {\displaystyle E=mc^{2}} . So Einstein didn't have to introduce "fictitious" masses and also avoided the perpetual motion problem, because according to Darrigol, Poincaré's radiation paradox can simply be solved by applying Einstein's equivalence. If the light source loses mass during the emission by E / c 2 {\displaystyle E/c^{2}} , the contradiction in the momentum law vanishes without the need of any compensating effect in the aether. 
Similar to Poincaré, Einstein concluded in 1906 that the inertia of (electromagnetic) energy is a necessary condition for the center of mass theorem to hold in systems, in which electromagnetic fields and matter are acting on each other. Based on the mass–energy equivalence, he showed that emission and absorption of em-radiation, and therefore the transport of inertia, solves all problems. On that occasion, Einstein referred to Poincaré's 1900-paper and wrote: Although the simple formal views, which must be accomplished for the proof of this statement, are already mainly contained in a work by H. Poincaré [Lorentz-Festschrift, p. 252, 1900], for the sake of clarity I won't rely on that work. Also Poincaré's rejection of the reaction principle due to the violation of the mass conservation law can be avoided through Einstein's E = m c 2 {\displaystyle E=mc^{2}} , because mass conservation appears as a special case of the energy conservation law. === General relativity === The attempts of Lorentz and Poincaré (and other attempts like those of Abraham and Gunnar Nordström) to formulate a theory of gravitation were superseded by Einstein's theory of general relativity. This theory is based on principles like the equivalence principle, the general principle of relativity, the principle of general covariance, geodesic motion, local Lorentz covariance (the laws of special relativity apply locally for all inertial observers), and that spacetime curvature is created by stress-energy within the spacetime. In 1920, Einstein compared Lorentz's aether with the "gravitational aether" of general relativity. He said that immobility is the only mechanical property of which the aether has not been deprived by Lorentz, but, contrary to the luminiferous and Lorentz's aether, the aether of general relativity has no mechanical property, not even immobility: The aether of the general theory of relativity is a medium which is itself devoid of all mechanical and kinematical qualities, but which helps to determine mechanical (and electromagnetic) events. What is fundamentally new in the aether of the general theory of relativity, as opposed to the aether of Lorentz, consists in this, that the state of the former is at every place determined by connections with the matter and the state of the aether in neighbouring places, which are amenable to law in the form of differential equations; whereas the state of the Lorentzian aether in the absence of electromagnetic fields is conditioned by nothing outside itself, and is everywhere the same. The aether of the general theory of relativity is transmuted conceptually into the aether of Lorentz if we substitute constants for the functions of space which describe the former, disregarding the causes which condition its state. Thus we may also say, I think, that the aether of the general theory of relativity is the outcome of the Lorentzian aether, through relativization. === Priority === Some claim that Poincaré and Lorentz are the true founders of special relativity, not Einstein. For more details see the article on this dispute. == Later activity == Viewed as a theory of elementary particles, Lorentz's electron/ether theory was superseded during the first few decades of the 20th century, first by quantum mechanics and then by quantum field theory. As a general theory of dynamics, Lorentz and Poincare had already (by about 1905) found it necessary to invoke the principle of relativity itself in order to make the theory match all the available empirical data. 
By this point, most vestiges of a substantial aether had been eliminated from Lorentz's "aether" theory, and it became both empirically and deductively equivalent to special relativity. The main difference was the metaphysical postulate of a unique absolute rest frame, which was empirically undetectable and played no role in the physical predictions of the theory, as Lorentz wrote in 1909, 1910 (published 1913), 1913 (published 1914), or in 1912 (published 1922). As a result, the term "Lorentz aether theory" is sometimes used today to refer to a neo-Lorentzian interpretation of special relativity. The prefix "neo" is used in recognition of the fact that the interpretation must now be applied to physical entities and processes (such as the standard model of quantum field theory) that were unknown in Lorentz's day. Subsequent to the advent of special relativity, only a small number of individuals have advocated the Lorentzian approach to physics. Many of these, such as Herbert E. Ives (who, along with G. R. Stilwell, performed the first experimental confirmation of time dilation) have been motivated by the belief that special relativity is logically inconsistent, and so some other conceptual framework is needed to reconcile the relativistic phenomena. For example, Ives wrote "The 'principle' of the constancy of the velocity of light is not merely 'ununderstandable', it is not supported by 'objective matters of fact'; it is untenable...". However, the logical consistency of special relativity (as well as its empirical success) is well established, so the views of such individuals are considered unfounded within the mainstream scientific community. John Stewart Bell advocated teaching special relativity first from the viewpoint of a single Lorentz inertial frame, then showing that Poincare invariance of the laws of physics such as Maxwell's equations is equivalent to the frame-changing arguments often used in teaching special relativity. Because a single Lorentz inertial frame is one of a preferred class of frames, he called this approach Lorentzian in spirit. Also some test theories of special relativity use some sort of Lorentzian framework. For instance, the Robertson–Mansouri–Sexl test theory introduces a preferred aether frame and includes parameters indicating different combinations of length and times changes. If time dilation and length contraction of bodies moving in the aether have their exact relativistic values, the complete Lorentz transformation can be derived and the aether is hidden from any observation, which makes it kinematically indistinguishable from the predictions of special relativity. Using this model, the Michelson–Morley experiment, Kennedy–Thorndike experiment, and Ives–Stilwell experiment put sharp constraints on violations of Lorentz invariance. == References == For a more complete list with sources of many other authors, see History of special relativity#References. === Works of Lorentz, Poincaré, Einstein, Minkowski (group A) === === Secondary sources (group B) === === Other notes and comments (group C) === == External links == Mathpages: Corresponding States, The End of My Latin, Who Invented Relativity?, Poincaré Contemplates Copernicus, Whittaker and the Aether, Another Derivation of Mass-Energy Equivalence
Wikipedia/Lorentz_ether_theory
In the history of physics, a line of force in Michael Faraday's extended sense is synonymous with James Clerk Maxwell's line of induction. According to J.J. Thomson, Faraday usually discusses lines of force as chains of polarized particles in a dielectric, yet sometimes Faraday discusses them as having an existence all their own as in stretching across a vacuum. In addition to lines of force, J.J. Thomson—similar to Maxwell—also calls them tubes of electrostatic inductance, or simply Faraday tubes. From the 20th century perspective, lines of force are energy linkages embedded in a 19th-century field theory that led to more mathematically and experimentally sophisticated concepts and theories, including Maxwell's equations and Albert Einstein's theory of relativity. Lines of force originated with Michael Faraday, whose theory holds that all of reality is made up of force itself. His theory predicts that electricity, light, and gravity have finite propagation delays. The theories and experimental data of later scientific figures such as Maxwell, Heinrich Hertz, Einstein, and others are in agreement with the ramifications of Faraday's theory. Nevertheless, Faraday's theory remains distinct. Unlike Faraday, Maxwell and others (e.g., J.J. Thomson) thought that light and electricity must propagate through an ether. In Einstein's relativity, there is no ether, yet the physical reality of force is much weaker than in the theories of Faraday. Historian Nancy J. Nersessian in her paper "Faraday's Field Concept" distinguishes between the ideas of Maxwell and Faraday: The specific features of Faraday's field concept, in its 'favourite' and most complete form, are that force is a substance, that it is the only substance and that all forces are interconvertible through various motions of the lines of force. These features of Faraday's 'favourite notion' were not carried on. Maxwell, in his approach to the problem of finding a mathematical representation for the continuous transmission of electric and magnetic forces, considered these to be states of stress and strain in a mechanical aether. This was part of the quite different network of beliefs and problems with which Maxwell was working. == Views of Faraday == At first Michael Faraday considered the physical reality of the lines of force as a possibility, yet several scholars agree that for Faraday their physical reality became a conviction. One scholar dates this change in the year 1838. Another scholar dates this final strengthening of his belief in 1852. Faraday experimentally studied lines of magnetic force and electrostatic force, showing them not to fit action at a distance models. In 1852 Faraday wrote the paper "On the Physical Character of the Lines of Magnetic Force" which examined gravity, radiation, and electricity, and their possible relationships with the transmission medium, transmission propagation, and the receiving entity. == Views of Maxwell == Initially, James Clerk Maxwell took an agnostic approach in his mathematization of Faraday's theories. This is seen in Maxwell's 1855 and 1856 papers: "On Faraday's Lines of Force" and "On Faraday's Electrotontic State". In the 1864 paper "A Dynamical Theory of the Electromagnetic Field" Maxwell gives scientific priority of the electromagnetic theory of light to Faraday and his 1846 paper "Thoughts on Ray Vibrations". 
Maxwell wrote: Faraday discovered that when a plane polarized ray traverses a transparent diamagnetic medium in the direction of the lines of magnetic force produced by magnets or currents in the neighborhood, the plane of polarization is caused to rotate. The conception of the propagation of transverse magnetic disturbances to the exclusion of normal ones is distinctly set forth by Professor Faraday in his "Thoughts on Ray Vibrations." The electromagnetic theory of light, as proposed by him, is the same in substance as that which I have begun to develop in this paper, except that in 1846 there was no data to calculate the velocity of propagation. == Tube of force == Maxwell changed Faraday's phrase lines of force to tubes of force when expressing the fluidic assumptions involved in his mathematization of Faraday's theories. A tube of force, also called a tube of electrostatic induction or field tube, is a bundle of electric lines of force that moves so that its beginning traces a closed curve on a positive surface, its end traces a corresponding closed curve on the negative surface, and the lines of force themselves generate an inductive tubular surface. Such a tube is called a "solenoid". There is a pressure at right angles to a tube of force of one half the product of the dielectric and magnetic density. If, through the growth of a field, the tubes of force are spread sideways or in width, there is a magnetic reaction to that growth in the intensity of the electric current. However, if a tube of force is caused to move endwise, there is little or no drag to limit velocity. Tubes of force are absorbed by bodies, imparting momentum and gravitational mass. Tubes of force are a group of electric lines of force. == Magnetic curves == Early on in his research (circa 1831), Faraday calls the patterns of apparently continuous curves traced out in metallic filings near a magnet "magnetic curves". Later on he refers to them as just an instance of magnetic lines of force or simply lines of force. Eventually Faraday would also begin to use the phrase "magnetic field". == See also == Field line Flux tube Flux == Other relevant papers == Faraday, Michael, "Thoughts on Ray Vibrations", Philosophical Magazine, May 1846, or Experimental Researches, iii, p. 447 Faraday, Michael, Experimental Researches, Series 19. == Notes ==
Wikipedia/Lines_of_force
Kinetic exchange models are multi-agent dynamic models inspired by the statistical physics of energy distribution, which try to explain the robust and universal features of income/wealth distributions. Understanding the distributions of income and wealth in an economy has been a classic problem in economics for more than a hundred years. Today it is one of the main branches of econophysics. == Data and basic tools == In 1897, Vilfredo Pareto first found a universal feature in the distribution of wealth. After that, with some notable exceptions, this field had been dormant for many decades, although accurate data had been accumulated over this period. Considerable investigations with real data during the period 1995–2010 revealed that the tail (typically 5 to 10 percent of agents in any country) of the income/wealth distribution indeed follows a power law. However, the majority of the population (i.e., the low-income population) follows a different distribution, which is debated to be either Gibbs or log-normal. Basic tools used in this type of modelling are probabilistic and statistical methods mostly taken from the kinetic theory of statistical physics. Monte Carlo simulations often come in handy in solving these models. == Overview of the models == Since the distributions of income/wealth are the results of the interaction among many heterogeneous agents, there is an analogy with statistical mechanics, where many particles interact. This similarity was noted by Meghnad Saha and B. N. Srivastava in 1931 and thirty years later by Benoit Mandelbrot. In 1986, an elementary version of the stochastic exchange model was first proposed by J. Angle. In the context of the kinetic theory of gases, such an exchange model was first investigated by A. Dragulescu and V. Yakovenko. Later, scholars found that in 1988, Bennati had independently introduced the same kinetic exchange dynamics, thus leading to the nomenclature of this model as the Bennati-Dragulescu-Yakovenko (BDY) game. The main modelling efforts since then have been put into introducing the concepts of savings and taxation in the setting of an ideal gas-like system. Basically, it assumes that in the short run an economy remains conserved in terms of income/wealth; therefore a law of conservation for income/wealth can be applied. Millions of such conservative transactions lead to a steady state distribution of money (gamma function-like in the Chakraborti-Chakrabarti model with uniform savings, and a gamma-like bulk distribution ending with a Pareto tail in the Chatterjee-Chakrabarti-Manna model with distributed savings) and the distribution converges to it. The distributions derived in this way closely resemble those found in empirical cases of income/wealth distributions. Though this theory had been originally derived from the entropy maximization principle of statistical mechanics, it had been shown by A. S. Chakrabarti and B. K. Chakrabarti that the same could be derived from the utility maximization principle as well, following a standard exchange model with a Cobb-Douglas utility function. Recently it has been shown that an extension of the Cobb-Douglas utility function (in the above-mentioned Chakrabarti-Chakrabarti formulation) by adding a production savings factor leads to the desired feature of growth of the economy in conformity with some earlier phenomenologically established growth laws in the economics literature.
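The basic exchange rule is simple enough to simulate directly. The sketch below is an illustrative Monte Carlo implementation, not taken from any of the cited papers, of the conserved two-agent exchange with a uniform saving propensity lambda as in the Chakraborti-Chakrabarti model; setting lambda = 0 recovers the Dragulescu-Yakovenko (BDY) exchange, whose steady state is the exponential (Gibbs) distribution. All parameter values and function names are arbitrary choices made for the example.

```python
# Minimal sketch (illustrative assumptions, not from the cited papers):
# conserved kinetic exchange with a uniform saving propensity lambda.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=1000, steps=200_000, lam=0.5, m0=1.0):
    m = np.full(n_agents, m0)                  # every agent starts with money m0
    for _ in range(steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        eps = rng.random()                     # random split of the pooled, non-saved money
        pool = (1.0 - lam) * (m[i] + m[j])     # total money is conserved in each trade
        m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1.0 - eps) * pool
    return m

money = simulate()
print("mean money:", money.mean(), "spread (std/mean):", money.std() / money.mean())
```

Histogramming the resulting money array shows the gamma-like shape described above; replacing the single lambda with agent-dependent saving propensities gives the Pareto-tailed variant of the Chatterjee-Chakrabarti-Manna model.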
The exact distributions produced by this class of kinetic models are known only in certain limits, and extensive investigations have been made on the mathematical structures of this class of models. The general forms have not been derived so far. For a recent review (2024) of these developments, covering the last twenty-five years of research on kinetic exchange modelling of income and wealth dynamics and the resulting statistical properties, see the article by M. Greenberg (Dept. of Economics, University of Massachusetts Amherst & Systems Engineering, Cornell University) and H. Oliver Gao (Systems Engineering, Cornell University). A very simple model based on the same kinetic exchange framework was introduced by Chakraborti in 2002 and is now popularly called the "yard sale model" because it captures a few features of real one-on-one economic transactions. It leads to an oligarchy and has been extensively studied and reviewed by Boghosian. == Criticisms == This class of models has attracted criticisms from many dimensions. It has long been debated whether the distributions derived from these models represent income distributions or wealth distributions. The law of conservation for income/wealth has also been a subject of criticism. == See also == Economic inequality Econophysics Thermoeconomics Wealth condensation == References == == Further reading == Brian Hayes, Follow the money, American Scientist, 90:400-405 (Sept.-Oct., 2002) Jenny Hogan, There's only one rule for rich, New Scientist, 6-7 (12 March 2005) Peter Markowich, Applied Partial Differential Equations, Springer-Verlag (Berlin, 2007) Arnab Chatterjee, Bikas K Chakrabarti, Kinetic exchange models for income and wealth distribution, European Physical Journal B, 60:135-149(2007) Victor Yakovenko, J. B. Rosser, Colloquium: statistical mechanics of money, wealth and income, Reviews of Modern Physics 81:1703-1725 (2009) Thomas Lux, F. Westerhoff, Economics crisis, Nature Physics, 5:2 (2009) Sitabhra Sinha, Bikas K Chakrabarti, Towards a physics of economics, Physics News 39(2) 33-46 (April 2009) Stephen Battersby, The physics of our finances, New Scientist, p. 41 (28 July 2012) Bikas K Chakrabarti, Anirban Chakraborti, Satya R Chakravarty, Arnab Chatterjee, Econophysics of Income & Wealth Distributions, Cambridge University Press (Cambridge 2013). Lorenzo Pareschi and Giuseppe Toscani, Interacting Multiagent Systems: Kinetic equations and Monte Carlo methods Oxford University Press (Oxford 2013) Kishore Chandra Dash, "Story of Econophysics" Cambridge Scholars Press (UK, 2019) Marcelo Byrro Ribeiro, Income Distribution Dynamics of Economic Systems: An Econophysical Approach, Cambridge University Press (Cambridge, UK, 2020) Giuseppe Toscani, Parongama Sen and Soumyajyoti Biswas (Eds), "Kinetic exchange models of societies and economies" Philosophical Transactions of the Royal Society A 380: 20210170 (Special Issue, May 2022)
Wikipedia/Kinetic_exchange_models_of_markets
The Wheeler–Feynman absorber theory (also called the Wheeler–Feynman time-symmetric theory), named after its originators, the physicists Richard Feynman and John Archibald Wheeler, is a theory of electrodynamics based on a relativistically correct extension of action-at-a-distance theory to electron particles. The theory postulates no independent electromagnetic field. Rather, the whole theory is encapsulated by the Lorentz-invariant action S {\displaystyle S} of particle trajectories a μ ( τ ) , b μ ( τ ) , ⋯ {\displaystyle a^{\mu }(\tau ),\,\,b^{\mu }(\tau ),\,\,\cdots } defined as S = − ∑ a m a c ∫ − d a μ d a μ + ∑ a < b e a e b c ∫ ∫ δ ( a b μ a b μ ) d a ν d b ν , {\displaystyle S=-\sum _{a}m_{a}c\int {\sqrt {-da_{\mu }da^{\mu }}}+\sum _{a<b}{\frac {e_{a}e_{b}}{c}}\int \int \delta (ab_{\mu }ab^{\mu })\,da_{\nu }db^{\nu },} where a b μ ≡ a μ − b μ {\displaystyle ab_{\mu }\equiv a_{\mu }-b_{\mu }} . The absorber theory is invariant under time-reversal transformation, consistent with the lack of any physical basis for microscopic time-reversal symmetry breaking. Another key principle resulting from this interpretation, and somewhat reminiscent of Mach's principle and the work of Hugo Tetrode, is that elementary particles are not self-interacting. This immediately removes the problem of electron self-energy giving an infinity in the energy of an electromagnetic field. == Motivation == Wheeler and Feynman begin by observing that classical electromagnetic field theory was designed before the discovery of electrons: charge is a continuous substance in the theory. An electron particle does not naturally fit into the theory: should a point charge see the effect of its own field? They reconsider the fundamental problem of a collection of point charges, taking up a field-free action at a distance theory developed separately by Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker. Unlike instantaneous action at a distance theories of the early 1800s, these "direct interaction" theories are based on interaction propagation at the speed of light. They differ from the classical field theory in three ways: (1) no independent field is postulated; (2) the point charges do not act upon themselves; (3) the equations are time symmetric. Wheeler and Feynman propose to develop these equations into a relativistically correct generalization of electromagnetism based on Newtonian mechanics. == Problems with previous direct-interaction theories == The Tetrode–Fokker work left two major problems unsolved.: 171  First, in a non-instantaneous action at a distance theory, the equal action-reaction of Newton's laws of motion conflicts with causality. If an action propagates forward in time, the reaction would necessarily propagate backwards in time. Second, existing explanations of radiation reaction force or radiation resistance depended upon accelerating electrons interacting with their own field; the direct interaction models explicitly omit self-interaction. == Absorber and radiation resistance == Wheeler and Feynman postulate the "universe" of all other electrons as an absorber of radiation to overcome these issues and extend the direct interaction theories. Rather than considering an unphysical isolated point charge, they model all charges in the universe with a uniform absorber in a shell around a charge. As the charge moves relative to the absorber, it radiates into the absorber, which "pushes back", causing the radiation resistance. == Key result == Feynman and Wheeler obtained their result in a very simple and elegant way. 
They considered all the charged particles (emitters) present in our universe and assumed all of them to generate time-reversal symmetric waves. The resulting field is E tot ( x , t ) = ∑ n E n ret ( x , t ) + E n adv ( x , t ) 2 . {\displaystyle E_{\text{tot}}(\mathbf {x} ,t)=\sum _{n}{\frac {E_{n}^{\text{ret}}(\mathbf {x} ,t)+E_{n}^{\text{adv}}(\mathbf {x} ,t)}{2}}.} Then they observed that if the relation E free ( x , t ) = ∑ n E n ret ( x , t ) − E n adv ( x , t ) 2 = 0 {\displaystyle E_{\text{free}}(\mathbf {x} ,t)=\sum _{n}{\frac {E_{n}^{\text{ret}}(\mathbf {x} ,t)-E_{n}^{\text{adv}}(\mathbf {x} ,t)}{2}}=0} holds, then E free {\displaystyle E_{\text{free}}} , being a solution of the homogeneous Maxwell equation, can be used to obtain the total field E tot ( x , t ) = ∑ n E n ret ( x , t ) + E n adv ( x , t ) 2 + ∑ n E n ret ( x , t ) − E n adv ( x , t ) 2 = ∑ n E n ret ( x , t ) . {\displaystyle E_{\text{tot}}(\mathbf {x} ,t)=\sum _{n}{\frac {E_{n}^{\text{ret}}(\mathbf {x} ,t)+E_{n}^{\text{adv}}(\mathbf {x} ,t)}{2}}+\sum _{n}{\frac {E_{n}^{\text{ret}}(\mathbf {x} ,t)-E_{n}^{\text{adv}}(\mathbf {x} ,t)}{2}}=\sum _{n}E_{n}^{\text{ret}}(\mathbf {x} ,t).} The total field is then the observed pure retarded field.: 173  The assumption that the free field is identically zero is the core of the absorber idea. It means that the radiation emitted by each particle is completely absorbed by all other particles present in the universe. To better understand this point, it may be useful to consider how the absorption mechanism works in common materials. At the microscopic scale, it results from the sum of the incoming electromagnetic wave and the waves generated from the electrons of the material, which react to the external perturbation. If the incoming wave is absorbed, the result is a zero outgoing field. In the absorber theory the same concept is used, however, in presence of both retarded and advanced waves. == Arrow of time ambiguity == The resulting wave appears to have a preferred time direction, because it respects causality. However, this is only an illusion. Indeed, it is always possible to reverse the time direction by simply exchanging the labels emitter and absorber. Thus, the apparently preferred time direction results from the arbitrary labelling.: 52  Wheeler and Feynman claimed that thermodynamics picked the observed direction; cosmological selections have also been proposed. The requirement of time-reversal symmetry, in general, is difficult to reconcile with the principle of causality. Maxwell's equations and the equations for electromagnetic waves have, in general, two possible solutions: a retarded (delayed) solution and an advanced one. Accordingly, any charged particle generates waves, say at time t 0 = 0 {\displaystyle t_{0}=0} and point x 0 = 0 {\displaystyle x_{0}=0} , which will arrive at point x 1 {\displaystyle x_{1}} at the instant t 1 = x 1 / c {\displaystyle t_{1}=x_{1}/c} (here c {\displaystyle c} is the speed of light), after the emission (retarded solution), and other waves, which will arrive at the same place at the instant t 2 = − x 1 / c {\displaystyle t_{2}=-x_{1}/c} , before the emission (advanced solution). The latter, however, violates the causality principle: advanced waves could be detected before their emission. Thus the advanced solutions are usually discarded in the interpretation of electromagnetic waves. 
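As a purely illustrative aid, not part of the original papers, the short sketch below evaluates a spherically symmetric retarded solution s(t − r/c)/r and the corresponding advanced solution s(t + r/c)/r for a localized source pulse s(t) emitted around t = 0. At a fixed distance r the retarded signal peaks after the emission and the advanced one before it, which is exactly the asymmetry described above. The pulse shape, distance, and units are arbitrary assumptions made for the example.

```python
# Minimal sketch (assumed toy source): retarded vs. advanced spherical waves.
import numpy as np

c = 1.0
r = 3.0                                   # observation distance (arbitrary units)
t = np.linspace(-6, 6, 2401)
s = lambda tau: np.exp(-tau**2 / 0.1)     # localized source pulse centred at t = 0

phi_ret = s(t - r / c) / r                # retarded solution: arrives at t = +r/c
phi_adv = s(t + r / c) / r                # advanced solution: arrives at t = -r/c
phi_sym = 0.5 * (phi_ret + phi_adv)       # time-symmetric combination used by
                                          # Wheeler and Feynman for each source

print("retarded peak at t =", t[np.argmax(phi_ret)])   # approximately +3
print("advanced peak at t =", t[np.argmax(phi_adv)])   # approximately -3
```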
In the absorber theory, instead charged particles are considered as both emitters and absorbers, and the emission process is connected with the absorption process as follows: Both the retarded waves from emitter to absorber and the advanced waves from absorber to emitter are considered. The sum of the two, however, results in causal waves, although the anti-causal (advanced) solutions are not discarded a priori. Alternatively, the way that Wheeler/Feynman came up with the primary equation is: They assumed that their Lagrangian only interacted when and where the fields for the individual particles were separated by a proper time of zero. So since only massless particles propagate from emission to detection with zero proper time separation, this Lagrangian automatically demands an electromagnetic like interaction. == New interpretation of radiation damping == One of the major results of the absorber theory is the elegant and clear interpretation of the electromagnetic radiation process. A charged particle that experiences acceleration is known to emit electromagnetic waves, i.e., to lose energy. Thus, the Newtonian equation for the particle ( F = m a {\displaystyle F=ma} ) must contain a dissipative force (damping term), which takes into account this energy loss. In the causal interpretation of electromagnetism, Hendrik Lorentz and Max Abraham proposed that such a force, later called Abraham–Lorentz force, is due to the retarded self-interaction of the particle with its own field. This first interpretation, however, is not completely satisfactory, as it leads to divergences in the theory and needs some assumptions on the structure of charge distribution of the particle. Paul Dirac generalized the formula to make it relativistically invariant. While doing so, he also suggested a different interpretation. He showed that the damping term can be expressed in terms of a free field acting on the particle at its own position: E damping ( x j , t ) = E j ret ( x j , t ) − E j adv ( x j , t ) 2 . {\displaystyle E^{\text{damping}}(\mathbf {x} _{j},t)={\frac {E_{j}^{\text{ret}}(\mathbf {x} _{j},t)-E_{j}^{\text{adv}}(\mathbf {x} _{j},t)}{2}}.} However, Dirac did not propose any physical explanation of this interpretation. A clear and simple explanation can instead be obtained in the framework of absorber theory, starting from the simple idea that each particle does not interact with itself. This is actually the opposite of the first Abraham–Lorentz proposal. The field acting on the particle j {\displaystyle j} at its own position (the point x j {\displaystyle x_{j}} ) is then E tot ( x j , t ) = ∑ n ≠ j E n ret ( x j , t ) + E n adv ( x j , t ) 2 . {\displaystyle E^{\text{tot}}(\mathbf {x} _{j},t)=\sum _{n\neq j}{\frac {E_{n}^{\text{ret}}(\mathbf {x} _{j},t)+E_{n}^{\text{adv}}(\mathbf {x} _{j},t)}{2}}.} If we sum the free-field term of this expression, we obtain E tot ( x j , t ) = ∑ n ≠ j E n ret ( x j , t ) + E n adv ( x j , t ) 2 + ∑ n E n ret ( x j , t ) − E n adv ( x j , t ) 2 {\displaystyle E^{\text{tot}}(\mathbf {x} _{j},t)=\sum _{n\neq j}{\frac {E_{n}^{\text{ret}}(\mathbf {x} _{j},t)+E_{n}^{\text{adv}}(\mathbf {x} _{j},t)}{2}}+\sum _{n}{\frac {E_{n}^{\text{ret}}(\mathbf {x} _{j},t)-E_{n}^{\text{adv}}(\mathbf {x} _{j},t)}{2}}} and, thanks to Dirac's result, E tot ( x j , t ) = ∑ n ≠ j E n ret ( x j , t ) + E damping ( x j , t ) . 
{\displaystyle E^{\text{tot}}(\mathbf {x} _{j},t)=\sum _{n\neq j}E_{n}^{\text{ret}}(\mathbf {x} _{j},t)+E^{\text{damping}}(\mathbf {x} _{j},t).} Thus, the damping force is obtained without the need for self-interaction, which is known to lead to divergences, and also giving a physical justification to the expression derived by Dirac. == Developments since original formulation == === Gravity theory === Inspired by the Machian nature of the Wheeler–Feynman absorber theory for electrodynamics, Fred Hoyle and Jayant Narlikar proposed their own theory of gravity in the context of general relativity. This model still exists in spite of recent astronomical observations that have challenged the theory. Stephen Hawking had criticized the original Hoyle-Narlikar theory believing that the advanced waves going off to infinity would lead to a divergence, as indeed they would, if the universe were only expanding. === Transactional interpretation of quantum mechanics === Again inspired by the Wheeler–Feynman absorber theory, the transactional interpretation of quantum mechanics (TIQM) first proposed in 1986 by John G. Cramer, describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. Cramer claims it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes, such as quantum nonlocality, quantum entanglement and retrocausality. === Attempted resolution of causality === T. C. Scott and R. A. Moore demonstrated that the apparent acausality suggested by the presence of advanced Liénard–Wiechert potentials could be removed by recasting the theory in terms of retarded potentials only, without the complications of the absorber idea. The Lagrangian describing a particle ( p 1 {\displaystyle p_{1}} ) under the influence of the time-symmetric potential generated by another particle ( p 2 {\displaystyle p_{2}} ) is L 1 = T 1 − 1 2 ( ( V R ) 1 2 + ( V A ) 1 2 ) , {\displaystyle L_{1}=T_{1}-{\frac {1}{2}}\left((V_{R})_{1}^{2}+(V_{A})_{1}^{2}\right),} where T i {\displaystyle T_{i}} is the relativistic kinetic energy functional of particle p i {\displaystyle p_{i}} , and ( V R ) i j {\displaystyle (V_{R})_{i}^{j}} and ( V A ) i j {\displaystyle (V_{A})_{i}^{j}} are respectively the retarded and advanced Liénard–Wiechert potentials acting on particle p i {\displaystyle p_{i}} and generated by particle p j {\displaystyle p_{j}} . The corresponding Lagrangian for particle p 2 {\displaystyle p_{2}} is L 2 = T 2 − 1 2 ( ( V R ) 2 1 + ( V A ) 2 1 ) . {\displaystyle L_{2}=T_{2}-{\frac {1}{2}}\left((V_{R})_{2}^{1}+(V_{A})_{2}^{1}\right).} It was originally demonstrated with computer algebra and then proven analytically that ( V R ) j i − ( V A ) i j {\displaystyle (V_{R})_{j}^{i}-(V_{A})_{i}^{j}} is a total time derivative, i.e. a divergence in the calculus of variations, and thus it gives no contribution to the Euler–Lagrange equations. Thanks to this result the advanced potentials can be eliminated; here the total derivative plays the same role as the free field. The Lagrangian for the N-body system is therefore L = ∑ i = 1 N T i − 1 2 ∑ i ≠ j N ( V R ) j i . {\displaystyle L=\sum _{i=1}^{N}T_{i}-{\frac {1}{2}}\sum _{i\neq j}^{N}(V_{R})_{j}^{i}.} The resulting Lagrangian is symmetric under the exchange of p i {\displaystyle p_{i}} with p j {\displaystyle p_{j}} . 
For N = 2 {\displaystyle N=2} this Lagrangian will generate exactly the same equations of motion as L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} . Therefore, from the point of view of an outside observer, everything is causal. This formulation reflects particle-particle symmetry with the variational principle applied to the N-particle system as a whole, and thus Tetrode's Machian principle. Only if we isolate the forces acting on a particular body do the advanced potentials make their appearance. This recasting of the problem comes at a price: the N-body Lagrangian depends on all the time derivatives of the curves traced by all particles, i.e. the Lagrangian is infinite-order. However, much progress was made in examining the unresolved issue of quantizing the theory. Also, this formulation recovers the Darwin Lagrangian, from which the Breit equation was originally derived, but without the dissipative terms. This ensures agreement with theory and experiment, up to but not including the Lamb shift. Numerical solutions for the classical problem were also found. Furthermore, Moore showed that a model by Feynman and Albert Hibbs is amenable to the methods of higher than first-order Lagrangians and revealed chaotic-like solutions. Moore and Scott showed that the radiation reaction can be alternatively derived using the notion that, on average, the net dipole moment is zero for a collection of charged particles, thereby avoiding the complications of the absorber theory. The acausality may thus be viewed as merely apparent, and the entire problem goes away. An opposing view was held by Einstein. === Alternative Lamb shift calculation === As mentioned previously, a serious criticism of the absorber theory is that its Machian assumption that point particles do not act on themselves does not allow (infinite) self-energies and consequently an explanation for the Lamb shift according to quantum electrodynamics (QED). Ed Jaynes proposed an alternate model where the Lamb-like shift is due instead to the interaction with other particles, very much along the same lines as the Wheeler–Feynman absorber theory itself. One simple model is to calculate the motion of an oscillator coupled directly with many other oscillators. Jaynes has shown that it is easy to get both spontaneous emission and Lamb shift behavior in classical mechanics. Furthermore, Jaynes' alternative provides a solution to the process of "addition and subtraction of infinities" associated with renormalization. This model leads to the same type of Bethe logarithm (an essential part of the Lamb shift calculation), vindicating Jaynes' claim that two different physical models can be mathematically isomorphic to each other and therefore yield the same results, a point also apparently made by Scott and Moore on the issue of causality. == Relationship to quantum field theory == This universal absorber theory is mentioned in the chapter titled "Monster Minds" in Feynman's autobiographical work Surely You're Joking, Mr. Feynman! and in Vol. II of the Feynman Lectures on Physics. It led to the formulation of a framework of quantum mechanics using a Lagrangian and action as starting points, rather than a Hamiltonian, namely the formulation using Feynman path integrals, which proved useful in Feynman's earliest calculations in quantum electrodynamics and quantum field theory in general. 
Both retarded and advanced fields appear respectively as retarded and advanced propagators and also in the Feynman propagator and the Dyson propagator. In hindsight, the relationship between retarded and advanced potentials shown here is not so surprising as, in quantum field theory, the advanced propagator can be obtained from the retarded propagator by exchanging the roles of field source and test particle (usually within the kernel of a Green's function formalism). In quantum field theory, advanced and retarded fields are simply viewed as mathematical solutions of Maxwell's equations whose combinations are decided by the boundary conditions. == See also == Abraham–Lorentz force Causality Paradox of radiation of charged particles in a gravitational field Retrocausality Symmetry in physics and T-symmetry Transactional interpretation Two-state vector formalism == Notes == == Sources == Wheeler, J. A.; Feynman, R. P. (April 1945). "Interaction with the Absorber as the Mechanism of Radiation" (PDF). Reviews of Modern Physics. 17 (2–3): 157–181. Bibcode:1945RvMP...17..157W. doi:10.1103/RevModPhys.17.157. Wheeler, J. A.; Feynman, R. P. (July 1949). "Classical Electrodynamics in Terms of Direct Interparticle Action". Reviews of Modern Physics. 21 (3): 425–433. Bibcode:1949RvMP...21..425W. doi:10.1103/RevModPhys.21.425.
Wikipedia/Wheeler–Feynman_time-symmetric_theory
The theoretical study of time travel generally follows the laws of general relativity. Quantum mechanics requires physicists to solve equations describing how probabilities behave along closed timelike curves (CTCs), which are theoretical loops in spacetime that might make it possible to travel through time. In the 1980s, Igor Novikov proposed the self-consistency principle. According to this principle, any changes made by a time traveler in the past must not create historical paradoxes. If a time traveler attempts to change the past, the laws of physics will ensure that events unfold in a way that avoids paradoxes. This means that while a time traveler can influence past events, those influences must ultimately lead to a consistent historical narrative. However, Novikov's self-consistency principle has been debated in relation to certain interpretations of quantum mechanics. Specifically, it raises questions about how it interacts with fundamental principles such as unitarity and linearity. Unitarity ensures that the total probability of all possible outcomes in a quantum system always sums to 1, preserving the predictability of quantum events. Linearity ensures that quantum evolution preserves superpositions, allowing quantum systems to exist in multiple states simultaneously. There are two main approaches to explaining quantum time travel while incorporating Novikov's self-consistency principle. The first approach uses density matrices to describe the probabilities of different outcomes in quantum systems, providing a statistical framework that can accommodate the constraints of CTCs. The second approach involves state vectors, which describe the quantum state of a system. Both approaches can lead to insights into how time travel might be reconciled with quantum mechanics, although they may introduce concepts that challenge conventional understandings of these theories. == Deutsch's prescription for closed timelike curves (CTCs) == In 1991, David Deutsch proposed a method to explain how quantum systems interact with closed timelike curves (CTCs) using time evolution equations. This method aims to address paradoxes like the grandfather paradox, which suggests that a time traveler who stops their own birth would create a contradiction. One interpretation of Deutsch's approach is that it allows for self-consistency without necessarily implying the existence of parallel universes. === Method overview === To analyze the system, Deutsch divided it into two parts: a subsystem outside the CTC and the CTC itself. To describe the combined evolution of both parts over time, he used a unitary operator (U). This approach relies on a specific mathematical framework to describe quantum systems. The overall state is represented by combining the density matrices (ρ) for both parts using a tensor product (⊗). While Deutsch's approach does not assume initial correlation between these two parts, this does not inherently break time symmetry. Deutsch's proposal uses the following key equation to describe the fixed-point density matrix (ρCTC) for the CTC: ρ CTC = Tr A [ U ( ρ A ⊗ ρ CTC ) U † ] {\displaystyle \rho _{\text{CTC}}={\text{Tr}}_{A}\left[U\left(\rho _{A}\otimes \rho _{\text{CTC}}\right)U^{\dagger }\right]} . The unitary evolution involving both the CTC and the external subsystem determines the density matrix of the CTC as a fixed point, focusing on its state. === Ensuring self-consistency === Deutsch's proposal ensures that the CTC returns to a self-consistent state after each loop. 
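To make the fixed-point condition concrete, the following sketch is an illustrative numerical construction, not Deutsch's published circuit, that searches for a density matrix satisfying ρCTC = TrA[U(ρA ⊗ ρCTC)U†]. The chosen interaction, a CNOT in which the CTC qubit flips the chronology-respecting qubit A, and the initial state of A are assumptions made for the example; for this choice every diagonal ρCTC is self-consistent, and the iteration lands on the maximally mixed state, which is also the maximum-entropy fixed point that Deutsch's rule would select.

```python
# Illustrative sketch (assumed example): locate a state with
# rho_CTC = Tr_A[ U (rho_A ⊗ rho_CTC) U† ].
import numpy as np

def deutsch_map(rho_ctc, rho_A, U):
    """Apply Deutsch's consistency map once.  Tensor-product ordering is A ⊗ CTC."""
    dA, dC = rho_A.shape[0], rho_ctc.shape[0]
    joint = U @ np.kron(rho_A, rho_ctc) @ U.conj().T
    joint = joint.reshape(dA, dC, dA, dC)      # indices (a, c, a', c')
    return np.einsum('acad->cd', joint)        # partial trace over subsystem A

def fixed_point(rho_A, U, dC, n_iter=500):
    """Cesàro-averaged iteration; for this completely positive map it converges to a fixed point."""
    rho = np.eye(dC, dtype=complex) / dC       # start from the maximally mixed state
    avg = np.zeros((dC, dC), dtype=complex)
    for _ in range(n_iter):
        rho = deutsch_map(rho, rho_A, U)
        avg += rho
    avg /= n_iter
    return avg / np.trace(avg).real

# Assumed interaction: CNOT with the CTC qubit as control and qubit A as target,
# with A prepared in |0>.  Any diagonal rho_CTC is then self-consistent.
rho_A = np.array([[1, 0], [0, 0]], dtype=complex)
U = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0]], dtype=complex)    # basis ordering |a, c>

rho_ctc = fixed_point(rho_A, U, dC=2)
residual = np.linalg.norm(deutsch_map(rho_ctc, rho_A, U) - rho_ctc)
print(np.round(rho_ctc, 6), "consistency residual:", residual)
```

The same routine can be pointed at other unitaries; when several fixed points exist, Deutsch's prescription is to keep the one of maximum von Neumann entropy.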
However, if a system retains memories after traveling through a CTC, it could create scenarios where it appears to have experienced different possible pasts. Furthermore, Deutsch's method may not align with common probability calculations in quantum mechanics unless we consider multiple paths leading to the same outcome. There can also be multiple solutions (fixed points) for the system's state after the loop, introducing randomness (nondeterminism). Deutsch suggested using solutions that maximize entropy, aligning with systems' natural tendency to evolve toward higher entropy states. To calculate the final state outside the CTC, trace operations consider only the external system's state after combining both systems' evolution. === Implications and criticisms === Deutsch's approach has intriguing implications for paradoxes like the grandfather paradox. For instance, if everything except a single qubit travels through a time machine and flips its value according to a specific operator: U = ( 0 1 1 0 ) {\displaystyle U={\begin{pmatrix}0&1\\1&0\end{pmatrix}}} . Deutsch argues that maximizing von Neumann entropy is relevant in this context. In this scenario, outcomes may mix starting at 0 and ending at 1 or vice versa. While this interpretation can align with many-worlds views of quantum mechanics, it does not necessarily imply branching timelines after interacting with a CTC. Researchers have explored Deutsch's ideas further. If feasible, his model might allow computers near a time machine to solve problems beyond classical capabilities; however, debates about CTCs' feasibility continue. Despite its theoretical nature, Deutsch's proposal has faced significant criticism. For example, Tolksdorf and Verch demonstrated that quantum systems in spacetimes without CTCs can achieve results similar to Deutsch's criterion with any prescribed accuracy. This finding challenges claims that quantum simulations of CTCs are related to closed timelike curves as understood in general relativity. Their research also shows that classical systems governed by statistical mechanics could also meet these criteria without invoking peculiarities attributed solely to quantum mechanics. Consequently, they argue that their findings raise doubts about Deutsch's explanation of his time travel scenario using many-worlds interpretations of quantum physics. == Lloyd's prescription: Post-selection and time travel with CTCs == Seth Lloyd proposed an alternative approach to time travel with closed timelike curves (CTCs), based on "post-selection" and path integrals. Path integrals are a powerful tool in quantum mechanics that involve summing probabilities over all possible ways a system could evolve, including paths that do not strictly follow a single timeline. Unlike classical approaches, path integrals can accommodate histories involving CTCs, although their application requires careful consideration of quantum mechanics' principles. He proposes an equation that describes the transformation of the density matrix, which represents the system's state outside the CTC after a time loop: ρ f = C ρ i C † Tr [ C ρ i C † ] {\displaystyle \rho _{f}={\frac {C\rho _{i}C^{\dagger }}{{\text{Tr}}\left[C\rho _{i}C^{\dagger }\right]}}} , where C = Tr CTC [ U ] {\displaystyle C={\text{Tr}}_{\text{CTC}}\left[U\right]} . In this equation: ρ f {\displaystyle \rho _{f}} is the density matrix of the system after interacting with the CTC. ρ i {\displaystyle \rho _{i}} is the initial density matrix of the system before the time loop. 
C {\displaystyle C} is a transformation operator derived from the trace operation over the CTC, applied to the unitary evolution operator U {\displaystyle U} . The transformation relies on the trace operation, which summarizes aspects of the matrix. If this trace term is zero ( Tr [ C ρ i C † ] = 0 {\displaystyle {\text{Tr}}\left[C\rho _{i}C^{\dagger }\right]=0} ), it indicates that the transformation is invalid in that context, but does not directly imply a paradox like the grandfather paradox. Conversely, a non-zero trace suggests a valid transformation leading to a unique solution for the external system's state. Thus, Lloyd's approach aims to filter out histories that lead to inconsistencies by allowing only those consistent with both initial and final states. This aligns with post-selection, where specific outcomes are considered based on predetermined criteria; however, it does not guarantee that all paradoxical scenarios are eliminated. == Entropy and computation == Michael Devin (2001) proposed a model that incorporates closed timelike curves (CTCs) into thermodynamics, suggesting it as a potential way to address the grandfather paradox. This model introduces a "noise" factor to account for imperfections in time travel, proposing a framework that could help mitigate paradoxes. In contrast, Carlo Rovelli has argued that thermodynamics inhibits time travel to the past. == See also == Novikov self-consistency principle Grandfather paradox Causal loop Chronology protection conjecture Retrocausality == References ==
Wikipedia/Quantum_mechanics_of_time_travel
This timeline lists significant discoveries in physics and the laws of nature, including experimental discoveries, theoretical proposals that were confirmed experimentally, and theories that have significantly influenced current thinking in modern physics. Such discoveries are often a multi-step, multi-person process. Multiple discovery sometimes occurs when multiple research groups discover the same phenomenon at about the same time, and scientific priority is often disputed. The listings below include some of the most significant people and ideas by date of publication or experiment. == Antiquity == 624–546 BCE – Thales of Miletus: Introduced natural philosophy 610–546 BCE – Anaximander: Concept of Earth floating in space 460–370 BCE – Democritus: Atomism via thought experiment 384–322 BCE – Aristotle: Aristotelian physics, earliest effective theory of physics c. 300 BCE – Euclid: Euclidean geometry c. 250 BCE – Archimedes: Archimedes' principle 310–230 BCE – Aristarchos: Proposed heliocentrism 276–194 BCE – Eratosthenes: Circumference of the Earth measured 190–150 BCE – Seleucus: Support of heliocentrism based on reasoning 220–150 BCE – Apollonius and Hipparchus: Invention of the astrolabe 205–86 BCE – Hipparchus or unknown: Antikythera mechanism, an analog computer of planetary motions 129 BCE – Hipparchus: Hipparchus star catalog of the entire sky and precession of the equinoxes 60 CE – Hero of Alexandria: Catoptrics: Hero's principle of the shortest path of light c. 150 CE – Ptolemy: Ptolemaic model standardized geocentrism == Middle Ages == 500 CE – John Philoponus: Theory of impetus 984 CE – Ibn Sahl: Law of refraction 1010 – Ibn al-Haytham (Alhazen): Optics, finite speed of light c. 1030 – Ibn Sina (Avicenna): Concept of force c. 1050 – al-Biruni: Speed of light is much larger than speed of sound c. 1100 – Al-Baghdadi: Theory of motion with distinction between velocity and acceleration == 16th century == 1514 – Nicolaus Copernicus: Heliocentrism 1586 – Simon Stevin: Delft tower experiment == 17th century == 1608 – Earliest known telescopes 1609, 1619 – Kepler: Kepler's laws of planetary motion 1610 – Galileo Galilei: discovered the Galilean moons of Jupiter 1613 – Galileo Galilei: Inertia 1621 – Willebrord Snellius: Snell's law 1632 – Galileo Galilei: The Galilean principle (the laws of motion are the same in all inertial frames) 1660 – Blaise Pascal: Pascal's law 1660 – Robert Hooke: Hooke's law 1662 – Robert Boyle: Boyle's law 1663 – Otto von Guericke: first electrostatic generator 1676 – Ole Rømer: Rømer's determination of the speed of light from observations of the moons of Jupiter. 
1678 – Christiaan Huygens mathematical wave theory of light, published in his Treatise on Light 1687 – Isaac Newton: Newton's laws of motion, and Newton's law of universal gravitation == 18th century == 1738 – Daniel Bernoulli: First model of the kinetic theory of gases 1745–46 – Ewald Georg von Kleist and Pieter van Musschenbroek: discovery of the Leyden jar 1752 – Benjamin Franklin: kite experiment 1760 – Joseph-Louis Lagrange: Lagrangian mechanics 1782 – Antoine Lavoisier: conservation of mass 1785 – Charles-Augustin de Coulomb: Coulomb's inverse-square law for electric charges confirmed 1800 – Alessandro Volta: discovery of voltaic pile == 19th century == 1800 - William Herschel: Infrared light 1801 – Thomas Young: Wave theory of light 1801 - Johann Wilhelm Ritter: Ultraviolet light 1803 – John Dalton: Atomic theory of matter 1806 – Thomas Young: Kinetic energy 1814 – Augustin-Jean Fresnel: Wave theory of light, optical interference 1820 – André-Marie Ampère, Jean-Baptiste Biot, and Félix Savart: Evidence for electromagnetic interactions (Biot–Savart law) 1822 – Joseph Fourier: Heat equation 1824 – Nicolas Léonard Sadi Carnot: Ideal gas cycle analysis (Carnot cycle), internal combustion engine 1826 – Ampère's circuital law 1827 – Georg Ohm: Electrical resistance 1831 – Michael Faraday: Faraday's law of induction 1833 – William Rowan Hamilton: Hamiltonian mechanics 1838 – Michael Faraday: Lines of force 1838 – Wilhelm Eduard Weber and Carl Friedrich Gauss: Earth's magnetic field 1842–43 – William Thomson, 1st Baron Kelvin and Julius von Mayer: Conservation of energy 1842 – Christian Doppler: Doppler effect 1845 – Michael Faraday: Faraday rotation (interaction of light and magnetic field) 1847 – Hermann von Helmholtz & James Prescott Joule: Conservation of Energy 2 1850–51 – William Thomson, 1st Baron Kelvin & Rudolf Clausius: Second law of thermodynamics 1857 – Rudolf Clausius: Introduced translational, rotational, and vibrational molecular motions 1857 – Rudolf Clausius: Introduced the concept of mean free path 1860 – James Clerk Maxwell: Introduced statistical mechanics with the Maxwell distribution 1861 – Gustav Kirchhoff: Black body 1861–62 – Maxwell's equations 1863 – Rudolf Clausius: Entropy 1864 – James Clerk Maxwell: A Dynamical Theory of the Electromagnetic Field (electromagnetic radiation) 1867 – James Clerk Maxwell: On the Dynamical Theory of Gases (kinetic theory of gases) 1871–89 – Ludwig Boltzmann & Josiah Willard Gibbs: Statistical mechanics (Boltzmann equation, 1872) 1873 – Maxwell: A Treatise on Electricity and Magnetism 1884 – Boltzmann derives Stefan radiation law 1887 – Michelson–Morley experiment 1887 – Heinrich Rudolf Hertz: Electromagnetic waves 1888 – Johannes Rydberg: Rydberg formula 1889, 1892 – Lorentz-FitzGerald contraction 1893 – Wilhelm Wien: Wien's displacement law for black-body radiation 1895 – Wilhelm Röntgen: X-rays 1896 – Henri Becquerel: Radioactivity 1896 – Pieter Zeeman: Zeeman effect 1897 – J. J. Thomson: Electron discovered 1900 – Max Planck: Formula for black-body radiation – the quanta solution to radiation ultraviolet catastrophe 1900 - Paul Villard: Gamma rays == 20th century == 1904 – J. J. 
Thomson's plum pudding model of the atom 1905 – Albert Einstein: Special relativity, proposes light quantum (later named photon) to explain the photoelectric effect, Brownian motion, Mass–energy equivalence 1908 – Hermann Minkowski: Minkowski space 1911 – Ernest Rutherford: Discovery of the atomic nucleus (Rutherford model) 1911 – Kamerlingh Onnes: Superconductivity 1912 – Victor Francis Hess: Cosmic rays 1913 – Niels Bohr: Bohr model of the atom 1915 – Albert Einstein: General relativity 1915 – Emmy Noether: Noether's theorem relates symmetries to conservation laws. 1916 – Schwarzschild metric modeling gravity outside a large sphere 1917 – Ernest Rutherford: Proton discovered 1919 – Arthur Eddington: Light bending confirmed – evidence for general relativity 1919–1926 – Kaluza–Klein theory proposing unification of gravity and electromagnetism 1922 – Alexander Friedmann proposes expanding universe 1922–37 – Friedmann–Lemaître–Robertson–Walker metric cosmological model 1923 – Stern–Gerlach experiment 1923 – Edwin Hubble: Galaxies discovered 1923 – Arthur Compton: Particle nature of photons confirmed by observation of photon momentum 1924 – Bose–Einstein statistics 1924 – Louis de Broglie: De Broglie wave 1925 – Werner Heisenberg: Matrix mechanics 1925–27 – Niels Bohr & Max Planck: Quantum mechanics 1925 – Stellar structure understood 1926 – Fermi–Dirac statistics 1926 – Erwin Schrödinger: Schrödinger equation 1927 – Werner Heisenberg: Uncertainty principle 1927 – Georges Lemaître: Big Bang 1927 – Paul Dirac: Dirac equation 1927 – Max Born: Born rule 1928 – Paul Dirac proposes the antiparticle 1929 – Edwin Hubble: Expansion of the universe confirmed 1932 – Carl David Anderson: Antimatter (positrons) discovered 1932 – James Chadwick: Neutron discovered 1933 – Ernst Ruska: Invention of the electron microscope 1935 – Subrahmanyan Chandrasekhar: Chandrasekhar limit for black hole collapse 1937 – Majorana particle, hypothesized as a fermion that is its own antiparticle. 1937 – Muon discovered by Carl David Anderson and Seth Neddermeyer 1938 – Pyotr Kapitsa: Superfluidity discovered 1938 – Otto Hahn, Lise Meitner and Fritz Strassmann: Nuclear fission discovered 1938–39 – Stellar fusion explains energy production in stars 1939 – Uranium fission discovered 1941 – Feynman path integral 1944 – Theory of magnetism in 2D: Ising model 1947 – C.F. Powell, Giuseppe Occhialini, César Lattes: Pion discovered 1948 – Richard Feynman, Shinichiro Tomonaga, Julian Schwinger, Freeman Dyson: Quantum electrodynamics 1948 – Invention of the maser and laser by Charles Townes 1948 – Feynman diagrams 1955 – Emilio Segrè and Owen Chamberlain: Antiproton discovered 1956 – Bruce Cork: Antineutron discovered 1956 – Electron neutrino discovered 1956–57 – Parity violation proved by Chien-Shiung Wu 1957 – Many-worlds, also called the relative state formulation or the Everett interpretation. 1957 – BCS theory explaining superconductivity 1959–60 – Role of topology in quantum physics predicted and confirmed 1962 – Murray Gell-Mann and Yuval Ne'eman: SU(3) theory of strong interactions 1962 – Muon neutrino discovered 1963 – Chien-Shiung Wu confirms the conserved vector current theory for weak interactions 1963 – Murray Gell-Mann and George Zweig: Quarks predicted 1964 – Bell's Theorem initiates quantitative study of quantum entanglement 1964 – First black hole, Cygnus X-1, discovered 1964 – CP violation discovered by James Cronin and Val Fitch. 
1965 – Arno Penzias and Robert Wilson: Cosmic Microwave Background (CMB) discovered 1967 – Unification of weak interaction and electromagnetism (electroweak theory) 1967 – Solar neutrino problem found 1967 – Pulsars (rotating neutron stars) discovered 1968 – Experimental evidence for quarks found 1968 – Vera Rubin: Dark matter theories 1970–73 – Standard Model of elementary particles invented 1971 – Helium 3 superfluidity 1971–75 – Michael Fisher, Kenneth G. Wilson, and Leo Kadanoff: Renormalization group 1972 – Jacob Bekenstein: Black hole entropy suggested 1974 – Stephen Hawking: Black hole radiation (Hawking radiation) predicted 1974 – Charmed quark discovered 1975 – Tau lepton found 1975 – Abraham Pais and Sam Treiman: Introduction of the term "Standard Model" of particle physics 1977 – Bottom quark found 1977 – Anderson localization recognised (Nobel prize in 1977, Philip W. Anderson, Mott, Van Vleck) 1980 – Strangeness as a signature of quark-gluon plasma predicted 1980 – Richard Feynman proposes quantum computing 1980 – Quantum Hall effect 1981 – Alan Guth: Theory of cosmic inflation proposed 1981 – Fractional quantum Hall effect discovered 1982 – Aspect experiment confirms violations of Bell's inequalities 1983 – Simulated annealing 1984 – W and Z bosons directly observed 1984 – First laboratory implementation of quantum cryptography 1987 – High-temperature superconductivity discovered in 1986, awarded Nobel prize in 1987 (J. Georg Bednorz and K. Alexander Müller) 1989–98 – Quantum annealing 1993 – Quantum teleportation of unknown states proposed 1994 – Shor's algorithm discovered, initiating the serious study of quantum computation 1994–97 – Matrix models/M-theory 1995 – Wolfgang Ketterle: Bose–Einstein condensate observed 1995 – Top quark discovered 1995–2000 – Econophysics and Kinetic exchange models of markets 1997 – Juan Maldacena proposed the AdS/CFT correspondence 1998 – Accelerating expansion of the universe discovered by the Supernova Cosmology Project and the High-Z Supernova Search Team 1998 – Atmospheric neutrino oscillation established 1999 – Lene Vestergaard Hau: Slow light experimentally demonstrated 2000 – Quark-gluon plasma found 2000 – Tau neutrino found == 21st century == 2001 – Solar neutrino oscillation observed, resolving the solar neutrino problem 2003 – WMAP observations of cosmic microwave background 2004 – Exceptional properties of graphene discovered 2007 – Giant magnetoresistance recognized (Nobel prize, Albert Fert and Peter Grünberg) 2008 – First artificial production of antimatter (positrons), by the LLNL 2008 – 16-year study of stellar orbits around Sagittarius A* provides strong evidence for a supermassive black hole at the centre of the Milky Way galaxy 2009 – Planck begins observations of cosmic microwave background 2012 – Higgs boson found by the Compact Muon Solenoid and ATLAS experiments at the Large Hadron Collider 2015 – Gravitational waves are observed 2016 – Topological order – topological phase transitions and order – recognized (Nobel prize, David J. Thouless, F. Duncan M. Haldane and J. Michael Kosterlitz) 2019 – First image of a black hole 2023 – Experimental evidence of stochastic gravitational wave background 2023 – First "image" of the Milky Way in neutrinos instead of light == See also == Physics List of timelines List of unsolved problems in physics == References ==
Wikipedia/Timeline_of_developments_in_theoretical_physics
Double field theory in theoretical physics refers to formalisms that capture the T-duality property of string theory as a manifest symmetry of a field theory. == Background == In double field theory, the T-duality transformation of exchanging momentum and winding modes of closed strings on toroidal backgrounds translates to a generalized coordinate transformation on a doubled spacetime, where one set of its coordinates is dual to momentum modes and the second set of coordinates is interpreted as dual to winding modes of the closed string. Whether the second set of coordinates has physical meaning depends on how the level-matching condition of closed strings is implemented in the theory: either through the weak constraint or the strong constraint. In strongly constrained double field theory, which was introduced by Warren Siegel in 1993, the strong constraint ensures the dependency of the fields on only one set of the doubled coordinates; it describes the massless fields of closed string theory, i.e. the graviton, Kalb Ramond B-field, and dilaton, but does not include any winding modes, and serves as a T-duality invariant reformulation of supergravity. Weakly constrained double field theory, introduced by Chris Hull and Barton Zwiebach in 2009, allows for the fields to depend on the whole doubled spacetime and encodes genuine momentum and winding modes of the string. Double field theory has been a setting for studying various string theoretical properties such as: consistent Kaluza-Klein truncations of higher-dimensional supergravity to lower-dimensional theories, generalized fluxes, and alpha-prime corrections of string theory in the context of cosmology and black holes. == References ==
Wikipedia/Double_field_theory
In Scotland, the Industrial Revolution was the transition to new manufacturing processes and economic expansion between the mid-eighteenth century and the late nineteenth century. By the start of the eighteenth century, a political union between Scotland and England became politically and economically attractive, promising to open up the much larger markets of England, as well as those of the growing British Empire, resulting in the Treaty of Union of 1707. There was a conscious attempt among the gentry and nobility to improve agriculture in Scotland. New crops were introduced and enclosures began to displace the run rig system and free pasture. The economic benefits of union were very slow to appear, though some progress was visible, such as the sales of linen and cattle to England, the cash flows from military service, and the tobacco trade that was dominated by Glasgow after 1740. Merchants who profited from the American trade began investing in leather, textiles, iron, coal, sugar, rope, sailcloth, glass-works, breweries, and soap-works, setting the foundations for the city's emergence as a leading industrial center after 1815. The linen industry was Scotland's premier industry in the eighteenth century and formed the basis for the later cotton, jute, and woolen industries. Encouraged and subsidized by the Board of Trustees so it could compete with German products, merchant entrepreneurs became dominant in all stages of linen manufacturing and built up the market share of Scottish linens, especially in the American colonial market. Historians often emphasize that the flexibility and dynamism of the Scottish banking system contributed significantly to the rapid development of the economy in the nineteenth century. At first the leading industry, based in the west, was the spinning and weaving of cotton. After supplies of raw cotton were cut off from 1861 as a result of the American Civil War, Scottish entrepreneurs and engineers, drawing on the country's large stock of easily mined coal, diversified into engineering, shipbuilding, and locomotive construction, with steel replacing iron after 1870. As a result, Scotland became a center for engineering, shipbuilding and the production of locomotives. Scotland was already one of the most urbanized societies in Europe by 1800. Glasgow became one of the largest cities in the world, and was known as "the Second City of the Empire" after London. Dundee upgraded its harbor and established itself as an industrial and trading center. The industrial developments, while they brought work and wealth, were so rapid that housing, town-planning, and provision for public health did not keep pace with them, and for a time living conditions in some of the towns and cities were notoriously bad, with overcrowding, high infant mortality, and growing rates of tuberculosis. Owners came to support government-sponsored housing programs as well as self-help projects among the respectable working class. Even with the growth of industry there were insufficient good jobs; as a result, during the period 1841–1931, about two million Scots emigrated to North America and Australia, and another 750,000 Scots relocated to England. By the twenty-first century, there were about as many people who were Scottish Canadians and Scottish Americans as the five million remaining in Scotland. 
== Background == By the start of the eighteenth century, a political union between Scotland and England became politically and economically attractive, promising to open up the much larger markets of England, as well as those of the growing British Empire. The Scottish parliament voted on 6 January 1707, by 110 to 69, to adopt the Treaty of Union. It was a full economic union. Most of its 25 articles dealt with economic arrangements for the new state known as "Great Britain". It added 45 Scots to the 513 members of the House of Commons of Great Britain and 16 Scots to the 190 members of the House of Lords, and ended the Scottish parliament. It also replaced the Scottish systems of currency, taxation and laws regulating trade with laws made in London. England had about five times the population of Scotland at the time, and about 36 times as much wealth. Major factors that facilitated industrialisation in Scotland included cheap and abundant labour; natural resources that included coal, blackband ironstone and potential water power; the development of new technologies, among them the steam engine; and markets that would buy Scottish products. Other factors that contributed to the process included the improvement of transport links, which helped facilitate the movement of goods; an extensive banking system; and the widespread adoption of ideas about economic development with their origins in the Scottish Enlightenment. === Enlightenment === In the eighteenth century, the Scottish Enlightenment brought the country to the forefront of intellectual achievement in Europe. The focus of the Scottish Enlightenment ranged from intellectual and economic matters to the specifically scientific. Adam Smith developed and published The Wealth of Nations, the first work of modern economics. It had an immediate impact on British economic policy and still frames discussions on globalisation and tariffs. Key scientific work included the discoveries of William Cullen, physician and chemist; James Anderson, an agronomist; Joseph Black, physicist and chemist; and James Hutton, the first modern geologist. While the Scottish Enlightenment is traditionally considered to have concluded toward the end of the eighteenth century, disproportionately large Scottish contributions to British science and letters continued for another 50 years or more, thanks to such figures as James Hutton, James Watt, William Murdoch, James Clerk Maxwell and Lord Kelvin. === Agricultural revolution === After the union with England in 1707, there was a conscious attempt among the gentry and nobility to improve agriculture in Scotland. The Society of Improvers was founded in 1723, including in its 300 members dukes, earls, lairds and landlords. In the first half of the century these changes were limited to tenanted farms in East Lothian and the estates of a few enthusiasts, such as John Cockburn and Archibald Grant. Not all were successful, with Cockburn driving himself into bankruptcy, but the ethos of improvement spread among the landed classes. Haymaking was introduced along with the English plough and foreign grasses, the sowing of rye grass and clover. Turnips and cabbages were introduced, lands enclosed and marshes drained, lime was put down, roads built and woods planted. Drilling and sowing and crop rotation were introduced. The introduction of the potato to Scotland in 1739 greatly improved the diet of the peasantry. Enclosures began to displace the runrig system and free pasture. 
There was increasing specialisation, with the Lothians becoming a major centre of grain, Ayrshire of cattle breeding and the Borders of sheep. Although some estate holders improved the quality of life of their displaced workers, the Agricultural Revolution led directly to what is increasingly known as the Lowland Clearances, when hundreds of thousands of cottars and tenant farmers from central and southern Scotland were forcibly moved from the farms and small holdings their families had occupied for hundreds of years. Improvement continued in the nineteenth century. Innovations included the first working reaping machine, developed by Patrick Bell in 1828. His rival James Smith turned to improving sub-soil drainage and developed a method of ploughing that could break up the subsoil barrier without disturbing the topsoil. Previously unworkable low-lying carselands could now be brought into arable production and the result was the even Lowland landscape that still predominates. The development of Scottish agriculture meant that Scotland could support its increased population with food and it released labour that would take part in industrial production. === Banking === The first banks formed in Scotland were the Bank of Scotland (Edinburgh, 1695) and the Royal Bank of Scotland (Edinburgh, 1727). Glasgow would soon follow with banks of its own (notably, the first was to be Dunlop, Houston & Co. in 1749, known as "the Ship Bank" for the image of a ship printed on all their bills) and Scotland had a flourishing financial system by the end of the century. There were over 400 branches, amounting to one office per 7,000 people, double the level in England. The banks were more lightly regulated than those in England. Historians often emphasise that the flexibility and dynamism of the Scottish banking system contributed significantly to the rapid development of the economy in the nineteenth century. As a joint-stock company the British Linen Company had the right to raise funds through the issue of promissory notes or bonds. With its bonds functioning as bank notes, the company gradually moved into the business of lending and discounting to other linen manufacturers, and in the early 1770s banking became its main activity. === Transport === The extensive Scottish coastline meant that there were few parts of the country that were not within easy reach of sea transportation, particularly the central belt that would be the heartland of industrial development. Before the eighteenth century most roads were relatively poor dirt tracks. In the late eighteenth century there were improvements carried out by turnpike trusts and the creation of a series of military roads. Canal building also developed, with four major lowland canals: the Forth and Clyde, Union, Monkland and Crinan and further north the Paisley, Caledonian and Inverurie canals, carrying thousands of passengers and tons of goods by the early nineteenth century. === Exports === With tariffs with England abolished, the potential for trade for Scottish merchants was considerable, especially with Colonial America. However, the economic benefits of union were very slow to appear, primarily because Scotland was too poor to exploit the opportunities of the greatly expanded free market. Scotland in 1750 was still a poor rural, agricultural society with a population of 1.3 million. Furthermore, Scotland's economy had been ravaged by the Darien scheme: according to some estimates, half of all the circulating wealth in Scotland went into the scheme. 
Glasgow merchants had been particularly enthusiastic, and consequently had no ships of their own for twenty years following the disaster. Some progress was visible, such as the sales of linen and cattle to England, the cash flows from military service, and the tobacco trade that was dominated by Glasgow after 1740. The clippers belonging to the Glasgow Tobacco Lords were the fastest ships on the route to Virginia. The trade had started as smuggling during the 1600s, but with the Act of Union, it became legal and trade picked up. Merchants who profited from the American trade began investing in leather, textiles, iron, coal, sugar, rope, sailcloth, glassworks, breweries, and soapworks, setting the foundations for the city's emergence as a leading industrial centre after 1815. The tobacco trade collapsed during the American Revolution (1776–1783), when its sources were cut off by the British blockade of American ports. However, trade with the West Indies began to make up for the loss of the tobacco business, reflecting the extensive growth of the cotton industry, the British demand for sugar and the demand in the West Indies for herring and linen goods. During 1750–1815, 78 Glasgow merchants not only specialised in the importation of sugar, cotton, and rum from the West Indies, but diversified their interests by purchasing West Indian plantations, Scottish estates, or cotton mills. They were not to be self-perpetuating due to the hazards of the trade, the incidence of bankruptcy, and the changing complexity of Glasgow's economy. Other burghs also benefited. Greenock enlarged its port in 1710 and sent its first ship to the Americas in 1719, and was soon playing a major part in importing sugar and rum. === Linen === Linen manufacture was Scotland's premier industry in the eighteenth century and formed the basis for the later cotton, jute, and woollen industries. Scottish industrial policy was made by the Board of Trustees for Fisheries and Manufactures in Scotland, which sought to build an economy complementary, not competitive, with England. Since England had woollens, this meant linen. The Scottish members of parliament managed to see off an attempt to impose an export duty on linen and from 1727 it received subsidies of £2,750 a year for six years, resulting in a considerable expansion of the trade. Paisley adopted Dutch methods and became a major centre of production. Glasgow manufactured for the export trade, which doubled between 1725 and 1738. Encouraged and subsidised by the Board of Trustees, so that they could compete with German products, merchant entrepreneurs became dominant in all stages of linen manufacturing and built up the market share of Scottish linens, especially in the American colonial market. The British Linen Company, established in 1746, was the largest firm in the Scottish linen industry in the eighteenth century, exporting linen to England and America. In 1728, 2.2 million yards of linen cloth had been produced and by 1730 it had already supplanted woollen cloth as the major manufacturing industry. By 1750 output reached 7.6 million yards and it peaked at 12.1 million yards in 1775. However, there were sharp slumps, particularly in the periods 1734–43 and 1763–72. It was a mainly rural industry, with most of the manufacture carried out in homes, rather than factories. It employed perhaps 100,000 people, four out of five of whom were women who spun the flax, while men operated the looms. 
The government promoted the use of linen from the late 17th century: an act of Parliament, the Linen Act 1686 (c. 28), stated that all Scots were to be buried in Scottish-made linen winding sheets, using Scottish flax. In 1748, an embargo on the import or use of French cambric provided a further boost to the linen industry. By 1770, Glasgow was the largest linen manufacturer in Britain, and in 1787, Calton, Glasgow was the site of Scotland's first industrial dispute when 7,000 weavers went on strike in protest against a 25% cut in their wages. The 39th Foot were sent in, and three people were killed. Sheer linen, which had then come into vogue, was almost unobtainable in Scotland in the 1780s. In a bid to stay competitive, Glasgow manufacturers turned to fine cotton muslin, at which they succeeded so well that it became cheaper than imported Indian muslins. With the popularity of Indian muslins, from the 1760s onwards, had come a fashion for tambour lace, or sewed muslin, which briefly became a flourishing business in Ayrshire, thanks to the enterprising spirit of Mrs Jamieson. == Nineteenth century == The economy, long based on agriculture, began to industrialise after 1790. At first the leading industry, based in the west, was the production of cotton. After the cutting off of supplies of raw cotton from 1861 as a result of the American Civil War, the country diversified into engineering, shipbuilding, and locomotive construction, with steel replacing iron after 1870. === Cotton === From about 1790 textiles became the most important industry in the west of Scotland, especially the spinning and weaving of cotton. The first cotton spinning mill was opened at Penicuik in 1778. By 1787 Scotland had 19 mills, 95 by 1795 and there were 192 by 1839. The rise of cotton was the result of a sudden fall in the price of the raw material, most of it produced by slave labour and imported from the US, and the availability of a pool of cheap labour caused by population rise and migration. In 1775, 137,000 lb of raw cotton were being imported into the Clyde and by 1812 it had increased eightyfold to over 11 million lb. The capital invested in the industry increased sevenfold between 1790 and 1840. By 1800, cotton was the main industry in the Glasgow area: the New Lanark mills were at the time the largest in the world. Early production was aided by the new technology of the spinning mule, water frame and water power. Steam powered machines were introduced into the industry from 1782. However, only about a third of workers were employed in factories and it continued to rely heavily on the hand loom weaver, working in his own home. In 1790 there were about 10,000 weavers involved in cotton manufacture and by 1800 it was 50,000. The cotton industry flourished until in 1861 the American Civil War cut off the supplies of raw cotton. The industry never recovered, but by that time Scotland had developed heavy industries based on its coal and iron resources. === Coal === Coal mining became a major industry, and continued to grow into the twentieth century, producing the fuel to smelt iron, heat homes and factories and drive steam engines, locomotives and steamships. Coal mining expanded rapidly in the eighteenth century, reaching 700,000 tons a year by 1750. Most coal was in five fields across the Central Belt. The first Newcomen steam engine was introduced into a Scottish colliery in 1719, but water remained the most important source of power for most of the century. 
With increased demand for household fuel from a growing urban population and the emerging demands of heavy industry, production grew from an estimated 1 million tons a year in 1775, to 3 million by 1830. Production almost doubled by the 1840s and peaked in 1914 at about 42 million tons a year. Initially increased production was made possible by the introduction of cheap labour, provided from the 1830s by large numbers of Irish immigrants. There were then changes in mining practices, which included the introduction of blasting powder in the 1850s and the use of mechanised methods of transferring the coal to the surface, along with the introduction of steam power in the 1870s. Landed proprietors were replaced by profit-seeking leasehold partnerships and joint-stock companies, whose members were often involved in the emerging iron industry. By 1914 there were a million coal miners in Scotland. The stereotype emerged early on of Scottish colliers as brutish, non-religious and socially isolated serfs; that was an exaggeration, for their lifestyle resembled that of coal miners everywhere, with a strong emphasis on masculinity, egalitarianism, group solidarity, and support for radical labour movements. === Iron and steel === The invention of James Beaumont Neilson's hot blast process for smelting iron in 1828 revolutionised the Scottish iron industry, allowing the abundant native blackband ironstone to be smelted with ordinary coal. In 1830 Scotland had 27 iron furnaces, by 1840 there were 70, 143 by 1850, and the number peaked at 171 in 1860. Output was over 2,500,000 tons of iron ore in 1857, 6.5 per cent of UK output. Output of pig iron rose from 797,000 tons in 1854 to a peak of 1,206,000 tons in 1869. As a result, Scotland became a centre for engineering, shipbuilding and the production of locomotives. In the 1871 census, the workforce in heavy industry overtook textiles in the Strathclyde region and in 1891 it became the majority employer in the country. Toward the end of the nineteenth century, steel production largely replaced iron production. === Railways === Britain was the world leader in the construction of railways, and their use to expand trade and coal supplies. The first successful locomotive-powered line in Scotland, between Monkland and Kirkintilloch, opened in 1826. By the late 1830s there was a network of railways that included lines between Dundee and Arbroath, and connecting Glasgow, Paisley and Ayr. The line between Glasgow and Edinburgh, largely designed for passenger transport, opened in 1842 and proved highly successful. By the mid-1840s the mania for railways had begun. A good passenger service was established by the late 1840s, and a network of freight lines reduced the cost of shipping coal, making products manufactured in Scotland competitive throughout Britain. The North British Railway was formed in 1844 to link Edinburgh and eastern Scotland with Newcastle and the next year the Caledonian Railway began connecting Glasgow and the west to Carlisle. The creation of a dense network in the Lowlands with connections to England would take until the 1870s to complete. A series of amalgamations meant that five main companies operated 98 per cent of the system by the 1860s. The capital invested in Scottish railways was £26.6 million in 1859 and by 1900 it had reached £166.1 million. The flotation of railway companies was a major factor in the formation of the Scottish stock exchanges and the rise of shareholding in Scotland and after the early stages drew in large amounts of English investment. 
The travel time between Edinburgh or Glasgow and London was cut from 43 hours to 17, and by the 1880s it had been reduced to eight. Railways opened the London market to Scottish beef and milk. They enabled the Aberdeen Angus to become a cattle breed of worldwide reputation. === Shipbuilding === Shipbuilding on Clydeside (the river Clyde through Glasgow and other points) began when the first small yards were opened in 1712 at the Scott family's shipyard at Greenock. Major firms included Denny of Dumbarton, Scotts Shipbuilding and Engineering Company of Greenock, Lithgows of Port Glasgow, Simon and Lobnitz of Renfrew, Alexander Stephen and Sons of Linthouse, Fairfield of Govan, Inglis of Pointhouse, Barclay Curle of Whiteinch, Connell and Yarrow of Scotstoun. Equally important were the engineering firms that supplied the machinery to drive these vessels, the boilers and pumps and steering gear – Rankin & Blackmore, Hastie's and Kincaid's of Greenock, Rowan's of Finnieston, Weir's of Cathcart, Howden's of Tradeston and Babcock & Wilcox of Renfrew. The biggest customer was Sir William Mackinnon, who ran five shipping companies in the nineteenth century from his base in Glasgow. The Vulcan works, owned by Robert Napier and Sons, were the first to start the production of large passenger iron ships in the 1840s. In 1835 the Clyde produced only 5 per cent of the ship tonnage built in Britain. The transition from wooden to iron ships was an uneven one, with wooden ships still cheaper to build until around 1850 and the material continued to be used until the 1860s in ships with composite hulls, like the clipper the Cutty Sark, which was launched from Dumbarton in 1869. Cost also delayed the transition from iron to steel shipbuilding and as late as 1879 only 18,000 tons of steel-built shipping was launched on the Clyde, 10 per cent of all tonnage. A similar process occurred in means of propulsion, with shifts from sail to steam and back again between the 1840s and the introduction of the more efficient triple-expansion engine, which became dominant in the mid-1880s. Tonnage increased by more than a factor of six between 1880 and 1914. Production peaked during the First World War and the term "Clyde-built" became synonymous with industrial quality. === Engineering architecture === The nineteenth century saw some major engineering projects including Thomas Telford's (1757–1834) stone Dean Bridge (1829–31) and iron Craigellachie Bridge (1812–14). In the 1850s the possibilities of new wrought- and cast-iron construction were explored in the building of commercial warehouses in Glasgow. This adopted a round-arched Venetian style first used by Alexander Kirkland (1824–92) at the heavily ornamented 37–51 Miller Street (1854) and translated into iron in John Baird I's Gardner's Warehouse (1855–56), with an exposed iron frame and almost uninterrupted glazing. Most industrial buildings avoided this cast-iron aesthetic, like William Spence's (1806?–83) Elgin Engine Works built in 1856–58, using massive rubble blocks. The most important engineering project was the Forth Bridge, a cantilever railway bridge over the Firth of Forth in the east of Scotland, 14 kilometres (9 mi) west of central Edinburgh. Construction of a suspension bridge designed by Thomas Bouch (1822–80) was stopped after the collapse of another of his works, the Tay Bridge, in 1879. The project was taken over by John Fowler (1817–98) and Benjamin Baker (1840–1907), who designed a structure that was built by Glasgow-based company Sir William Arrol & Co. 
from 1883. It was opened on 4 March 1890, and spans a total length of 2,528.7 metres (8,296 ft). It was the first major structure in Britain to be constructed of steel; its contemporary, the Eiffel Tower, was built of wrought iron. == Impact == === Population and urbanisation === The census conducted by the Reverend Alexander Webster in 1755 showed the inhabitants of Scotland as 1,265,380 persons. By the time of the first decadal census in 1801, the population was 1,608,420. It grew steadily in the nineteenth century, to 2,889,000 in 1851 and 4,472,000 in 1901. While population fell in some rural areas as a result of the agricultural revolution, it rose rapidly in the towns. Aberdeen, Dundee and Glasgow all grew by a third or more between 1755 and 1775 and the textile town of Paisley more than doubled its population. Scotland was already one of the most urbanised societies in Europe by 1800. In 1800, 17 per cent of people in Scotland lived in towns of more than 10,000 inhabitants. By 1850 it was 32 per cent and by 1900 it was 50 per cent. By 1900 one in three of the entire population lived in the four cities of Glasgow, Edinburgh, Dundee and Aberdeen. Glasgow emerged as the largest city. Its population in 1780 was 43,000, reaching 147,000 by 1820; by 1901 it had grown to 762,000. This was due to a high birth rate and immigration from the countryside and particularly from Ireland; but from the 1870s there was a fall in the birth rate and lower rates of migration and much of the growth was due to longer life expectancy. Glasgow was now one of the largest cities in the world, and it became known as "the Second City of the Empire" after London. Dundee upgraded its harbour and established itself as an industrial and trading centre. Dundee's industrial heritage was based on "the three Js": jute, jam and journalism. East-central Scotland became too heavily dependent on linens, hemp, and jute. Despite the cyclical nature of the trade which periodically ruined weaker companies, profits held up well in the nineteenth century. Typical firms were family affairs, even after the introduction of limited liability in the 1890s. The profits helped make the city an important source of overseas investment, especially in North America. However, the profits were seldom invested locally, apart from the linen trade. The reasons were that low wages limited local consumption and that there were no important natural resources; thus the Dundee region offered little opportunity for profitable industrial diversification. The industrial developments, while they brought work and wealth, were so rapid that housing, town-planning, and provision for public health did not keep pace with them, and for a time living conditions in some of the towns and cities were notoriously bad, with overcrowding, high infant mortality, and growing rates of tuberculosis. Mortality rates were high compared with England and other European nations. Evidence suggests a national death rate of 30 per 1,000 in 1755, 24 in the 1790s and 22 in the early 1860s. Mortality tended to be much higher in urban than rural settlements. When urban and rural rates were first measured, for 1861–82, the death rate in the four major cities was 28.1 per 1,000, compared with 17.9 in rural areas. Mortality probably peaked in Glasgow in the 1840s, when large inflows of population from the Highlands and Ireland combined with a population that was outgrowing sanitary provision and with outbreaks of epidemic disease. 
National rates began to fall in the 1870s, particularly in the cities, as environmental conditions improved. The companies attracted rural workers, as well as immigrants from Catholic Ireland, by inexpensive company housing that was a dramatic move upward from the inner-city slums. This paternalistic policy led many owners to support government sponsored housing programs as well as self-help projects among the respectable working class. === Class identity === One of the consequences of industrialisation and urbanisation was the development of a distinct skilled working class. W. H. Fraser argues that the emergence of a class identity can be located to the period before the 1820s, when cotton workers in particular were involved in a series of political protests and events. This led to the Radical War of 1820, in which a declaration of a provisional government by three weavers coincided with a strike by Glasgow cotton workers. The climax of the five-day war was a march from Glasgow Green to Falkirk to take control of the Carron Iron Works. It ended in a cavalry charge by government forces at Bonnymuir. The result was a discouragement of direct political action by workers, although attempts at political reform continued in movements like Chartism and the short hours movement in the 1830s. From the 1830s the political influence of the working classes was expanded through the widening of the franchise, industrial action and the growth and organisation of trade unionism. There were fewer than 5,000 eligible voters in Scotland before the 1832 Reform Act saw the enlargement of the franchise to include middle-class men of business. The 1868 Act brought in skilled artisans and that of 1884 admitted many farm workers, crofters, miners and unskilled men. These changes were supported by trade unions, which developed from the mid-century. Concerted industrial action was undertaken by spinners in the cotton industry in 1836–37 after a collapse of foreign markets led to wage cuts, but was ultimately defeated by the factory owners. The most sustained industrial action was in the mining industry, where the owners controlled employment as well as housing and retail trade through the truck system. In 1887 colliers in the west of Scotland won a major victory over both wages and rents. Scottish trade unionism in the nineteenth century differed from that in the rest of Britain in that unions were often small and highly localised and lacked higher industrial and national organisation. Trades councils were established in Edinburgh in 1853 and in Glasgow in 1858 in an attempt to organise on a regional basis, but were often ineffectual. The new unionism of the last two decades of the century saw the dockers and railwaymen organise a network of regional and national support, but this began to wane towards the end of the century, and union parochialism would remain the dominant mode until union amalgamations got under way after 1914. === Women === The industrialisation of Scotland had a major impact on the roles of women. Women and girls formed a much higher proportion of the workforce than elsewhere in Britain and were the majority of workers in some industries. The expansion of flax spinning and the rise of the linen industry in the eighteenth century was almost totally dependent on female labour and the situation was similar in the sewed muslin industry in the West of Scotland towards the end of the century. 
When flax spinning became mechanised the ratio of men to women was 100:280, the highest proportion of women in the United Kingdom. In Dundee in the 1840s, while male employment increased by a factor of 1.6, female employment went up by 2.5, making it the only large town in Scotland with the majority of its female population in paid employment. In cotton in the nineteenth century women and girls accounted for 61 per cent of the workforce in Scottish mills, compared with 50 per cent in Lancashire. Although most women were employed in the textile industries, they were also a significant proportion of the workforce in other areas, making up 12 per cent of underground workers in mining, compared with 4 per cent in Britain overall. The expanded opportunities for women, and the extra income they and children brought into the household, probably did the most to help increase living standards for working-class families. The role of women in the workforce peaked in the 1830s. As heavy industry began to dominate there were fewer opportunities for women. From the mid-nineteenth century there were a series of laws that limited female roles in industry, beginning with the Mines and Collieries Act 1842, which prevented them from working underground. This put 2,500 women out of work in the east of Scotland, causing real hardship as their contribution to the family economy was vital. This was followed by a series of factory acts that placed restrictions on the employment of women. Many of these acts were brought in because of pressure from trade unions who were attempting to secure a living wage for their male members. Women had relatively little involvement in official trade unions for much of the period of industrialisation. However, they were frequently involved in unofficial disputes, the first being recorded in 1768, and some 300 strikes involving women are known to have taken place between 1850 and 1914. Towards the end of the century there were increasing attempts to unionise women. The Scottish Women's Trade Council (SWTC) was formed in 1887. From this emerged the Women's Protective and Provident League (WPPL) and the Glasgow Council for Women's Trades (GCWT). In 1893, the National Federal Council of Scotland for Women's Trades (NFCSWT) was established and the Scottish Council for Women's Trades (SCWT) in 1900. By 1895 the NFCSWT alone had an affiliated membership of 100,000. === Migration === The growth of industry resulted in the arrival of large numbers of workers from Ireland who moved into the factories and mines in the 1830s and 1840s. Many were seasonal workers employed as navvies on the construction of docks, canals and then railways. An estimated 60–70 per cent of colliers in Lanarkshire were Irish by the 1840s. The arrivals intensified with the Great Famine of 1845. By the census of 1841, 126,321 people, or 4.6 per cent of Scotland's population, had been born in Ireland and many more were of Irish descent. Most were concentrated in the west of Scotland, and in Glasgow there were 44,000 people who were born in Ireland, 16 per cent of the city's population. Most Irish immigrants, about three quarters, were Catholic, leading to a major cultural and religious change in Scotland, but a quarter were Protestant, eventually bringing with them institutions like the Orange Order and intensifying a sectarian divide in the major cities. 
Even with the growth of industry there were insufficient good jobs, which, together with major changes in agriculture, meant that during the period 1841–1931, about two million Scots emigrated to North America and Australasia, and another 750,000 Scots relocated to England. Of those who migrated to non-European locations in the century before 1914, 44 per cent went to the US, 28 per cent to Canada, and 25 per cent to Australia and New Zealand. Other important locations included the Caribbean, India and South Africa. By the twenty-first century, there were about as many people who were Scottish Canadians and Scottish Americans as the five million remaining in Scotland. There was little support from the government and in the early stages many migrants agreed to indentures, particularly to the Thirteen Colonies, that paid for their passage and guaranteed accommodation and work for five or seven years. Later emigration was assisted by agents and societies, such as the Salvation Army, Barnardo's and the Aberdeen Ladies Union, which often focused on young or female emigrants. == Notes ==
Wikipedia/Industrial_Revolution_in_Scotland
The 1970s energy crisis occurred when the Western world, particularly the United States, Canada, Western Europe, Australia, and New Zealand, faced substantial petroleum shortages as well as elevated prices. The two worst crises of this period were the 1973 oil crisis and the 1979 energy crisis, when, respectively, the Yom Kippur War and the Iranian Revolution triggered interruptions in Middle Eastern oil exports. The crisis began to unfold as petroleum production in the United States and some other parts of the world peaked in the late 1960s and early 1970s. World oil production per capita began a long-term decline after 1979. The oil crises prompted the first shift towards energy-saving (in particular, fossil fuel-saving) technologies. The major industrial centers of the world were forced to contend with escalating issues related to petroleum supply. Western countries relied on the resources of countries in the Middle East and other parts of the world. The crisis led to stagnant economic growth in many countries as oil prices surged. Although there were genuine concerns with supply, part of the run-up in prices resulted from the perception of a crisis. The combination of stagnant growth and price inflation during this era led to the coinage of the term stagflation. By the 1980s, both the recessions of the 1970s and adjustments in local economies to become more efficient in petroleum usage controlled demand sufficiently for petroleum prices worldwide to return to more sustainable levels. The period was not uniformly negative for all economies. Petroleum-rich countries in the Middle East benefited from increased prices and the slowing production in other areas of the world. Some other countries, such as Norway, Mexico, and Venezuela, benefited as well. In the United States, Texas and Alaska, as well as some other oil-producing areas, experienced major economic booms due to soaring oil prices even as most of the rest of the nation struggled with the stagnant economy. Many of these economic gains, however, came to a halt as prices stabilized and dropped in the 1980s. == Key periods == === Arab-Israeli conflict === Conflict between Arabs and Israelis in the Middle East has existed since Israel's declaration of independence in 1948, including a number of wars. The Suez Crisis, also known as the Second Arab–Israeli war, was sparked when Israel's southern port of Eilat was blocked by Egypt, which also nationalized the Suez Canal belonging to Anglo-French investors. One of the objectives of the invasion was the removal of President Gamal Abdel Nasser, who was aligning with the Soviet Union. The Six-Day War of 1967 included an Israeli invasion of the Egyptian Sinai Peninsula, which resulted in Egypt's closure of the Suez Canal for eight years. The canal was cleared in 1974 and opened again in 1975 after the 1973 Yom Kippur War, when Egypt tried to take back the Sinai. OAPEC countries cut production of oil and placed an embargo on oil exports to the United States after Richard Nixon requested $2.2 billion to support Israel in the war. Nevertheless, the embargo lasted only until January 1974, though the price of oil remained high afterwards. === Production peaks around 1970 === The real price of petroleum was stable around 1970, but there had been a sharp increase in American imports, putting a strain on the American balance of trade, as it did on those of other developed nations. 
During the 1960s, petroleum production in some of the world's top producers with extraction technology at the time began to peak. West Germany reached its production peak in 1966, Venezuela and the United States in 1970, and Iran in 1974. Canada's conventional oil production peaked around this same time (though non-conventional production later helped revive Canadian production to some degree). The worldwide production per capita peaked soon afterward. Although production in other parts of the world was increasing, the peaks in these regions began to put substantial upward pressure on world oil prices. Equally as important, control of the oil supply became an increasingly important problem as countries like West Germany and the U.S. became increasingly dependent on foreign suppliers for this key resource. === 1973 oil crisis === The 1973 oil crisis was a direct consequence of the US production peak around 1970 and the beginning of 1971 (and the shortages, especially of heating oil, that started from there). The "embargo" described below is the popular name given to the crisis. For the main Arab producers, the "embargo" allowed them to show to "the Arab street" that they were doing something for the Palestinians. In real market terms (number of barrels) the embargo was almost a non-event, and applied only from a few countries towards a few countries. The embargo was never effective from Saudi Arabia towards the US, as reported by James E. Akins in an interview at 24:10 in the documentary "La face cachée du pétrole", part 2. Akins, who audited US production capacity for Nixon after the US peak, was US ambassador to Saudi Arabia at that time. Lawrence Rocks and Richard Runyon captured the unfolding of these events at the time in their book The Energy Crisis. In October 1973, the members of the Organization of Arab Petroleum Exporting Countries, or OAPEC (consisting of the Arab members of OPEC), proclaimed an oil embargo "in response to the U.S. decision to re-supply the Israeli military" during the Yom Kippur war; it lasted until March 1974. OAPEC declared it would limit or stop oil shipments to the United States and other countries if they supported Israel in the conflict. With the US actions seen as initiating the oil embargo, the long-term prospect of embargo-related high oil prices, disrupted supply and recession created a strong rift within NATO; both European countries and Japan sought to disassociate themselves from US Middle East policy. Arab oil producers had also linked the end of the embargo with successful US efforts to create peace in the Middle East, which complicated the situation. To address these developments, the Nixon Administration began parallel negotiations with both Arab oil producers to end the embargo, and with Egypt, Syria, and Israel to arrange an Israeli pull back from the Sinai and the Golan Heights after the fighting stopped. By January 18, 1974, Secretary of State Henry Kissinger had negotiated an Israeli troop withdrawal from parts of the Sinai. The promise of a negotiated settlement between Israel and Syria was sufficient to convince Arab oil producers to lift the embargo in March 1974. By May, Israel agreed to withdraw from the Golan Heights. Independently, the OPEC members agreed to use their leverage over the world price-setting mechanism for oil to stabilize their real incomes by raising world oil prices. This action followed several years of declining real income and the recent failure of negotiations with the major Western oil companies earlier in the month. 
For the most part, industrialized economies relied on crude oil, and OPEC was a major supplier. Because of the dramatic inflation experienced during this period, a popular economic theory has been that these price increases were to blame, suppressing economic activity. However, the causality stated by this theory is often questioned. The targeted countries responded with a wide variety of new, and mostly permanent, initiatives to contain their further dependency. The 1973 "oil price shock", along with the 1973–1974 stock market crash, has been regarded as the first event since the Great Depression to have a persistent economic effect. === 1979 energy crisis === A crisis emerged in the United States in 1979 in the wake of the Iranian Revolution. Amid massive protests, the Shah of Iran, Mohammad Reza Pahlavi, fled his country in early 1979, allowing the Ayatollah Khomeini to gain control. The protests shattered the Iranian oil sector. While the new regime resumed oil exports, they were inconsistent and at a lower volume, forcing prices to go up. Saudi Arabia and other OPEC nations, under the presidency of Dr. Mana Alotaiba, increased production to offset the decline, and the overall loss in production was about 4 percent. However, a widespread panic resulted, driving the price far higher than would be expected under normal circumstances. In 1980, following the Iraqi invasion of Iran, oil production in Iran nearly stopped, and Iraq's oil production was severely cut as well. After 1980, oil prices began a decline as other countries began to fill the production shortfalls from Iran and Iraq. === 1980s oil glut === The 1973 and 1979 energy crises had caused petroleum prices to peak in 1980 at over US$35 per barrel (US$134 in today's dollars). Following these events, slowing industrial economies and the stabilization of supply and demand caused prices to begin falling in the 1980s. The glut began in the early 1980s as a result of slowed economic activity in industrial countries (due to the 1973 and 1979 energy crises) and the energy conservation spurred by high fuel prices. The inflation-adjusted real 2004 dollar value of oil fell from an average of $78.2 per barrel in 1981 to an average of $26.8 in 1986. In June 1981, The New York Times stated that an "Oil glut! ... is here" and Time Magazine stated: "the world temporarily floats in a glut of oil", though the next week a New York Times article warned that the word "glut" was misleading, and that in reality, while temporary surpluses had brought down prices somewhat, prices were still well above pre-energy crisis levels. This sentiment was echoed in November 1981, when the CEO of Exxon also characterized the glut as a temporary surplus, and said that the word "glut" was an example of "our American penchant for exaggerated language". He wrote that the main cause of the glut was declining consumption. In the United States, Europe and Japan, oil consumption had fallen 13% from 1979 to 1981, "in part, in reaction to the very large increases in oil prices by the Organization of Petroleum Exporting Countries and other oil exporters", continuing a trend begun during the 1973 price increases. After 1980, reduced demand and overproduction produced a glut on the world market, causing a six-year-long decline in oil prices culminating with a 46 percent price drop in 1986. == Effects == === Recession === The U.S. 
reported negative economic growth during the 1970s, and growth remained weak into the 1980s as the post-World War II economic boom drew to a close. But it was a different type of recession: a scenario of stagflation, which is a rare economic outcome. Other causes that contributed to the recession included the Vietnam War, which turned out costly for the United States, and the fall of the Bretton Woods system. The emergence of newly industrialized countries increased competition in the metal industry, triggering a steel crisis, in which industrial core areas in North America and Europe were forced to re-structure. The 1973–1974 stock market crash made the recession evident. According to the National Bureau of Economic Research, the U.S. economy slid into a recession during the period 1973–75. Inflation levels remained high even when an economic expansion took place afterwards. During this recession, the Gross Domestic Product of the United States fell 3.2%. Although the recession ended in March 1975, the unemployment rate did not peak for several months. In May 1975, the rate reached its peak for the cycle of 9%. The recession also lasted from 1973 to 1975 in the United Kingdom. GDP declined by 3.9% or 3.37%, depending on the source. It took 14 quarters for the UK's GDP to recover to its level at the start of the recession. === Emergence of new oil producers === High oil prices in the 1970s induced investment in oil production by non-OPEC countries, particularly for reserves with a higher cost of production. These included Prudhoe Bay in Alaska, the North Sea offshore fields of the United Kingdom and Norway, the Cantarell offshore field of Mexico, and oil sands in Canada. === Strategic petroleum reserves === As a result of the 1973 crisis many nations created strategic petroleum reserves (SPRs), crude oil inventories (or stockpiles) held by the governments of particular countries or private industry, for the purpose of providing economic and national security during an energy crisis. The International Energy Agency (IEA) was formed in the wake of this crisis and currently comprises 31 member countries. According to the IEA, approximately 4.1 billion barrels (650,000,000 m3) of oil are held in strategic reserves by the member countries, of which 1.4 billion barrels (220,000,000 m3) is government-controlled. The remainder is held by private industry. These reserves are intended to be equivalent to at least 90 days of net imports. Currently the U.S. Strategic Petroleum Reserve is one of the largest government-owned reserves, with a capacity of up to 713.5 million barrels (113,440,000 m3). In recent years, other non-IEA countries have begun creating their own strategic petroleum reserves, with China being the second largest overall and the largest non-IEA country. === Middle East === Since Israel's declaration of independence in 1948 the state has found itself in nearly continual conflict with the Arab world and some other predominantly Muslim countries. The animosity between the Arabs and the Israelis became a global issue during the 1970s. The Yom Kippur War of 1973, with the supplying of Israel by its Western allies while some Arab states received Soviet supplies, made this one of the most internationally threatening confrontations of the period. The large oil discoveries in the Middle East gave some Muslim countries unique leverage in the world, beginning in the 1960s. 
The 1973 and 1979 crises, in particular, were demonstrations of the new power that these countries had found. The United States and other countries were forced to become more involved in the conflicts between these states and Israel. === OPEC === One of the first challenges OPEC faced in the 1970s was the United States' unilaterally pulling out of the Bretton Woods Accord and taking the U.S. off the established Gold Exchange Standard in 1971. The change resulted in instability in world currencies and depreciation of the value of the U.S. dollar, as well as other currencies. The revenues of OPEC also took a hit, since it priced oil in dollars. Eventually OPEC started pricing oil against gold to counter the situation. But OPEC still struggled to maintain stability in the region, as negotiations between it and the Western oil companies bore little to no positive result. === "Oil Patch" === The major oil-producing regions of the U.S.—Texas, Oklahoma, Louisiana, Colorado, Wyoming, and Alaska—benefited greatly from the price inflation of the 1970s, as did the U.S. oil industry in general. Oil prices generally increased throughout the decade; between 1978 and 1980 the price of West Texas Intermediate crude oil increased 250 percent. Although all states felt the effects of the stock market crash and related national economic problems, the economic benefits of increased oil revenue in the Oil Patch states generally offset much of this. === Energy mix === Following the 1970s, global energy consumption per capita broke away from its previous trend of rapid growth, instead remaining relatively flat for multiple decades until the next century, with the rise of large Asian economies like China. In the meantime the use of nuclear energy picked up, but after the Chernobyl disaster its growth stalled in the 1990s, and its place was taken by the re-accelerated growth of natural gas, the renewed growth of coal use after almost a century of stagnation, and the growth of other alternative energy sources. == See also == Energy crisis 1973–75 recession 1979 world oil market chronology 1980s oil glut 1990 oil price shock 2020s commodities boom Hubbert peak theory International Energy Forum == References ==
Wikipedia/1970s_energy_crisis
Textile manufacture during the British Industrial Revolution was centred in south Lancashire and the towns on both sides of the Pennines in the United Kingdom. The main drivers of the Industrial Revolution were textile manufacturing, iron founding, steam power, oil drilling, the discovery of electricity and its many industrial applications, the telegraph and many others. Railroads, steamboats, the telegraph and other innovations massively increased worker productivity and raised standards of living by greatly reducing time spent during travel, transportation and communications. Before the 18th century, the manufacture of cloth was performed by individual workers, in the premises in which they lived and goods were transported around the country by packhorses or by river navigations and contour-following canals that had been constructed in the early 18th century. In the mid-18th century, artisans were inventing ways to become more productive. Silk, wool, and linen fabrics were being eclipsed by cotton which became the most important textile. Innovations in carding and spinning enabled by advances in cast iron technology resulted in the creation of larger spinning mules and water frames. The machinery was housed in water-powered mills on streams. The need for more power stimulated the production of steam-powered beam engines, and rotative mill engines transmitting the power to line shafts on each floor of the mill. Surplus power capacity encouraged the construction of more sophisticated power looms working in weaving sheds. The scale of production in the mill towns round Manchester created a need for a commercial structure; for a cotton exchange and warehousing. The technology was used in woollen and worsted mills in the West Yorkshire and elsewhere. == Elements of the Industrial Revolution == The commencement of the Industrial Revolution is closely linked to a small number of innovations, made in the second half of the 18th century: John Kay's 1733 flying shuttle enabled cloth to be woven faster, of a greater width, and for the process to later be mechanised. Cotton spinning using Richard Arkwright's water frame, James Hargreaves' Spinning Jenny, and Samuel Crompton's Spinning Mule (a combination of the Spinning Jenny and the Water Frame). This was patented in 1769 and so came out of patent in 1783. The end of the patent was rapidly followed by the erection of many cotton mills. Similar technology was subsequently applied to spinning worsted yarn for various textiles and flax for linen. The improved steam engine invented by James Watt and patented in 1775 was initially mainly used for pumping out mines, for water supply systems and to a lesser extent to power air blast for blast furnaces, but from the 1780s was applied to power machines. This enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available or not steady throughout the seasons. Early steam engines had poor speed control, which caused thread breakage, limiting their use in operations like spinning; however, this problem could be overcome by using the engine to pump water over a water wheel to drive the machinery. In the Iron industry, coke was finally applied to all stages of iron smelting, replacing charcoal. 
This had been achieved much earlier for lead and copper as well as for producing pig iron in a blast furnace, but the second stage in the production of bar iron depended on the use of potting and stamping (for which a patent expired in 1786) or puddling (patented by Henry Cort in 1783 and 1784). Using a steam engine to power blast air to blast furnaces made higher furnace temperatures possible, which allowed the use of more lime to tie up sulfur in coal or coke. The steam engine also overcame the shortage of water power for iron works. Iron production surged after the 1750s when steam engines were increasingly employed in iron works. These represent three 'leading sectors', in which there were key innovations, which allowed the economic takeoff by which the Industrial Revolution is usually defined. Later inventions such as the power loom and Richard Trevithick's high pressure steam engine were also important in the growing industrialisation of Britain. The application of steam engines to powering cotton mills and ironworks enabled these to be built in places that were most convenient because other resources were available, rather than where there was water to power a watermill. == Industry and invention == Before the 1760s, textile production was a cottage industry using mainly flax and wool. A typical weaving family would own one handloom, which would be operated by the man with the help of a boy; the wife, girls and other women could make sufficient yarn for that loom. The knowledge of textile production had existed for centuries. India had a textile industry that used cotton, from which it manufactured cotton textiles. When raw cotton was exported to Europe it could be used to make fustian. Two systems had developed for spinning: the simple wheel, which used an intermittent process, and the more refined Saxony wheel, which drove a differential spindle and flyer with a heck that guided the thread onto the bobbin, as a continuous process. This was satisfactory for use on handlooms, but neither of these wheels could produce enough thread for the looms after the invention by John Kay in 1734 of the flying shuttle, which made the loom twice as productive. Cloth production moved away from the cottage into manufactories. The first moves towards manufactories called mills were made in the spinning sector. The move in the weaving sector was later. By the 1820s, all cotton, wool, and worsted was spun in mills; but this yarn went to outworking weavers who continued to work in their own homes. A mill that specialized in weaving fabric was called a weaving shed. == Early inventions == === East India Company === During the second half of the 17th century, the newly established factories of the East India Company in South Asia started to produce finished cotton goods in quantity for the British market. The imported Calico and chintz garments competed with, and acted as a substitute for, the indigenous wool and linen produce, resulting in local weavers, spinners, dyers, shepherds and farmers petitioning their MPs, and in turn Parliament, for a ban on the importation, and later the sale, of woven cotton goods, which they eventually achieved via the 1700 and 1721 Calico Acts. The acts banned the importation and later the sale of finished pure cotton produce, but did not restrict the importation of raw cotton, or sale or production of Fustian. 
The exemption of raw cotton saw two thousand bales of cotton being imported annually, from Asia and the Americas, and forming the basis of a new indigenous industry, initially producing Fustian for the domestic market, though more importantly triggering the development of a series of mechanised spinning and weaving technologies, to process the material. This mechanised production was concentrated in new cotton mills, which slowly expanded till by the beginning of the 1770s seven thousand bales of cotton were imported annually, and pressure was put on Parliament, by the new mill owners, to remove the prohibition on the production and sale of pure cotton cloth, as they wished to compete with the EIC imports. Indian cotton textiles, mainly those from Bengal, continued to maintain a competitive advantage up until the 19th century. In order to compete with Indian goods, British merchants invested in labour-saving technical advancements, while the government implemented protectionist policies such as bans and tariffs to restrict Indian imports. Britain eventually surpassed India as the world's leading cotton textile manufacturer in the 19th century. === Britain === During the 18th and 19th centuries, much of the imported cotton came from plantations in the American South. In periods of political uncertainty in North America, during the Revolutionary War and later American Civil War, however, Britain relied more heavily on imports from the Indian subcontinent to supply its cotton manufacturing industry. Ports on the west coast of Britain, such as Liverpool, Bristol, and Glasgow, became important in determining the sites of the cotton industry. Lancashire became a centre for the nascent cotton industry because the damp climate was better for spinning the yarn. As the cotton thread was not strong enough to use as warp, wool or linen or fustian had to be used. Lancashire was an existing wool centre. Likewise, Glasgow benefited from the same damp climate. The early advances in weaving had been halted by the lack of thread. The spinning process was slow and the weavers needed more cotton and wool thread than their families could produce. In the 1760s, James Hargreaves improved thread production when he invented the Spinning Jenny. By the end of the decade, Richard Arkwright had developed the water frame. This invention had two important consequences: it improved the quality of the thread, which meant that the cotton industry was no longer dependent on wool or linen to make the warp, and it took spinning away from the artisans' homes to specific locations where fast-flowing streams could provide the water power needed to drive the larger machines. The Western Pennines of Lancashire became the centre for the cotton industry. Not long after the invention of the water frame, Samuel Crompton combined the principles of the Spinning Jenny and the Water Frame to produce his Spinning Mule. This provided even tougher and finer cotton thread. The textile industry was also to benefit from other developments of the period. As early as 1691, Thomas Savery had made a vacuum steam engine. His design, which was unsafe, was improved by Thomas Newcomen in 1698. In 1765, James Watt further modified Newcomen's engine to design an external condenser steam engine. Watt continued to make improvements on his design, producing a separate condenser engine in 1774 and a rotating separate condensing engine in 1781. 
Watt formed a partnership with businessman Matthew Boulton, and together they manufactured steam engines which could be used by industry. Prior to the 1780s, most of the fine quality cotton muslin in circulation in Britain had been manufactured in India. Due to advances in technique, British "mull muslin" was able to compete in quality with Indian muslin by the end of the 18th century. === Timeline of inventions === In 1734 in Bury, Lancashire, John Kay invented the flying shuttle — one of the first of a series of inventions associated with the cotton industry. The flying shuttle increased the width of cotton cloth and speed of production of a single weaver at a loom. Resistance by workers to the perceived threat to jobs delayed the widespread introduction of this technology, even though the higher rate of production generated an increased demand for spun cotton. In 1738, Lewis Paul (one of the community of Huguenot weavers that had been driven out of France in a wave of religious persecution) settled in Birmingham and, with John Wyatt of that town, patented the Roller Spinning machine and the flyer-and-bobbin system for drawing wool to a more even thickness. Using two sets of rollers that travelled at different speeds, yarn could be twisted and spun quickly and efficiently. This was later used in the first cotton spinning mill during the Industrial Revolution. 1742: Paul and Wyatt opened a mill in Birmingham which used their new rolling machine powered by a donkey; this was not profitable and was soon closed. 1743: A factory opened in Northampton; fifty spindles turned on five of Paul and Wyatt's machines, proving more successful than their first mill. This operated until 1764. 1748: Lewis Paul invented the hand-driven carding machine. A coat of wire slips was placed around a card, which was then wrapped around a cylinder. Lewis's invention was later developed and improved by Richard Arkwright and Samuel Crompton, although this came about under great suspicion after a fire at Daniel Bourn's factory in Leominster, which specifically used Paul and Wyatt's spindles. Bourn produced a similar patent in the same year. 1758: Paul and Wyatt, based in Birmingham, improved their roller spinning machine and took out a second patent. Richard Arkwright later used this as the model for his water frame. === Start of the Revolution === The Duke of Bridgewater's canal connected Manchester to the coal fields of Worsley. It was opened in July 1761. Matthew Boulton opened the Soho Manufactory in Handsworth, Birmingham in 1762. These were both events that enabled cotton mill construction and the move away from home-based production. In 1764, Thorp Mill, the first water-powered cotton mill in the world, was constructed at Royton, Lancashire, England. It was used for carding cotton. The multiple spindle spinning jenny was invented in 1764. James Hargreaves is credited as the inventor. This machine increased the thread production capacity of a single worker — initially eightfold and subsequently much further. Others credit the original invention to Thomas Highs. Industrial unrest forced Hargreaves to leave Blackburn, but more importantly for him, his unpatented idea was exploited by others. He finally patented it in 1770. As a result, there were over 20,000 spinning jennies in use (mainly unlicensed) by the time of his death. Richard Arkwright's first spinning mill, Cromford Mill, Derbyshire, was built in 1771. It contained his invention the water frame. 
The water frame was developed from the spinning frame that Arkwright had developed with (a different) John Kay, from Warrington. The original design was again claimed by Thomas Highs, who asserted that he had patented it in 1769. Arkwright used waterwheels to power the textile machinery. His initial attempts at driving the frame had used horse power, but a mill needed far more power. Using a waterwheel demanded a location with a ready supply of water, hence the mill at Cromford. This mill is preserved as part of the Derwent Valley Mills. Arkwright generated jobs and constructed accommodation for the workers he moved into the area, which led to a sizeable industrial community. Arkwright protected his investment from industrial rivals and potentially disruptive workers. This model worked, and he expanded his operations to other parts of the country. Matthew Boulton's partnership with Scottish engineer James Watt resulted, in 1775, in the commercial production of the more efficient Watt steam engine, which used a separate condenser. Samuel Crompton of Bolton combined elements of the spinning jenny and water frame in 1779, creating the spinning mule. This mule produced a stronger thread than the water frame could. Thus in 1780, there were two viable hand-operated spinning systems that could be easily adapted to run by water power. Early mules were suitable for producing yarn for use in the manufacture of muslin, and were known as the muslin wheel or the Hall i' th' Wood (pronounced Hall-ith-wood) wheel. As with Kay and Hargreaves, Crompton was not able to exploit his invention for his own profit, and died a pauper. In 1783 a mill was built in Manchester at Shudehill, at the highest point in the city away from the river. Shudehill Mill was powered by a 30 ft diameter waterwheel. Two storage ponds were built, and water passing from one to the other turned the wheel. A steam-driven pump returned the water to the higher reservoir; the steam engine was of the atmospheric type. An improvement devised by Joshua Wrigley, trialled in Chorlton-upon-Medlock, used two Savery engines to supplement the river in driving an overshot waterwheel. In 1784, Edmund Cartwright invented the power loom, and produced a prototype in the following year. His initial venture to exploit this technology failed, although his advances were recognised by others in the industry. Others, such as Robert Grimshaw (whose factory was destroyed in 1790 as part of the growing reaction against the mechanization of the industry) and Austin, developed the ideas further. In the 1790s industrialists, such as John Marshall at Marshall's Mill in Leeds, started to work on ways to apply some of the techniques which had proved so successful in cotton to other materials, such as flax. In 1803, William Radcliffe invented the dressing frame, patented under the name of Thomas Johnson, which enabled power looms to operate continuously. === Later developments === With the Cartwright loom, the spinning mule and the Boulton & Watt steam engine, the pieces were in place to build a mechanised textile industry. From this point there were no new inventions, but a continuous improvement in technology as mill-owners strove to reduce cost and improve quality. Developments in the transport infrastructure (the canals and, after 1831, the railways) facilitated the import of raw materials and the export of finished cloth.
The use of water power to drive mills was supplemented by steam-driven water pumps, and then superseded completely by steam engines. For example, Samuel Greg joined his uncle's firm of textile merchants, and, on taking over the company in 1782, he sought out a site to establish a mill. Quarry Bank Mill was built on the River Bollin at Styal in Cheshire. It was initially powered by a water wheel, but steam engines were installed in 1810. In 1830, the average power of a mill engine was 48 hp, but Quarry Bank Mill installed a new 100 hp water wheel. This was to change in 1836, when Horrocks & Nuttall of Preston took delivery of a 160 hp double engine. William Fairbairn addressed the problem of line-shafting and was responsible for improving the efficiency of the mill. In 1815 he replaced the wooden turning shafts that drove the machines at 50 rpm with wrought iron shafting working at 250 rpm; these shafts were a third of the weight of the previous ones and absorbed less power. The mill operated until 1959. ==== Roberts' power loom ==== In 1830, using an 1822 patent, Richard Roberts manufactured the first loom with a cast-iron frame, the Roberts Loom. In 1842 James Bullough and William Kenworthy made the Lancashire Loom, a semi-automatic power loom: although it is self-acting, it has to be stopped to recharge empty shuttles. It was the mainstay of the Lancashire cotton industry for a century, until the Northrop Loom, invented in 1894 with an automatic weft replenishment function, gained ascendancy. ==== Roberts' self-acting mule ==== Also in 1830, Richard Roberts patented the first self-acting mule. The Stalybridge mule spinners' strike of 1824 had stimulated research into the problem of applying power to the winding stroke of the mule. The draw while spinning had already been assisted by power, but the push of the wind (the winding stroke) had been done manually by the spinner; with the self-actor, the mule could be operated by semi-skilled labour. Before 1830, the spinner would operate a partially powered mule with a maximum of 400 spindles; afterwards, self-acting mules with up to 1,300 spindles could be built. The savings that could be made with this technology were considerable. A worker spinning cotton at a hand-powered spinning wheel in the 18th century would take more than 50,000 hours to spin 100 lb of cotton; by the 1790s, the same quantity could be spun in 300 hours by mule, and with a self-acting mule it could be spun by one worker in just 135 hours (the short calculation below makes these speed-ups explicit). == Working practices == The nature of work changed during industrialisation from a craft production model to a factory-centric model; these changes took place mainly between 1761 and 1850. Textile factories organized workers' lives much differently from craft production. Handloom weavers worked at their own pace, with their own tools, and within their own cottages. Factories set hours of work, and the machinery within them shaped the pace of work. Factories brought workers together within one building to work on machinery that they did not own. Factories also increased the division of labour: they narrowed the number and scope of tasks and included children and women within a common production process. As Manchester mill owner Friedrich Engels decried, the family structure itself was "turned upside down" as women's wages undercut men's, forcing men to "sit at home" and care for children while the wife worked long hours.
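The productivity figures quoted above for spinning 100 lb of cotton imply very large multiples. As a rough illustration only, using just the hours stated in the text and ignoring differences in thread quality, labour organisation and capital cost, the speed-ups work out as follows:

<syntaxhighlight lang="python">
# Hours needed to spin 100 lb of cotton, as quoted in the text above.
hand_wheel_hours = 50_000        # 18th-century hand-powered spinning wheel ("more than 50,000 hours")
mule_hours = 300                 # spinning mule, by the 1790s
self_acting_mule_hours = 135     # self-acting mule, single worker

for name, hours in [("mule", mule_hours), ("self-acting mule", self_acting_mule_hours)]:
    speedup = hand_wheel_hours / hours
    print(f"{name}: {hours} hours per 100 lb, roughly {speedup:.0f} times faster than hand spinning")

# Approximate output:
#   mule: 300 hours per 100 lb, roughly 167 times faster than hand spinning
#   self-acting mule: 135 hours per 100 lb, roughly 370 times faster than hand spinning
</syntaxhighlight>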
Factories flourished over manual craftsmanship because they produced more output per worker, keeping prices down for the public, and because their product quality was much more consistent. Work-discipline was forcefully instilled upon the workforce by the factory owners, and Engels found that the working conditions were poor and that poverty levels were at an unprecedented high. He was appalled, and his research in Derby played a large role in his and Marx's book 'Das Kapital'. At times, the workers rebelled against poor wages. The first major industrial action in Scotland was that of the Calton weavers in Glasgow, who went on strike for higher wages in the summer of 1787. In the ensuing disturbances, troops were called in to keep the peace and three of the weavers were killed. There was continued unrest. In Manchester in May 1808, 15,000 protesters gathered on St George's Fields and were fired on by dragoons, with one man dying. A strike followed, but was eventually settled by a small wage increase. In the general strike of 1842, half a million workers demanded the Charter and an end to pay cuts. Again, troops were called in to keep the peace, and the strike leaders were arrested, but some of the workers' demands were met. The early textile factories employed a large share of children, but the share declined over time. In England and Scotland in 1788, two-thirds of the workers in 143 water-powered cotton mills were described as children. Sir Robert Peel, a mill owner turned reformer, promoted the 1802 Health and Morals of Apprentices Act, which was intended to prevent pauper children from working more than 12 hours a day in mills. Children had started in the mills at around the age of four, working as mule scavengers under the working machinery until they were eight; they then progressed to working as little piecers, which they did until they were 15. During this time they worked 14 to 16 hours a day, being beaten if they fell asleep. The children were sent to the mills of Derbyshire, Yorkshire and Lancashire from the workhouses in London and other towns in the south of England. A well-documented example was that of Litton Mill. Further legislation followed. By 1835, the share of the workforce under 18 years of age in cotton mills in England and Scotland had fallen to 43%. About half of workers in Manchester and Stockport cotton factories surveyed in 1818 and 1819 had begun work at under ten years of age. Most of the adult workers in cotton factories in mid-19th-century Britain were workers who had begun work as child labourers. The growth of this experienced adult factory workforce helps to account for the shift away from child labour in textile factories. == A representative early spinning mill 1771 == Cromford Mill was an early Arkwright mill and was the model for future mills. The site at Cromford had a year-round supply of warm water from the sough which drained water from nearby lead mines, together with another brook. It was a five-storey mill. Starting in 1772, the mills ran day and night with two 12-hour shifts. It started with 200 workers, more than the locality could provide, so Arkwright built housing for them nearby, one of the first manufacturers to do so. Most of the employees were women and children, the youngest being only 7 years old. Later, the minimum age was raised to 10 and the children were given 6 hours of education a week, so that they could do the record keeping their illiterate parents could not.
The first stage of the spinning process is carding. Initially this was done by hand, but in 1775 Arkwright took out a second patent for a water-powered carding machine, and this led to increased output. He was soon building further mills on this site and eventually employed 1,000 workers at Cromford. By the time of his death in 1792, he was the wealthiest untitled person in Britain. The gate to Cromford Mill was shut at precisely 6 am and 6 pm every day, and any worker who failed to get through it not only lost a day's pay but was fined another day's pay. In 1779, Arkwright installed a cannon, loaded with grapeshot, just inside the factory gate, as a warning to would-be rioting textile workers, who had burned down another of his mills in Birkacre, Lancashire. The cannon was never used. The mill structure is classified as a Grade I listed building; it was first listed in June 1950. == A representative mid-century spinning mill 1840 == Brunswick Mill, Ancoats, is a cotton spinning mill in Ancoats, Manchester, Greater Manchester. It was built around 1840, part of a group of mills built along the Ashton Canal, and at that time it was one of the country's largest mills. It was built round a quadrangle, with a seven-storey block facing the canal. It was taken over by the Lancashire Cotton Corporation in the 1930s and passed to Courtaulds in 1964. Production finished in 1967. The Brunswick mill was built around 1840 in one phase. The main seven-storey block that faces the Ashton Canal was used for spinning. The preparation was done on the second floor, and the self-acting mules with 400 spindles were arranged transversely on the floors above. The wings contained the blowing rooms, some spinning and ancillary processes like winding. The four-storey range facing Bradford Road was used for warehousing and offices. The mill was built by David Bellhouse, but it is suspected that William Fairbairn was involved in the design. It is built from brick, and has slate roofs. Fireproof internal construction was by then standard: Brunswick was built using cast iron columns and beams, each floor was vaulted with transverse brick arches, and there was no wood in the structure. It was powered by a large double beam engine. In 1850 the mill had some 276 carding machines, 77,000 mule spindles, 20 drawing frames, 50 slubbing frames and 81 roving frames. The structure was sound, and the mill successfully converted to ring spinning in the 1920s and was the first mill to adopt mains electricity as its principal source of power. The mill structure was classified as a Grade II listed building in June 1994. == Export of technology == While profiting from expertise arriving from overseas (e.g. Lewis Paul), Britain was very protective of home-grown technology. In particular, engineers with skills in constructing the textile mills and machinery were not permitted to emigrate, particularly to the fledgling America. === Horse power (1780–1790) === The earliest cotton mills in the United States were horse-powered. The first mill to use this method was the Beverly Cotton Manufactory, built in Beverly, Massachusetts. It was started on 18 August 1788 by entrepreneur John Cabot and his brothers. It was operated jointly by Moses Brown, Israel Thorndike, Joshua Fisher, Henry Higginson, and Deborah Higginson Cabot. The Salem Mercury reported that in April 1788 the equipment for the mill was complete, consisting of a spinning jenny, a carding machine, a warping machine, and other tools.
That same year the mill's location was finalized, and the mill was built on the rural outskirts of North Beverly. The location had a supply of natural water, but it was reported that the water was used for the upkeep of the horses and the cleaning of equipment, not for mass production. Much of the internal design of the Beverly mill was kept hidden due to concerns about competitors stealing designs. The early efforts were all researched behind closed doors, even to the point that the owners of the mill set up milling equipment on their estates to experiment with the process. There were no published articles describing exactly how their process worked in detail. Additionally, the mill's horse-powered technology was quickly dwarfed by new water-powered methods. === Slater === Following the creation of the United States, an engineer who had worked as an apprentice to Arkwright's partner Jedediah Strutt evaded the ban. In 1789, Samuel Slater took his skills in designing and constructing factories to New England, and he was soon engaged in reproducing the textile mills that helped America with its own industrial revolution. Local inventions spurred this on, and in 1793 Eli Whitney invented and patented the cotton gin, which sped up the processing of raw cotton by over 50 times. === 1800s === In the mid-1800s some of the technology and tools were sold and exported to Russia. The Morozov family, a well-known 19th-century Russian merchant and textile family, established a private company in central Russia that produced dyed fabrics on an industrial scale. Savva Morozov studied the process at the University of Cambridge in England and later, with the help of his family, widened his family's business and made it one of the most profitable in the Russian Empire. == Art and literature == William Blake: "And did those feet in ancient time", also known as "Jerusalem" (1804), and other works. Mrs Gaskell: Mary Barton (1848), North and South (1855). Charlotte Brontë: Shirley (1849). Cynthia Harrod-Eagles wrote fictional accounts of the early days of factories and the events of the Industrial Revolution in The Maiden (1985), The Flood Tide (1986), The Tangled Thread (1987), The Emperor (1988), The Victory (1989), The Regency (1990), The Reckoning (1992) and The Devil's Horse (1993), Volumes 8-13, 15 and 16 of The Morland Dynasty. Textile workshops and the Calico Acts are featured in the board game John Company. == See also == Textile manufacturing by pre-industrial methods == References == Footnotes Notes Bibliography Copeland, Melvin Thomas. The cotton manufacturing industry of the United States (Harvard University Press, 1912) online Cameron, Edward H. Samuel Slater, Father of American Manufactures (1960) scholarly biography Conrad Jr, James L. (1995). "'Drive That Branch': Samuel Slater, the Power Loom, and the Writing of America's Textile History". Technology and Culture. 36 (1): 1–28. doi:10.2307/3106339. JSTOR 3106339. S2CID 112131140. Griffin, Emma, A Short History of the British Industrial Revolution (Palgrave, 2010), pp. 86–104 Griffiths, T.; Hunt, P.A.; O'Brien, P. K. (1992). "Inventive activity in the British textile industry". Journal of Economic History. 52: 881–906. doi:10.1017/s0022050700011943. S2CID 154338291. Griffiths, Trevor; Hunt, Philip; O'Brien, Patrick (2008). "Scottish, Irish, and imperial connections: Parliament, the three kingdoms, and the mechanization of cotton spinning in eighteenth-century Britain". Economic History Review. 61 (3): 625–650. doi:10.1111/j.1468-0289.2007.00414.x. S2CID 144918748.
Hills, Richard Leslie (1993), Power from Steam: A History of the Stationary Steam Engine (paperback ed.), Cambridge University Press, p. 244, ISBN 9780521458344, retrieved 12 June 2010 Miller, Ian; Wild, Chris (2007), A & G Murray and the Cotton Mills of Ancoats, Lancaster Imprints, ISBN 978-0-904220-46-9 Ray, Indrajit (2011). Bengal Industries and the British Industrial Revolution (1757-1857), Routledge, ISBN 1136825525. Tucker, Barbara M. "The Merchant, the Manufacturer, and the Factory Manager: The Case of Samuel Slater," Business History Review, Vol. 55, No. 3 (Autumn, 1981), pp. 297–313 in JSTOR Tucker, Barbara M. Samuel Slater and the Origins of the American Textile Industry, 1790–1860 (1984) Williams, Mike; Farnie, Douglas Anthony (1992), Cotton Mills of Greater Manchester, Carnegie Publishing, ISBN 0948789697 == External links == University of Arizona pdf Repository of Textile books Essay and source material on Arkwright and Cartwright The cotton industry during the Industrial Revolution Factory Workers in the British Industrial Revolution Spartacus Educational(2014) Cotton times
Wikipedia/Textile_manufacture_during_the_British_Industrial_Revolution
Industrial archaeology (IA) is the systematic study of material evidence associated with the industrial past. This evidence, collectively referred to as industrial heritage, includes buildings, machinery, artifacts, sites, infrastructure, documents and other items associated with the production, manufacture, extraction, transport or construction of a product or range of products. The field of industrial archaeology incorporates a range of disciplines including archaeology, architecture, construction, engineering, historic preservation, museology, technology, urban planning and other specialties, in order to piece together the history of past industrial activities. The scientific interpretation of material evidence is often necessary, as the written record of many industrial techniques is often incomplete or nonexistent. Industrial archaeology includes both the examination of standing structures and sites that must be studied by an excavation. The field of industrial archaeology developed during the 1950s in Great Britain, at a time when many historic industrial sites and artifacts were being lost throughout that country, including the notable case of Euston Arch in London. In the 1960s and 1970s, with the rise of national cultural heritage movements, industrial archaeology grew as a distinct form of archaeology, with a strong emphasis on preservation, first in Great Britain, and later in the United States and other parts of the world. During this period, the first organized national industrial heritage inventories were begun, including the Industrial Monuments Survey in England and the Historic American Engineering Record in the United States. Additionally, a number of regional and national IA organizations were established, including the North American-based Society for Industrial Archeology in 1971, and the British-based Association for Industrial Archaeology in 1973. That same year, the First International Conference on the Conservation of Industrial Monuments was held at Ironbridge in Shropshire. This conference led, in 1978, to the formal establishment of The International Committee for the Conservation of the Industrial Heritage (commonly known as "TICCIH") as a worldwide organization for the promotion of industrial heritage. The members of these and other IA groups are generally a diverse mix of professionals and amateurs who share a common interest in promoting the study, appreciation and preservation of industrial heritage resources. == Industrial archaeology topics and sites == Industrial archaeology covers a wide range of topics, from early ironworks and water-powered mills to large modern factories, as well as ancillary sites and structures such as worker housing, warehouses and infrastructure. IA topics generally fall into one of four categories: Extractive (also known as "basic materials", which includes mining, quarrying, petroleum, lumbering, etc.), Manufacturing (mills and factories, including their power systems and machinery), Public utilities (water, sewer, electric, gas, etc.), and Transport (canals, railways, roads, aviation, bridges, tunnels, etc.). Additionally, the topic of power generation (water, wind, steam, electric, etc.), while applicable to each of the four major IA categories, is sometimes considered its own category. 
The work of industrial archaeologists has led to greater public awareness of industrial heritage, including the creation of industry museums and the inclusion of sites on national and international historic cultural registers in many parts of the world. Notable examples include the Ironbridge Gorge Museums, Engelsberg Ironworks and Lowell National Historical Park, among many others. == History of industrial archaeology == === Early developments === One of the earliest forerunners of the mid-20th-century IA-movement was the Sheffield Trades Technical Societies, established in 1918 at the University of Sheffield to preserve elements of that city's industrial history. In 1920, the Newcomen Society was founded in Great Britain to foster the study of the history of engineering and technology, including many relics of the Industrial Revolution, such as steam engines, canals, iron bridges, machinery, and other historical artifacts. The Newcomen Society also established the Journal of Industrial Archaeology in 1964, the first national IA publication in the UK. Another early development was the formation of the Cornish Engines Preservation Committee (CEPC) in 1935, to rescue the Levant Mine and Beam Engine in Cornwall. During the early 20th century, the historic preservation movement in the United States was still in its infancy. Most of the historic sites that received any attention were related to presidents and political figures, or the early colonial period. However, in 1925, one of the first industrial museums in the United States opened at Old Slater Mill, in Pawtucket, Rhode Island, at the site of the first successful textile mill in the country, built in 1793. The museum was founded by a group of business leaders with ties to the New England textile industry, during a period of decline due to Southern competition. The Old Slater Mill Association had the foresight to restore the old mill to its early 19th-century appearance, and fill it with a representative collection of textile machinery. In 1966, Old Slater Mill was declared a National Historic Landmark. In the early 1970s, Paul E. Rivard, then the director of the Old Slater Mill museum, was one of the key figures in the founding of the Society for Industrial Archeology. Another notable example of an early industrial archaeology site (one that predates the widespread IA-movement) is the Saugus Iron Works National Historic Site in Saugus, Massachusetts. It is the site of the first integrated iron works in North America, and was reconstructed in the 1950s after extensive archaeological excavations begun in the late 1940s by Roland W. Robbins. === Beginnings of the IA movement === The term "industrial archaeology" was popularised in Great Britain in 1955 by Michael Rix of Birmingham University, who wrote an article in The Amateur Historian about the need for greater study and preservation of 18th and 19th century industrial sites and relics of the British Industrial Revolution. In 1959, the Council for British Archaeology (CBA) established an industrial archaeology research committee. The CBA soon developed a standardized record card for industrial monuments, which it distributed to volunteer groups around the UK. In 1965, the National Record of Industrial Monuments (NRIM) was created as a central archive for the record cards that had been collected by Angus Buchanan at the University of Bath.
By the late 1960s, a number of local industrial archaeology groups had been formed in the UK, including the Gloucestershire Society for Industrial Archaeology in 1963, the Bristol Industrial Archaeological Society in 1967, and the Greater London Industrial Archaeology Society in 1968, among others. The primary mission of these local IA groups during this period was recording the remaining relics of industrial history, especially those deemed to be most at risk from urban redevelopment schemes. Depending on the condition of the site or artifact, recording typically consists of compiling a brief summary of the site's history through available records, including old maps or photographs, followed by detailed onsite measurements, drawings and photographs of the existing conditions of the site. Generally, a report is prepared and copies are filed in a public archive for the benefit of future generations. Most recording trips are intended to obtain a general overview of existing conditions, and are not meant to be an exhaustive study. One of the first areas to be the subject of a systematic study of industrial archaeology was the Ironbridge Gorge in Shropshire, United Kingdom. This landscape developed from the 17th century as one of the first industrial landscapes in the world, and by the 18th century had a range of extractive industries as well as extensive iron making, ceramic manufacturing, and a series of early railways. The Ironbridge Gorge Museum Trust was established in 1967, and the significance of the Ironbridge Gorge was recognized in 1986 with its designation as a UNESCO World Heritage Site. In 1963, British journalist Kenneth Hudson published the first IA text, titled Industrial archaeology: an introduction. Four years later in April 1967, Hudson spoke at a seminar at the Smithsonian Institution in Washington, D.C., at what is considered the birth of the IA-movement in the United States. The seminar, which was attended by an audience of historic preservationists, museum professionals and others, focused on what was being done to promote the study of industrial archaeology in Great Britain and in Europe, and what needed to be done in the United States. By this time, a number of select historic industrial sites had been recorded by the Historic American Buildings Survey (HABS), which until then had mainly concentrated its efforts on architecturally significant sites. In 1967, the notable New England Textile Mills Survey (NETMS) was performed under the HABS umbrella, led by Robert M. Vogel, curator of the Division of Mechanical and Civil at the Smithsonian Museum of History and Technology. The NETMS was the first large-scale, industrial recording project by HABS. It was followed by the New England Textile Mill Survey II in 1968. The full reports from the 1967 and 1968 textile mill surveys are now available for public viewing on the Library of Congress website, including the Amoskeag Millyard in Manchester, New Hampshire, which was drastically altered soon after the survey was completed. The success of the 1967 and 1968 mill surveys led to the formation of the Historic American Engineering Record (HAER) in 1969, in conjunction with the American Society of Civil Engineers. Since then, thousands of industrial / engineering sites and structures throughout the United States have been recorded by HAER, and are on record at the Library of Congress for public benefit. 
=== 1970s–1980s === By the early 1970s, industrial archaeology was, for the most part, being practiced in a few select countries by amateurs and professionals with different backgrounds and objectives. While much had been accomplished during the preceding decade, the "new" field of industrial archaeology was still struggling to gain acceptance as a true scholarly pursuit. In October 1971, a group of representatives from various museums, universities, and government organizations in the United States and Canada met in Washington, D.C., to establish a means to improve the exchange of ideas and information. The result was the first national-level, IA-related academic society in the world: the Society for Industrial Archeology (SIA). It was decided that the name of the Society would take on the US Government's spelling of "archeology", instead of "archaeology". The first SIA newsletter was published in January 1972, with Robert M. Vogel as editor. In April of that same year the new group held its first annual conference in New York City. In 1975, the SIA introduced its academic journal, IA, The Journal of the Society for Industrial Archeology, with Emory Kemp as editor. In 1973, the Association for Industrial Archaeology (AIA) was founded in Great Britain. It brought together the numerous local IA-groups that had been formed throughout the country. The AIA publishes a newsletter, Industrial Archaeology News, along with its academic journal, Industrial Archaeology Review, introduced in 1976. Many AIA members have been active in promoting the mission of IA throughout Europe and the rest of the world. With the rapid decline of many established industries in North America and Europe during the 1970s, industrial archaeologists began to take on a new role of recording and preserving recently closed sites, as opposed to antique relics from earlier periods. Among the notable projects during this decade was the successful transformation of Sloss Furnaces in Birmingham, Alabama, into an open-air industrial museum after it shut down in 1971. Sloss Furnaces was declared a National Historic Landmark in 1981. The museum opened in 1983 and offers a variety of educational and civic programs. In 1977, on the initiative of Bruno Corti and following the first studies in Italy, the Italian Society of Industrial Archaeology (SIAI) was founded. Its first president was the art historian Eugenio Battisti. The SIAI immediately published a journal, "Il Coltello di Delfo" ("The Knife of Delphi"), which was important for defining the cultural boundaries of the continental paths of industrial archaeology. The following year the British Council in Rome promoted the exhibition "I resti di una rivoluzione" ("The remains of a revolution"), staged in Milan, Florence, Perugia and Naples. In 1982, I.A.Recordings was founded by a small group of volunteers in the UK, to record past and present industries on film and video, as a resource for future generations. During the 1980s, the scope of the field of industrial archaeology in Great Britain shifted away from what was taking place in North America, where the theories of social archaeology developed in the historical archaeology field began to be applied to the study of industrial sites. British industrial archaeologists meanwhile mainly focused on the recording of the technical aspects of sites and artifacts. One key development during this period was the shift toward thematic studies of monuments by type, including three initial textile mill surveys in Greater Manchester, Yorkshire and eastern Cheshire led by Keith Falconer.
=== Since 1990 === Since 1990, there has been an ever-increasing awareness of the importance of industrial heritage, confirmed most prominently by the addition of numerous industrial sites to the UNESCO World Heritage List. Many preserved industrial sites have become a vital part of heritage tourism, including the European Route of Industrial Heritage (ERIH), established in 1999. Based on the success of the Route der Industriekultur in Ruhr, Germany, the ERIH has expanded to consist of sixteen routes in seven countries, with plans for new routes in additional countries. The number of industrial sites that have been preserved and converted to other uses such as apartments, public spaces or museums instead of being demolished is also a testament to the efforts of industrial archaeologists. Industrial archaeology has gradually gained acceptance in the academic arena. In the UK, where the field developed largely from the efforts of volunteer researchers, the emergence of developer-funded projects in the past two decades has led to an increased presence of professional practitioners, with the application of theoretical archaeology methods such as landscape archaeology to the industrial setting. However, while many university archaeology departments now include the industrial period in their degree courses, industrial archaeology remains a fairly limited field of study, with few dedicated industrial archaeology programs, such as those offered at Michigan Technological University and the Ironbridge Institute. Widespread appreciation of the importance of industrial heritage by the general public is still lacking in many areas, as the subject often maintains the perception of being "not old enough" to truly be considered archaeology. Additionally, there are often negative associations with neglected or abandoned industrial sites, including the social, economic and environmental consequences ("brownfield" sites). As with other history-based fields, one of the continuing challenges of industrial archaeologists throughout the world is the competition for ever-decreasing public funding for their research, educational and preservation projects. The sheer number of historic industrial sites and limited funding often means that many are still being lost to neglect, fire and demolition. In 2003, the Nizhny Tagil Charter was adopted by TICCIH at its XII Congress in Nizhny Tagil, Russia. It is the international standard for the study, documentation, conservation and interpretation of the industrial heritage. == IA organizations == There are national industrial archaeology societies in many countries. They bring together people interested in researching, recording, preserving and presenting industrial heritage. Industrial architecture, mineral extraction, heritage-based tourism, power technology, adaptive reuse, and transport history are just some of the themes that are investigated by society members. Most groups publish periodic newsletters and host a variety of conferences, seminars and tours of IA-sites and still-active industries (known as process tours). IA organizations may also be involved in advising on historic conservation matters, or advising government units on revision or demolition of significant sites or buildings. 
== See also == Arthur Raistrick Aviation archaeology I.A.Recordings Industrial archaeology of Dartmoor Industrial heritage List of tunnels in the United Kingdom Ken Major Major Mining Sites of Wallonia Mill conversion Quarry Bank Mill Railway archaeology Rex Wailes == References == == Further reading == Birmingham, J., Jack, R.I. and Jeans, D. (1979) Australian pioneer technology: sites and relics, Richmond, Vic.: Heinemann Educational Australia, ISBN 0-85859-185-5 Birmingham, J., Jack, R.I. and Jeans, D. (1983) Industrial Archaeology in Australia: rural industry, Richmond, Vic. : Heinemann Publishers Australia, ISBN 0-85859-319-X Buchanan, R.A. (1972) Industrial Archaeology in Britain, Harmondsworth : Penguin, ISBN 0-14-021413-5 Cossons, N. (ed.) (2000) Perspectives on Industrial Archaeology, London : Science Museum, ISBN 1-900747-31-6 Daunton, M.J. (1995) Progress and Poverty: an economic and social history of Britain, 1700–1850, Oxford University Press, ISBN 0-19-822281-5 Deetz, J. (1977) In Small Things Forgotten, Garden City, N.Y. : Anchor Press/Doubleday, ISBN 0-385-08031-X Douet, J. (ed.). (2012) Industrial Heritage Re-tooled: The TICCIH guide to Industrial Heritage Conservation, Lancaster: Carnegie, ISBN 978-1-85936-218-1 Gordon, R.B. and Malone, P.M. (1994), The texture of industry : an archaeological view of the industrialization of North America, Oxford University Press, ISBN 0-19-511141-9 Hamond, F. and McMahon, M. (2002) Recording and Conserving Ireland's Industrial Heritage, Kilkenny : Heritage Council, ISBN 1-901137-39-2 Hills, R. L. (1989) Power from Steam: a history of the stationary steam engine, Cambridge University Press, ISBN 0-521-34356-9 Hudson, K. (1966) Industrial Archaeology: an Introduction, 2nd rev. ed., London : John Baker, 184 p. Hudson, K. (1969) World Industrial Archaeology, Cambridge University Press, ISBN 0-521-21991-4 Itzen, P. and Müller, Chr. (ed.) (2013), The Invention of Industrial Pasts: Heritage, political culture and economic debates in Great Britain and Germany, 1850-2010, Augsburg: Wissner; pp. 184 ISBN 978-3-89639-910-6 Jack, R.I. and Cremin, A. (1994) Australia's Age of Iron, South Melbourne : Oxford University Press in association with Sydney University Press, ISBN 0-424-00158-6 Kane, R. [1844](1971) Industrial Resources of Ireland, The Development of industrial society series, Shannon, Ireland : Irish University Press, ISBN 0-7165-1599-7 McCutcheon, W.A. (1984) The Industrial Archaeology of Northern Ireland, Rutherford, N.J. : Fairleigh Dickinson University Press, ISBN 0-8386-3125-8 Newman, R. and Howard-Davis, C. (2001) The Historical Archaeology of Britain : c.1540-1900, Stroud : Sutton, ISBN 0-7509-1335-5 Orser, C.E., Jr (1996) Images of the Recent Past: readings in historical archaeology , Walnut Creek; London : Alta Mira Press, ISBN 0-7619-9141-7 Palmer, M. and Neverson, P. (1998) Industrial Archaeology : principles and practice [electronic resource], London; New York : Routledge, ISBN 0-203-17066-0 Thomas, J. (ed.) (2000) Interpretive Archaeology : a reader, London : Leicester University Press, ISBN 0-7185-0191-8 Watkins, G. 
(1999) The Textile Mill Engine: parts 1 & 2, Ashbourne : Landmark, ISBN 1-901522-43-1 == External links == === General === European Route of Industrial Heritage Heritage of Industry Tours Ironbridge Gorge Museum Industrial archaeology exhibit at Smithsonian Institution Steel City: an Archaeology of Sheffield's Industrial Past Italian Network of Industrial Turism === Local IA organisations === Great Britain Berkshire Industrial Archaeology Group Bristol Industrial Archaeological Society Cumbria Industrial History Society Gloucestershire Society for Industrial Archaeology Greater London Industrial Archaeology Society Ironbridge Archaeology unit Hampshire Industrial Archaeology Society Norfolk Industrial Archaeology Society North East Derbyshire Industrial Archaeology Society Northamptonshire Industrial Archaeology Group Scottish Industrial Heritage Society Somerset Industrial Archaeological Society Staffordshire (uk) Industrial Archaeology Society Archived 2014-08-02 at the Wayback Machine Sussex Industrial Archaeology Society Warwickshire Industrial Archaeology Society Yorkshire Archaeological Society - Industrial History Section Archived 2015-01-05 at the Wayback Machine United States The Society for Industrial Archeology has the following local chapters: Klepetko Chapter (Montana) New England Chapters Northern Ohio Chapter Archived 2014-06-03 at the Wayback Machine (Northern Ohio and Western Pennsylvania) Oliver Evans Chapter (Philadelphia metropolitan area) Roebling Chapter (New York metropolitan area and New Jersey) Samuel Knight Chapter (Northern California) === Reference materials === I.A. Recordings a web-based resource site === Degree Programs === Industrial Archaeology Master of Science and PhD degree program at Michigan Technological University Master of Arts degree Archived 2006-06-20 at the Wayback Machine at Ironbridge Institute
Wikipedia/Industrial_archaeology
In macroeconomics, the workforce or labour force is the sum of people either working (i.e., the employed) or looking for work (i.e., the unemployed): Labour force = Employed + Unemployed. Those neither working in the marketplace nor looking for work are out of the labour force. The sum of the labour force and those out of the labour force gives the noninstitutional civilian population, that is, the number of people who (1) work (i.e., the employed), (2) can work but don't, although they are looking for a job (i.e., the unemployed), or (3) can work but don't, and are not looking for a job (i.e., out of the labour force). Stated otherwise, the noninstitutional civilian population is the total population minus people who cannot or choose not to work (children, retirees, soldiers, and incarcerated people). The noninstitutional civilian population is the number of people potentially available for civilian employment: Noninstitutional civilian population = Labour force + Out of the labour force = Employed + Unemployed + Out of the labour force = Total population − People who cannot work. The labour force participation rate is defined as the ratio of the civilian labour force to the noninstitutional civilian population: Labour force participation rate = Labour force / Noninstitutional civilian population. == Formal and informal == Formal labour is any sort of employment that is structured and paid in a formal way; workers are paid through formal channels such as paper payrolls or electronic transfer, and the like. Unlike the informal sector of the economy, formal labour within a country contributes to that country's gross national product. Informal labour is labour that falls short of being a formal arrangement in law or in practice. Inherited labour may be formal or non-formal, as when an employee who is old enough, but below the retirement age bracket, passes a job on to his or her children. Informal labour can be paid or unpaid, and it is always unstructured and unregulated. Formal employment is more reliable than informal employment. Generally, the former yields higher income and greater benefits and securities for both men and women. === Informal labour === The contribution of informal labourers is immense. Informal labour is expanding globally, most significantly in developing countries. According to a study done by Jacques Charmes, in the year 2000 informal labour made up 57% of non-agricultural employment, 40% of urban employment, and 83% of the new jobs in Latin America. That same year, informal labour made up 78% of non-agricultural employment, 61% of urban employment, and 93% of the new jobs in Africa. Particularly after an economic crisis, labourers tend to shift from the formal sector to the informal sector. This trend was seen after the Asian economic crisis which began in 1997. === Informal labour and gender === Gender is frequently associated with informal labour. Women are employed more often informally than they are formally, and informal labour is an overall larger source of employment for females than it is for males.
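To make the accounting identities defined at the start of this article concrete, the following is a minimal numerical sketch; the figures are hypothetical, chosen only to show how the quantities relate, and are not taken from any official statistics.

<syntaxhighlight lang="python">
# Hypothetical figures (numbers of people), used only to illustrate the identities above.
employed = 150_000
unemployed = 10_000
out_of_labour_force = 90_000

labour_force = employed + unemployed                                       # Labour force = Employed + Unemployed
noninstitutional_civilian_population = labour_force + out_of_labour_force  # = Employed + Unemployed + Out of the labour force

# Labour force participation rate = Labour force / Noninstitutional civilian population
participation_rate = labour_force / noninstitutional_civilian_population

print(f"Labour force: {labour_force}")                                     # 160000
print(f"Noninstitutional civilian population: {noninstitutional_civilian_population}")  # 250000
print(f"Participation rate: {participation_rate:.1%}")                     # 64.0%
</syntaxhighlight>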
Women frequent the informal sector of the economy through occupations like home-based work and street vending. The Penguin Atlas of Women in the World shows that in the 1990s, 81% of women in Benin were street vendors, 55% in Guatemala, 44% in Mexico, 33% in Kenya, and 14% in India. Overall, 60% of women workers in the developing world are employed in the informal sector. The specific percentages are 84% and 58% for women in Sub-Saharan Africa and Latin America respectively. The percentages for men in both of these areas of the world are lower, amounting to 63% and 48% respectively. In Asia, 65% of women workers and 65% of men workers are employed in the informal sector. Globally, a large percentage of women that are formally employed also work in the informal sector behind the scenes. These women make up the hidden work force. According to a 2021 FAO study, currently, 85 per cent of economic activity in Africa is conducted in the informal sector, where women account for nearly 90 per cent of the informal labour force. According to the ILO's 2016 employment analysis, 64 per cent of informal employment is in agriculture (relative to industry and services) in sub-Saharan Africa. Women have higher rates of informal employment than men, with 92 per cent of women workers in informal employment versus 86 per cent of men. Formal and informal labour can be divided into the subcategories of agricultural work and non-agricultural work. Martha Chen et al. believe these four categories of labour are closely related to one another. A majority of agricultural work is informal, which the Penguin Atlas for Women in the World defines as unregistered or unstructured. Non-agricultural work can also be informal. According to Martha Chen et al., informal labour makes up 48% of non-agricultural work in North Africa, 51% in Latin America, 65% in Asia, and 72% in Sub-Saharan Africa. Agriculture and informal economic activity are among some of the most important sources of livelihood for women. Women are estimated to account for approximately 70 per cent of informal cross-border traders and are also prevalent among owners of micro, small, or medium-sized enterprises (MSMEs). However, MSMEs are often more vulnerable to market shocks and market disruptions. For women-owned MSMEs, this is often compounded by their lack of access to credit and financial liquidity compared to larger businesses. == Agricultural work == == Paid and unpaid == Paid and unpaid work are also closely related with formal and informal labour. Some informal work is unpaid, or paid under the table. Unpaid work can be work that is done at home to sustain a family, like child care work, or actual habitual daily labour that is not monetarily rewarded, like working the fields. Unpaid workers have zero earnings, and although their work is valuable, it is hard to estimate its value. Men and women tend to work in different areas of the economy, regardless of whether their work is paid or unpaid. Women focus on the service sector, while men focus on the industrial sector. === Unpaid work and gender === Women usually work fewer hours in income generating jobs than men do. Often it is housework that is unpaid. Worldwide, women and girls are responsible for a great amount of household work.
The Penguin Atlas of Women in the World, published in 2008, stated that in Madagascar, women spend 20 hours per week on housework, while men spend only two. In Mexico, women spend 33 hours and men spend 5 hours. In Mongolia the housework hours amount to 27 and 12 for women and men respectively. In Spain, women spend 26 hours on housework and men spend 4 hours. Only in the Netherlands do men spend 10% more time than women do on activities within the home or for the household. The Penguin Atlas of Women in the World also stated that in developing countries, women and girls spend a significant amount of time fetching water for the week, while men do not. For example, in Malawi women spend 6.3 hours per week fetching water, while men spend 43 minutes. Girls in Malawi spend 3.3 hours per week fetching water, and boys spend 1.1 hours. Even if women and men both spend time on household work and other unpaid activities, this work is also gendered. === Sick leave and gender === In the United Kingdom in 2014, two-thirds of workers on long-term sick leave were women, despite women only constituting half of the workforce, even after excluding maternity leave. == Globalisation of the labour market == The global supply of labour almost doubled in absolute numbers between the 1980s and early 2000s, with half of that growth coming from Asia. At the same time, the rate at which new workers entered the workforce in the Western world began to decline. The growing pool of global labour is accessed by employers in more advanced economies through various methods, including imports of goods, offshoring of production, and immigration. Global labor arbitrage, the practice of accessing the lowest-cost workers from all parts of the world, is partly a result of this enormous growth in the workforce. While most of the absolute increase in this global labour supply consisted of less-educated workers (those without higher education), the relative supply of workers with higher education increased by about 50 percent during the same period. From 1980 to 2010, the global workforce grew from 1.2 to 2.9 billion people. According to a 2012 report by the McKinsey Global Institute, this was caused mostly by developing nations, where there was a "farm to factory" transition. Non-farming jobs grew from 54 percent in 1980 to almost 73 percent in 2010. This industrialization took an estimated 620 million people out of poverty and contributed to the economic development of China, India and others. Under the "old" international division of labor, until around 1970, underdeveloped areas were incorporated into the world economy principally as suppliers of minerals and agricultural commodities. However, as developing economies are merged into the world economy, more production takes place in these economies. This has led to a trend of transference, or what is also known as the "global industrial shift ", in which production processes are relocated from developed countries (such as the US, European countries, and Japan) to developing countries in Asia (such as China, Vietnam, and India), Mexico and Central America. This is because companies search for the cheapest locations to manufacture and assemble components, so low-cost labor-intensive parts of the manufacturing process are shifted to the developing world where costs are substantially lower. But not only manufacturing processes are shifted to the developing world. 
The growth of offshore outsourcing of IT-enabled services (such as offshore custom software development and business process outsourcing) is linked to the availability of large amounts of reliable and affordable communication infrastructure following the telecommunication and Internet expansion of the late 1990s. == See also == == References == == Sources == This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from Seizing the opportunities of the African Continental Free Trade Area for the economic empowerment of women in agriculture​, FAO, FAO. == External links == Media related to Workforce at Wikimedia Commons About the difference, in English, between the use/meaning of workforce/work force and labor/labour/labo(u)r pool
Wikipedia/Workforce
British industrial architecture has been created, mainly from 1700 onwards, to house industries of many kinds in Britain, home of the Industrial Revolution in this period. Both the new industrial technologies and industrial architecture soon spread worldwide. As such, the architecture of surviving industrial buildings records part of the history of the modern world. Some industries were immediately recognisable by the functional shapes of their buildings, as with glass cones and the bottle kilns of potteries. The transport industry was supported first by the growth of a network of canals, then of a network of railways, contributing landmark structures such as the Pontcysyllte Aqueduct and the Ribblehead Viaduct. New materials made available in large quantities by the newly-developed industries enabled novel types of construction, including reinforced concrete and steel. Industrial architects freely explored a variety of styles for their buildings, from Egyptian Revival to medieval castle, English country house to Venetian Gothic. Others sought to impress with scale, such as with tall chimneys as at the India Mill, Darwen. Some directly celebrated the modern, as with the "heroic" Power House, Chiswick, complete with statues of "Electricity" and "Locomotion". In the 20th century, long white "By-pass modern" company headquarters such as the Art Deco Hoover Building were conspicuously placed beside major roads out of London. == Industrial revolution == === Early works === From around 1700, Abraham Darby I made Coalbrookdale the focus of the Industrial Revolution with the production of goods made of cast iron, from cooking pots upwards. His descendant Abraham Darby III made and assembled the sections of The Iron Bridge across the Coalbrookdale Gorge. The company's Bedlam Furnaces were depicted in Philip de Loutherbourg's 1801 painting Coalbrookdale by Night. The Iron Bridge influenced engineers and architects around the world, and was the first of many large cast iron structures. The gorge is now a World Heritage site. === Growth === From 1700, Britain's economy was transformed by industrialisation, growth in trade, and numerous discoveries and inventions, making it the first country to take this step. The working population grew rapidly, especially in the north of England. The Industrial Revolution brought large-scale iron smelting using coke, iron puddling, steam engines, and machine production of textiles. Work was organised in factories that operated several processes on a single site. Some industries, such as steelmaking in Sheffield and textile manufacture in Lancashire, have left substantial surviving buildings; others such as mining and industrial chemistry have left scant remnants. Agricultural processing used corn mills, malt houses, breweries and tanneries; these advanced technically but did not create many large buildings because the industry was evenly distributed across the country, though multi-storey corn mills appeared around 1800 as war raised grain prices. Murrays' Mills, Manchester was begun in 1798, forming the longest mill range in the world; the cotton mills were conveniently placed on the Rochdale Canal, giving access to the 18th century industrial transport network. === Transport network === Industrial growth was accompanied and assisted by the rapid development of a nationwide canal network able to carry heavy goods of all kinds. 
Canals were cut so as to connect producers to their customers, for example the 1794 Glamorganshire Canal linking the Welsh ironworks at Merthyr Tydfil to the harbour at Cardiff. This spurred rapid industrialisation of the South Wales Valleys. The engineer Thomas Telford undertook some major canal works, including, between 1795 and 1805, the 126 feet (38 m) high Pontcysyllte Aqueduct that enables the Llangollen Canal to cross the River Dee, Wales, and, between 1803 and 1822, the Caledonian Canal linking a chain of freshwater lochs across Scotland with the enormous Neptune's Staircase, a series of eight large locks, each 180 feet (55 m) long by 40 feet (12 m) wide, that together enable barges to climb 64 feet (20 m). === Shipbuilding === Chatham Dockyard on the River Medway in Kent constructed and equipped ships of the Royal Navy from the time of Henry VIII for more than 400 years, using the most advanced technology for its ships and its industrial buildings. No. 3 covered slip in Chatham Dockyard provides a roof over a shipbuilding slipway, enabling the timbers of the ship under construction to stay dry and sound, unlike traditional outdoor construction. Its wooden roof trusses were built in 1838. No. 7 covered slip, built in 1852, has one of the earliest metal trussed roofs. === Functional design === Some industries had easily-recognised architectural elements, shaped by the functions they performed, such as the glass cones of glassworks, the bottle ovens such as those of the Staffordshire Potteries or the Royal Worcester porcelain works, the tapering roofs of the oast houses that dried the hops from Kent's hop orchards, and the pagoda-like ventilators of Scotch whisky distilleries. == Workshop of the world == In the mid-19th century, Britain became, in Benjamin Disraeli's 1838 phrase, the "workshop of the world". Production in many industries grew rapidly, assisted by the development of an efficient distribution system in the new railway network. This allowed industries to concentrate production at a distance from sources of raw materials, especially coal. It powered steam engines for mills of all types, for example freeing the cotton mills from having to be beside a fast-flowing river, and enabling iron foundries and blast furnaces to increase greatly in size. === Designed to impress === The wealth generated by the new industries enabled mill-owners to build to impress. The cotton magnate Eccles Shorrock commissioned Ernest Bates to create a showy design for his India Mill at Darwen, Lancashire, complete with a 300 feet (91 m) tall Italianate campanile-style chimney. This was built in red, white, and black brick, topped with cornices of stone, an ornamental urn at each corner, and an ornate cresting consisting of over 300 pieces of cast iron. === Cathedrals of progress === Britain's railways, the first in the world, transformed both ordinary life and industry with unprecedentedly rapid transport. The railways showed off their importance with architecture that both referred to the past and celebrated the future. The French poet Théophile Gautier described the new railway stations as "cathedrals of the new humanity". Newcastle railway station, despite its curved platforms, was given a fully-covered roof in 1850, the earliest surviving one in the country. Bristol Temple Meads railway station has a cathedral-like exterior with Gothic arches and a pinnacled tower, while the 1841 old station there had a hammerbeam roof, said to have been modelled on Westminster Hall's timbers.
The Great Western Railway's engineer, Isambard Kingdom Brunel, indeed described the station as "a cathedral to the iron horse". Paddington railway station was designed by Brunel, inspired by Joseph Paxton's Crystal Palace and the München Hauptbahnhof. === Experimenting with styles === Industrial architects experimented freely with non-industrial styles. One of the earliest was Egyptian Revival, a style that arose in response to Napoleon's conquest of Egypt, accompanied by a scientific expedition. Joseph Bonomi designed the Temple Works flax mill offices, in Holbeck, Leeds, modelled on the Mammisi of the Dendera Temple complex, in 1836–1840. At Stoke Newington, the Metropolitan Water Board's engine house was constructed to look something like a medieval castle, complete with towers and crenellation. The pumping station at Ryhope, Sunderland, was built in 1869, more or less Jacobean in style with curving Dutch gables, and an octagonal brick chimney. The architectural historian Hubert Pragnell calls it a "cathedral of pistons and brass set within a fine shell of Victorian brickwork with no expense spared". The Bliss Tweed Mill at Chipping Norton was designed in 1872 by George Woodhouse, a Lancashire mill architect. It is constructed of local limestone, and despite its 5 storeys, is grandly modelled to resemble a Charles Barry type English country house, with the addition of the dominant chimney stack, "a sophisticated aesthetic solution to a functional requirement". The chimney and curved stairwell tower are offset from the centre of the building, while the corners are balustraded and topped with urns. The Templeton Carpet Factory in Glasgow has been called "the most remarkable display of polychromatic brickwork in Britain". It was built in 1892 by William Leiper for James Templeton and Son, for the weaving of Axminster carpets. It was modelled in Venetian Gothic on the Doge's Palace in Venice. === Landmark structures === Some industrial structures have become landmarks in their own right. The Ribblehead Viaduct carries the Settle–Carlisle railway across the Ribble Valley in North Yorkshire. It was built by the Midland Railway to a design by John Sydney Crossley, opening in 1876. Faced with limestone and with almost semicircular red brick arches, it is 440 yards (400 m) long and 104 feet (32 m) high. It is now an admired Grade II*-listed structure. Gas for domestic heating, produced from coal, was stored in enormous cylindrical gasholders, their iron cage frames now surviving in some places around the country as memorials to long-vanished industry (such as the Bromley-by-Bow or Oval gasholders). === Moving towards the modern === The Power House, Chiswick is an electricity generating station, designed by William Curtis Green and J. Clifton Robinson in 1901 for the London United Electrical Tramway Company. It is described by the architectural historian Nikolaus Pevsner as a "monumental free Baroque brick and stone composition" from the "early, heroic era of generating stations" with enormous stone voussoirs. Above the entrance is a pair of large stone figures: one representing "Electricity", her foot on a globe, and her hand emitting lightning flashes by the rotor of a generator; the other representing "Locomotion", her foot on an electric tram and her hand on a winged wheel. Arthur Sanderson & Sons' Grade II* listed wallpaper printing works in Chiswick was designed by the modernist architect Charles Voysey in 1902, his only industrial building. 
It is faced in white glazed brick, with Staffordshire blue bricks forming horizontal bands; the plinth, door and window surrounds, and dressings are in Portland stone. It is considered an "important Arts and Crafts factory building". It faces Sandersons' more conventional 1893 red brick factory across a narrow street. Charles Holden's modernist station buildings for the London Underground freely combined cylinders with flat planes. An example is his "futuristic" 1933 Arnos Grove tube station, which has a brightly-lit circular ticket hall in brick with a flat concrete roof. === New types of construction === Alongside new styles of architecture came novel types of construction. William T. Walker's 1903–1904 Clément-Talbot car factory on Barlby Road, Ladbroke Grove, had a traditional-looking office entrance in William and Mary style, built of red brick with stone pilasters, cornice, the Talbot family crest, and Porte-cochère. The impressive frontage gave access to a vaulted marble-floored entrance hall that was used as a car showroom, while the main factory building behind it was an early reinforced concrete structure. The availability of new materials such as steel and concrete in industrial quantities enabled radically new designs, such as the Tees Transporter Bridge. It has concrete foundations, poured in shafts dug using caissons, down to bedrock far below the high tide mark; the bridge structure is of steel, with granite piers. == Between the wars, 1914 to 1945 == === "By-pass modern" === The "daylight factory" concept, with long sleek buildings and attractive grassed surroundings, was brought in from America, starting in Trafford Park. They often had large windows and were placed along major roads such as the Great West Road in Brentford, West London, earning them the name of "by-pass modern" factories. A well-known exemplar is Wallis, Gilbert and Partners' 1932–1935 Hoover Building in the Art Deco style; it was at the time derided for "its overtly commercial character", but is now Grade II-listed. The architectural historian Hubert Pragnell describes it as "the cathedral of modernism" and "an icon of 1930s design". === Art Deco Egyptian === A distinctively different inter-war building is the Carreras Cigarette Factory, built 1926–1928 on an inner-city site in Mornington Crescent, Camden. It was designed by the architects M. E. Collins, O. H. Collins, and A. G. Porri in a combination of Art Deco and Egyptian Revival styles. The factory has a frontage of 550 feet (170 m) under a continuous cornice with flute lines painted red and blue. Its construction is modern, a pioneer of pre-stressed concrete, but it is decorated to recall the glories of ancient Egypt, after the discovery of Tutankhamun's tomb in 1922. The company chose a black cat based on the Egyptian cat god Bastet to symbolise its brand, and placed a pair of large cat effigies beside the entrance stairs, as well as smaller cat roundels on the building. == Contemporary == === Post-war === Since the Second World War, architects have created impressive industrial buildings in a range of modern or post-modernist styles. One such is the Grade II* British Gas Engineering Research Station at Killingworth, which was built in 1967 to a design by Ryder and Yates. Historic England calls it a "tour de force of post-war architecture with deliberate references to continental examples in the transformation of service elements into sculptural forms". 
CZWG's Aztec West in the Bristol West Business Park uses horizontal stripes of brickwork interrupted by tall narrow windows and white concrete bevels to give a pilaster effect and, with its symmetrical concave-fronted buildings, an echo of Art Deco style. === 21st century === The partnership of architecture and engineering is seen in Heathrow Airport's Terminal 5 building, opened in 2008. It is 1,299 feet (396 m) long, 577 feet (176 m) wide and 130 feet (40 m) tall, making it the largest free-standing building in Britain. The roof is supported on exposed hinged trusses. The architects were Richard Rogers Partnership assisted by aviation architects Pascall+Watson, and the engineers were Arup for the above-ground works and Mott MacDonald for the substructures. == Notes == == References == == Sources == Cherry, Bridget; Pevsner, Nikolaus (1991). The Buildings of England. London 3: North West. London: Penguin Books. ISBN 978-0-14-071048-9. OCLC 24722942. Historic England (April 2011). "Historical Summary". Industrial Buildings: Listing Selection Guide. Historic England. pp. 2–6. Jackson, Alan (1984) [1969]. London's Termini (Revised ed.). London: David & Charles. ISBN 0-330-02747-6. Jones, Edgar (1985). Industrial Architecture in Britain: 1750–1939. Oxford: Facts on File. ISBN 978-0-8160-1295-4. OCLC 12286054. Parissien, Steven (1997). Station to Station. London: Phaidon Press. ISBN 978-0-71483-467-2. Pearson, Lynn (2016). Victorian and Edwardian British Industrial Architecture. Crowood Press. ISBN 978-1-78500-189-5. OCLC 959428302. Pragnell, Hubert J. (2021) [2000]. Industrial Britain: an Architectural History. Batsford. ISBN 978-1-84994-733-6. OCLC 1259509747. Thomas, Bruce (1992). "Merthyr Tydfil and Early Ironworks in South Wales". In Garner, John (ed.). The Company Town: architecture and society in the early industrial age. New York: Oxford University Press. pp. 17–42. ISBN 978-1-4294-0727-4. OCLC 252590032. Winter, John (1970). Industrial Architecture: A Survey of Factory Building. London: Studio Vista. OCLC 473557982.
Wikipedia/British_industrial_architecture
The production of renewable energy in Scotland is a topic that came to the fore in technical, economic, and political terms during the opening years of the 21st century. The natural resource base for renewable energy is high by European, and even global, standards, with the most important potential sources being wind, wave, and tide. Renewables generate almost all of Scotland's electricity, mostly from the country's wind power. In 2020, Scotland had 12 gigawatts (GW) of renewable electricity capacity, which produced about a quarter of total UK renewable generation. In decreasing order of capacity, Scotland's renewable generation comes from onshore wind, hydropower, offshore wind, solar PV and biomass. Scotland exports much of this electricity. On 26 January 2024, the Scottish Government confirmed that Scotland had generated the equivalent of 113% of its electricity consumption from renewable energy sources, the highest percentage figure ever recorded for renewable energy production in Scotland. It was hailed as "a significant milestone in Scotland's journey to net zero" by the Cabinet Secretary for Wellbeing Economy, Fair Work and Energy, Neil Gray. This was the first time that Scotland had produced more renewable energy than it consumed, demonstrating, according to Gray, the "enormous potential of Scotland's green economy". Continuing improvements in engineering and economics are enabling more of the renewable resources to be used. Fears regarding fuel poverty and climate change have driven the subject high up the political agenda. In 2020 a quarter of total energy consumption, including heat and transportation, was met from renewables, and the Scottish government target is half by 2030. Although the finances of some projects remain speculative or dependent on market incentives, there has been a significant – and, in all likelihood, long-term – change in the underpinning economics. In addition to planned increases in large-scale generating capacity using renewable sources, various related schemes to reduce carbon emissions are being researched. Although there is significant support from the public, private and community-led sectors, concerns about the effect of the technologies on the natural environment have been expressed. There is also a political debate about the relationship between the siting of these widely distributed resources and their ownership and control. == Realisation of the potential == === Summary of Scotland's resource potential === === Targets === In 2005 the aim was for 18% of Scotland's electricity production to be generated by renewable sources by 2010, rising to 40% by 2020. In 2007 this was increased to 50% of electricity from renewables by 2020, with an interim target of 31% by 2011. The following year new targets to reduce overall greenhouse gas emissions by 80% by 2050 were announced and then confirmed in the 2009 Climate Change Delivery Plan. Maf Smith, director of the Sustainable Development Commission in Scotland, said "Governments across the world are shying away from taking the necessary action. The Scottish Government must be commended for its intention to lead the way". Scotland aims to produce 50% of all energy (not just electricity) from renewable sources by 2030. An ambitious target has been set with a seven-year plan to build an extra 8 GW of offshore wind power by 2030. It remains a policy of the Scottish Government to reduce emissions to net zero by 2045. === History === Electricity production is only part of the overall energy use budget. 
In 2002, Scotland consumed a total of 175 terawatt-hours (TWh) of energy in all forms, some 2% less than in 1990. Of this, only 20% was consumed in the form of electricity by end users; the great majority of energy utilised came from the burning of oil (41%) and gas (36%). Nonetheless, the potential renewable electricity generating capacity may be 60 GW or more, greater than is required to provide the existing 157 TWh of energy supplied from all Scottish fuel sources. 2002 figures used as a baseline in RSPB Scotland et al. (2006) for electricity production are: gas (34%), oil (28%), coal (18%) and nuclear (17%), with renewables 3% (principally hydro-electric), prior to the substantial growth in wind power output. In January 2006 the total installed electrical generating capacity from all forms of renewable energy was less than 2 GW, about a fifth of the total electrical production. Scotland also has significant quantities of fossil fuel deposits, including substantial proven reserves of oil and gas and 69% of UK coal reserves. Nonetheless, the Scottish Government has set ambitious targets for renewable energy production. Most electricity in Scotland is carried through the National Grid, with Scotland's renewable mix thus contributing to the electricity production of Great Britain as a whole. By 2012, over 40% of Scotland's electricity came from renewable energy, and Scotland contributed almost 40% of the UK's renewables output. At the end of that year there was 5,801 megawatts (MW) of installed renewables electricity capacity, an increase of 20.95% (1,005 MW) on the end of 2011. Renewable electricity generation in 2012 was a record high at 14,756 GWh – an increase of 7.3% on 2011, the previous record year for renewables output. In 2015, Scotland generated 59% of its electricity consumption through renewable sources, exceeding the country's goal of 50% renewable electricity by that year. In 2018, Scotland exported over 28% of its electricity generation to the rest of the UK. By 2019, renewable electricity generation had reached 30,528 GWh, over 90% of Scotland's gross electricity consumption (33,914 GWh), and 21% of overall energy use was met from renewable sources, against Scottish Government targets of 100% by 2020 and 50% by 2030 respectively. At the start of 2020, Scotland had 11.8 gigawatts (GW) of installed renewable electricity capacity which produced approximately 25% of total UK renewable generation (119,335 GWh). === Economic impact === The renewable energy industry supports more than 11,500 jobs in Scotland, according to a 2013 study by Scottish Renewables. With 13.9 GW of renewable energy projects in the pipeline, the sector has the potential to grow quickly in the years ahead, creating more jobs. Glasgow, Fife and Edinburgh are key centres of offshore wind power development, and the emerging wave power and tidal power industries are centred around the Highlands and Islands. Rural job creation is being supported by bioenergy systems in areas such as Lochaber, Moray and Dumfries and Galloway. Although the finances of some projects remain speculative or dependent on market incentives, there has been a significant and, in all likelihood, long-term change in the underpinning economics. An important reason for this ambition is growing international concern about human-induced climate change. The Royal Commission on Environmental Pollution's proposal that carbon dioxide emissions should be reduced by 60% was incorporated into the UK government's 2003 Energy White Paper. 
The 2006 Stern Review proposed a 55% reduction by 2030. Recent Intergovernmental Panel on Climate Change reports have further increased the profile of the issue. == Hydroelectricity == As of 2007, Scotland has 85% of the UK's hydroelectricity resource, much of it developed by the North of Scotland Hydro-Electric Board in the 1950s. The "Hydro Board", which brought "power from the glens", was then a nationalised industry; it was privatised in 1989 and is now part of Scottish and Southern Energy plc. As of 2021, installed capacity is 1.67 GW; this is 88% of total UK capacity and includes major developments such as the 120 MW Breadalbane Scheme and the 245 MW Tummel system. Several of Scotland's hydro-electric plants were built to power the aluminium smelting industry. These were built in several "schemes" of linked stations, each covering a catchment area, whereby the same water may generate power several times as it descends. Numerous remote straths were flooded by these schemes, many of the largest of which involved tunnelling through mountains as well as damming rivers. Emma Wood, the author of a study of these pioneers, described the men who risked their lives in these ventures as "tunnel tigers". As of 2010, it is estimated that as much as another 1.2 GW of capacity remains available to exploit, mostly in the form of micro and small-hydro developments such as those in Knoydart and Kingussie. The 100 MW Glendoe Project, which opened in 2009, was the first large-scale dam for almost fifty years. In April 2010 permission was granted for four new hydro schemes totalling 6.7 MW capacity in the Loch Lomond and The Trossachs National Park. There is also further potential for new pumped storage schemes that would work with intermittent sources of power such as wind and wave. Operational examples include the 440 MW Cruachan Dam and 300 MW Falls of Foyers schemes, while exploratory work for the 1.5 GW Coire Glas scheme commenced in early 2023. These schemes have the primary purpose of balancing peak demands on the electricity grid. == Wind power == Wind power is the country's fastest-growing renewable energy technology, with 8,423 MW of installed capacity as of 2018. On 7 August 2016, a combination of high wind and low consumption meant that wind power generation exceeded consumption, at 106% of demand: Scottish wind turbines provided 39,545 MWh during the 24 hours of that date, while consumption was 37,202 MWh. It was the first time that measurements were available to confirm this. Electricity generated by wind in November 2018 was enough to power nearly 6 million homes, and wind production outstripped total electricity demand on twenty days during that month. This latter outcome was described by environmental group WWF Scotland as "truly momentous". The target set in 2023 was for 11 GW of offshore wind by 2030. This would represent a 400% increase in offshore wind and a 60% increase in total wind-generated power. === Onshore === The 54-turbine Black Law Wind Farm has a total capacity of 124 MW. It is located near Forth in South Lanarkshire and was built on an old opencast coalmine site, with an original capacity of 97 MW from 42 turbines. It employs seven permanent staff on site and created 200 jobs during construction. A second phase saw the installation of a further 12 turbines. The project has received wide recognition for its contribution to environmental objectives. The United Kingdom's largest onshore wind farm (539 MW) is at Whitelee in East Renfrewshire. 
There are many other onshore wind farms, including some, such as that on the Isle of Gigha, which are in community ownership. The island's heritage trust set up Gigha Renewable Energy to buy and operate three Vestas V27 wind turbines. They were commissioned on 21 January 2005 and are capable of generating up to 675 kW of power; profits are reinvested in the community. The island of Eigg in the Inner Hebrides is not connected to the National Grid, and has an integrated renewable power supply combining wind, hydro and solar generation with battery storage, and a rarely used diesel backup. The siting of turbines is sometimes an issue, but surveys have generally shown high levels of community acceptance for wind power. Wind farm developers are encouraged to offer "community benefit funds" to help address any disadvantages faced by those living adjacent to wind farms. Nonetheless, Dumfries and Galloway's local development plan guidance concludes that "some areas are considered to have reached capacity for development, due to the significant cumulative effects already evident". === Offshore === The Robin Rigg Wind Farm, a 180 MW development completed in April 2010 on a sandbank in the Solway Firth, is Scotland's first offshore wind farm. Eleven of the world's most powerful wind turbines (Vestas V164 – 8.4 MW each) are located in the European Offshore Wind Deployment Centre off the east coast of Aberdeenshire. It is estimated that 11.5 GW of onshore wind potential exists, enough to provide 45 TWh of energy. More than double this amount exists on offshore sites where mean wind speeds are greater than on land. The total offshore potential is estimated at 25 GW, which, although more expensive to install, could be enough to provide almost half of the total energy used. Plans to harness up to 4.8 GW of the potential in the inner Moray Firth and Firth of Forth were announced in January 2010. Moray Offshore Renewables and SeaGreen Wind Energy were awarded development contracts by the Crown Estate as part of a UK-wide initiative. Also in 2010, discussions were held between the Scottish Government and Statoil of Norway with a view to developing a 5-turbine floating windfarm, possibly to be located off Fraserburgh. In July 2016, the RSPB challenged development in the Firth of Forth and Firth of Tay. Moray East Offshore Wind Farm was granted consent for a 1,116 MW development in 2014 by the Scottish Government. The 103rd and final jacket for the project was installed in December 2020. The Hywind Scotland array off the coast of Peterhead is the world's first floating wind farm. It consists of five 6 MW turbines which have a rotor diameter of 154 m, and is aimed at demonstrating the feasibility of larger systems of this type. == Wave power == Various systems have been developed since the 1970s, aimed at harnessing the enormous potential available for wave power off Scotland's coasts. Early development of wave power was led by Stephen Salter at the University of Edinburgh with the Edinburgh or Salter's duck, although this was never commercialised. One of the first grid-connected wave power stations was the Islay LIMPET (Land Installed Marine Power Energy Transformer) energy converter. It was installed on the island of Islay by Wavegen Ltd, and opened in 2001 as the world's first commercial-scale wave-energy device. However, in March 2013, the new owners Voith Hydro decided to close down Wavegen, choosing to concentrate on tidal power projects. The Siadar Wave Energy Project was announced in 2009. 
This 4 MW system was planned by npower Renewables and Wavegen for a site 400 metres off the shore of Siadar Bay, in Lewis. However, in July 2011 holding company RWE announced it was withdrawing from the scheme, and Wavegen was seeking new partners. Edinburgh-based Ocean Power Delivery, later Pelamis Wave Power, developed the Pelamis Wave Energy Converter between 1998 and 2014. Both the P1 and P2 devices were tested at the European Marine Energy Centre in Orkney, and three P1 machines were installed in Portugal at the Aguçadoura Wave Farm in late 2008. In 2009, the Swedish power firm Vattenfall started development of the Aegir Wave Farm off the west coast of Shetland, which would have used Pelamis devices; however, the project was cancelled after Pelamis went into administration. Following the demise of Pelamis and Aquamarine Power, Wave Energy Scotland was set up in 2014 to facilitate the development of wave energy. It was set up by the Scottish Government as a subsidiary of Highlands and Islands Enterprise. However, although Scotland has "more wave and tidal devices deployed in our waters than anywhere else in the world", commercial production from wave energy has been slow to develop. Between 2015 and 2022, the Wave Energy Scotland programmes helped fund the development and demonstration of part-scale devices by Mocean Energy and AWS Ocean Energy, which were then tested at EMEC. The Mocean device was redeployed within the Renewables for Subsea Power project, providing power for over a year to autonomous monitoring equipment for oil and gas projects. == Tidal power == Unlike wind and wave, tidal power is an inherently predictable source, and there are many sites around Scotland where it could be harvested to generate power. The Pentland Firth between Orkney and mainland Scotland has been described as the "Saudi Arabia of tidal power" and may be capable of generating up to 10 GW, although a more recent estimate suggests an upper limit of 1.9 GW. In March 2010 a total of ten sites in the area, capable of providing an installed capacity of 1.2 GW of tidal and wave generation, were leased out by the Crown Estate. Several other tidal sites with considerable potential exist in the Orkney archipelago. Tidal races on the west coast at Kylerhea between Skye and Lochalsh, the Grey Dog north of Scarba, the Dorus Mòr off Crinan and the Gulf of Corryvreckan also offer significant prospects. The "world's first community-owned tidal power generator" became operational in Bluemull Sound off Yell, Shetland, in early 2014. This 30 kW Nova Innovation device fed into the local grid, and was replaced by a 100 kW tidal turbine connected in August 2016. The array was expanded to six turbines in January 2023, although the three oldest turbines were removed a few months later. At the opposite end of the country, a 2010 consultants' report into the possibility of a scheme involving the construction of a Solway Barrage, possibly south of Annan, concluded that the plans "would be expensive and environmentally sensitive." In 2013 an alternative scheme using the VerdErg Renewable Energy spectral marine energy converter was proposed, involving the use of a bridge along the route of an abandoned railway line between Annan and Bowness-on-Solway. In October 2010 MeyGen, a consortium of Morgan Stanley, Atlantis Resources Corporation and International Power, received a 25-year operational lease from the Crown Estate for a 400 MW tidal power project in the Pentland Firth. 
In September 2013 the Scottish Government granted permission to MeyGen for the commencement of the "largest tidal energy project in Europe", and the developer announced the installation of a 9 MW demonstration project of up to six turbines, expanding to an 86 MW tidal array. Commercial production commenced in November 2016, with the four turbines of Phase 1 installed by February 2017. Current owners SIMEC Atlantis Energy (SAE) intend to develop the MeyGen site up to its current grid capacity of 252 MW. In 2022 and 2023 SAE was awarded Contracts for Difference to supply 28 MW and 22 MW of electricity, which will fund the next stage of the project's development. Scottish tidal developers Nova Innovation and Orbital Marine Power were each awarded €20m of Horizon Europe funding in 2023 towards developing tidal arrays in Scotland. Nova plan to install 16 turbines totalling 4 MW in Orkney, while Orbital plan four O2 turbines with a total capacity of 9.6 MW. == Bioenergy == === Biofuel === Various small-scale biofuel experiments have been undertaken. For example, in 2021 British Airways flew a 35% aviation biofuel demonstration flight from London to Glasgow. It has been suggested that sustainable aviation fuel (not necessarily biofuel) for the UK should be produced in Scotland because of the country's high share of renewable energy. Due to the relatively short growing season for sugar-producing crops, ethanol is not commercially produced as a fuel. === Biogas, anaerobic digestion and landfill gas === Biogas, or landfill gas, is a biofuel produced through the intermediary stage of anaerobic digestion and consisting mainly of biologically produced methane (45–90%) and carbon dioxide. In 2007 a thermophilic anaerobic digestion facility was commissioned in Stornoway in the Western Isles. The Scottish Environment Protection Agency (SEPA) has established a digestate standard to facilitate the use of solid outputs from digesters on land. It has been recognised that biogas (mainly methane) – produced from the anaerobic digestion of organic matter – is potentially a valuable and prolific feedstock. As of 2006, it is estimated that 0.4 GW of generating capacity might be available from agricultural waste. Landfill sites have the potential for a further 0.07 GW, with sites such as the Avondale Landfill in Falkirk already utilising their potential. === Solid biomass === A 2007 report concluded that wood fuel exceeded hydro-electric power and wind as the largest potential source of renewable energy. Scotland's forests, which made up 60% of the UK resource base, were forecast to be able to provide up to 1 million tonnes of wood fuel per annum. The biomass energy supply was forecast to reach 450 MW or higher (predominantly from wood), with power stations requiring 4,500–5,000 oven-dry tonnes per annum per megawatt of generating capacity. However, a 2011 Forestry Commission and Scottish Government follow-up report concluded that: "...there is no capacity to support further large scale electricity generation biomass plants from the domestic wood fibre resource." A plan by Forth Energy to build a 200 MW biomass plant in Edinburgh, which would have imported 83% of its wood, was withdrawn in 2012, but the energy company E.ON has constructed a 44 MW biomass power station at Lockerbie using locally sourced crops. A 2007 article by Renew Scotland claimed that automatic wood pellet boilers could be as convenient to use as conventional central heating systems. 
These boilers might be cheaper to run and, by using locally produced wood fuel that requires little energy for transportation, could come close to being carbon neutral. There is also local potential for energy crops such as short-rotation willow or poplar coppice, miscanthus energy grass, agricultural wastes such as straw and manure, and forestry residues. These crops could provide 0.8 GW of generating capacity. === Incineration === There is a successful waste-to-energy incineration plant at Lerwick in Shetland which burns 22,000 tonnes (24,250 tons) of waste every year and provides district heating to more than 600 customers. Although such plants generate carbon emissions through the combustion of the biological material and plastic wastes (which derive from fossil fuels), they also reduce the damage done to the atmosphere from the creation of methane in landfill sites. Methane is a much more damaging greenhouse gas than the carbon dioxide the burning process produces, although plants which do not supply district heating may have a carbon footprint similar to that of straightforward landfill degradation. == Solar energy == Solar radiation has strong seasonality in Scotland as a result of its latitude. In 2015, solar PV contributed 0.2% to Scotland's final energy consumption. In a 100% renewable scenario for 2050, it is estimated that solar PV would provide 7% of electricity. The UK's practicable resource is estimated at 7.2 TWh per year. Despite Scotland's relatively low level of sunshine hours, solar thermal panels can work effectively as they are capable of producing hot water even in cloudy weather. The technology was developed in the 1970s and is well established, with various installers in place; for example, AES Solar, based in Forres, provided the panels for the Scottish Parliament building. By 2022, solar power capacity in Scotland had reached 420 MW. Government grants became available to low-income households for solar power installations from 2022. == Geothermal energy == Geothermal energy is obtained from thermal energy generated and stored in the Earth. The most common form of geothermal energy systems in Scotland provide heating through a ground source heat pump. These devices transfer energy from the thermal reservoir of the earth to the surface via shallow pipework, utilising a heat exchanger. Ground source heat pumps generally achieve a coefficient of performance of between 3 and 4, meaning that for each unit of electrical energy put in, 3–4 units of useful heat energy are delivered. The carbon intensity of this energy is dependent on the carbon intensity of the electricity powering the pump. Installation costs can vary from £7,000 to £10,000, and grants may be available from the CARES initiative operated by Local Energy Scotland. Up to 7.6 TWh of energy is available on an annual basis from this source. Mine-water geothermal systems are also being explored, utilising the consistent ambient temperature of the earth to raise the temperature of water for heating by circulating it through disused mine workings. The water will generally require further heating in order to reach a usable temperature. An example is the Glenalmond Street project in Shettleston, which uses a combination of solar and geothermal energy to heat 16 houses. Water in a coal mine 100 metres (328 ft) below ground level is heated by geothermal energy and maintained at a temperature of about 12 °C (54 °F) throughout the year. 
The warmed water is raised and passed through a heat pump, boosting the temperature to 55 °C (131 °F), and is then distributed to the houses, providing heating via radiators. There is also potential for geothermal energy production from decommissioned oil and gas fields. == Complementary technologies == It is clear that if carbon emissions are to be reduced, a combination of increased production from renewables and decreased consumption of energy in general, and of fossil fuels in particular, will be required. The Energy Technology Partnership provides a bridge between academic research in the energy sector and industry, and aims to translate research into economic impact. Although also low-carbon, Torness – Scotland's only nuclear power station – is due to be closed in 2028, and no new nuclear power stations will be built in Scotland owing to Scottish Government opposition. === Grid management === Demand patterns are changing with the emergence of electric vehicles and the need to decarbonise heat. The Scottish Government has investigated various scenarios for energy supply in 2050; in one, called "An Electric Future", "electrical energy storage is widely integrated across the whole system" and "the EV fleet operates as a vast distributed energy store, capable of supporting local and national energy balancing" and "better insulated buildings mean that domestic energy demand has fallen significantly." In 2007 Scottish and Southern Energy plc in conjunction with the University of Strathclyde began the implementation of a "Regional Power Zone" in the Orkney archipelago. This ground-breaking scheme (which may be the first of its kind in the world) involves "active network management" that will make better use of the existing infrastructure and allow a further 15 MW of new 'non-firm generation' output from renewables onto the network. In 2013, Orkney generated 103% of its total electricity needs from renewable sources. This figure rose to 128% in 2020, and Orkney has been hailed as an example to follow in the green energy market. In January 2009 the government announced the launch of a "Marine Spatial Plan" to map the potential of the Pentland Firth and Orkney coasts and agreed to take part in a working group examining options for an offshore grid to connect renewable energy projects in the North Sea to on-shore national grids. The potential for such a scheme has been described as including acting as a "30 GW battery for Europe's clean energy". The initiative received a Scottish Award for Quality in Planning in 2016. In August 2013 Scottish Hydro Electric Power Distribution connected a 2 MW lithium-ion battery at Kirkwall Power Station. This was the UK's first large-scale battery connected to a local electricity distribution network. There are other demand management initiatives being developed. For example, Sunamp, a company based in East Lothian, secured a £4.5 million investment in 2020 to develop its heat storage devices, which store energy that can later be used to heat water. A 50 MW/100 MWh battery is being built at Wishaw near Glasgow, and a 50 MW battery was started in 2023. Much greater interconnection to sell more electricity to England has been proposed, but this may not be viable if nodal electricity pricing is implemented in Britain. Norway has so far refused a Scotland–Norway interconnector. === Carbon sequestration === Also known as carbon capture and storage, this technology involves capturing the carbon dioxide (CO2) produced as a by-product of industrial processes and storing it by injection into oil fields. 
It is not a form of renewable energy production, but it may be a way to significantly reduce the effect of fossil fuels whilst renewables are commercialised. The technology has been successfully pioneered in Norway. No commercial-scale projects exist in Scotland as yet, although in 2020 the UK government allocated £800 million to attempt to create carbon sequestration clusters by 2030, aimed at capturing carbon dioxide emissions from heavy industry. === Hydrogen === Although hydrogen offers significant potential as an alternative to hydrocarbons as a carrier of energy, neither hydrogen itself nor the associated fuel cell technologies are sources of energy in themselves. Nevertheless, the combination of renewable technologies and hydrogen is of considerable interest to those seeking alternatives to fossil fuels. There are a number of Scottish projects involved in this research, supported by the Scottish Hydrogen & Fuel Cell Association (SHFCA). The PURE project on Unst in Shetland is a training and research centre that uses a combination of the ample supplies of wind power and fuel cells to create a wind-hydrogen system. Two 15 kW turbines are attached to a 'Hypod' fuel cell, which in turn provides power for heating systems, the creation of stored liquid hydrogen and an innovative fuel-cell driven car. The project is community-owned and part of the Unst Partnership, the community's development trust. In July 2008 the SHFCA announced plans for a "hydrogen corridor" from Aberdeen to Peterhead. The proposal involves running hydrogen-powered buses along the A90 and is supported by Aberdeenshire Council and the Royal Mail. The economics and practical application of hydrogen vehicles are being investigated by the University of Glasgow, among others. In 2015 the city of Aberdeen became the site of the UK's first hydrogen production and bus refuelling station, and the council announced the purchase of a further 10 hydrogen buses in 2020. The "Hydrogen Office" in Methil aims to demonstrate the benefits of improved energy efficiency and renewable and hydrogen energy systems. A status report on hydrogen production in Shetland, published in September 2020, stated that Shetland Islands Council (SIC) had "joined a number of organisations and projects to drive forward plans to establish hydrogen as a future energy source for the isles and beyond". For example, it was a member of the Scottish Hydrogen & Fuel Cell Association (SHFCA). The Orion project, to create an energy hub, planned to use clean electricity in the development of "new technologies such as blue and green hydrogen generation". Hydrogen production through electrolysis was well underway in early 2021 in Orkney, where clean energy sources (wind, waves, tides) were producing excess electricity that could be used to create hydrogen, which could be stored until needed. In November 2019, a spokesperson for the European Marine Energy Centre (EMEC) made this comment: "We're now looking towards the development of a hydrogen economy in Orkney". In late 2020, a plan was made to test the world's first hydrogen-fuelled ferry in Orkney. One report suggested that, "if all goes well, hydrogen ferries could be sailing between Orkney's islands within six months". By that time, a plan was underway at Kirkwall Airport to add a hydrogen combustion engine system to the heating system, in order to reduce the significant emissions created by the older technology used to heat its buildings and water. 
This was part of the plan formulated by the Scottish government for the Highlands and Islands "to become the world's first net zero aviation region by 2040". In December 2020 the Scottish government released a hydrogen policy statement with plans for incorporating blue and green hydrogen for use in heating, transportation and industry. The Scottish government also planned an investment of £100 million in the hydrogen sector as part of "the £180 million Emerging Energy Technologies Fund". Shetland Islands Council planned to obtain further specifics about the availability of funding. The government had already agreed that the production of "green" hydrogen from wind power near Sullom Voe Terminal was a valid plan. A December 2020 update stated that "the extensive terminal could also be used for direct refuelling of hydrogen-powered ships" and suggested that the fourth jetty at Sullom Voe "could be suitable for ammonia export". == Local vs national concerns == A significant feature of Scotland's renewable potential is that the resources are largely distant from the main centres of population. This is by no means coincidental. The power of wind, wave and tide on the north and west coasts, and the potential for hydro in the mountains, make for dramatic scenery but sometimes harsh living conditions. This happenstance of geography and climate has created various tensions. There is clearly a significant difference between a renewable energy production facility of modest size providing an island community with all its energy needs, and an industrial-scale power station in the same location that is designed to export power to far distant urban locations. Thus, plans for one of the world's largest onshore windfarms on the Hebridean Isle of Lewis have generated considerable debate. A related issue is the high-voltage Beauly–Denny power line, which brings electricity from renewable projects in the north and west to the cities of the south. The matter went to a public inquiry and has been described by Ian Johnston of The Scotsman as a "battle that pitches environmentalists against conservationists and giant energy companies against aristocratic landowners and clan chiefs". In January 2010 Jim Mather, the Energy Minister, announced that the project would be going ahead, notwithstanding the more than 18,000 objections received. Some 53 km of the existing 132 kV line inside the national park was taken down and not replaced. The Beauly–Denny line was energised by Christmas 2015. There is considerable support for community-scale energy projects. For example, Alex Salmond, the then First Minister of Scotland, has stated that "we can think big by delivering small" and aspired to have a "million Scottish households with access to their own or community renewable generation within ten years". The John Muir Trust has also stated that "the best renewable energy options around wild land are small-scale, sensitively sited and adjacent to the communities directly benefiting from them", although even community-owned schemes can prove controversial. A related issue is the position of Scotland within the United Kingdom. It has been alleged that UK transmission pricing structures are weighted against the development of renewables, a debate which highlights the contrast between the sparsely populated north of Scotland and the highly urbanised south and east of England. Although the ecological footprints of Scotland and England are similar, the relationship between this footprint and the biocapacities of the respective countries is not. 
Scotland's biocapacity (a measure of the biologically productive area) is 4.52 global hectares (gha) per head, some 15% less than its current ecological footprint. In other words, with a 15% reduction in consumption, the Scottish population could live within the productive capacity of the land to support them. However, the UK's ecological footprint is more than three times its biocapacity, which at only 1.6 gha per head is amongst the lowest in Europe. Thus, to achieve the same end in the UK context, consumption would have to be reduced by about 66%. The developed world's economy is very dependent on 'point-source' fossil fuels. Scotland, as a relatively sparsely populated country with significant renewable resources, is in a unique position to demonstrate how the transition to a low-carbon, widely distributed energy economy may be undertaken. A balance will need to be struck between supporting this transition and providing exports to the economies of densely populated regions in the Central Belt and elsewhere, as they seek their own solutions. The tension between local and national needs in the Scottish context may therefore also play out on the wider UK and European stage. == Promotion of renewables == Growing national concerns regarding peak oil and climate change have driven the subject of renewable energy high up the political agenda. Various public bodies and public-private partnerships have been created to develop the potential. The Forum for Renewable Energy Development in Scotland (FREDS) is a partnership between industry, academia and government aimed at enabling Scotland to capitalise on its renewable energy resource. The Scottish Renewables Forum is an important intermediary organisation for the industry, hosting the annual Green Energy Awards. Community Energy Scotland provides advice, funding and finance for renewable energy projects developed by community groups. Aberdeen Renewable Energy Group (AREG) is a public-private partnership created to identify and promote renewable energy opportunities for businesses in the northeast. In 2009 AREG formed an alliance with North Scotland Industries Group to help promote the North of Scotland as an "international renewable energy hub". The Forestry Commission is active in promoting biomass potential. The Climate Change Business Delivery Group aims to act as a way for businesses to share best practices and address the climate change challenge. Numerous universities are playing a role in supporting energy research under the Supergen programme, including fuel cell research at St Andrews, marine technologies at Edinburgh, distributed power systems at Strathclyde and biomass crops at the UHI Millennium Institute's Orkney College. In 2010 the Scotcampus student Freshers' Festivals held in Edinburgh and Glasgow were powered entirely by renewable energy in a bid to raise awareness among young people. In July 2009 Friends of the Earth, the Royal Society for the Protection of Birds, World Development Movement and World Wildlife Fund published a study called "The Power of Scotland Renewed." This study argued that the country could meet all its electricity needs by 2030 without the requirement for either nuclear or fossil fuel powered installations. In 2013, a YouGov energy survey concluded that: New YouGov research for Scottish Renewables shows Scots are twice as likely to favour wind power over nuclear or shale gas. 
More than six in ten (62%) people in Scotland say they would support large-scale wind projects in their local area, more than double the number who said they would be generally for shale gas (24%) and almost twice as much as nuclear (32%). Hydro power is the most popular energy source for large-scale projects in Scotland, with an overwhelming majority (80%) being in favour. The Scottish Government's energy plans have called for 100% of electricity consumption to be generated from renewable sources, and for half of total energy consumption (including heat and transportation) to be met from renewables by 2030. === Political landscape === Energy policy in Scotland is a "reserved" issue, i.e. responsibility for it lies with the UK government. Former First Minister of Scotland and SNP leader Nicola Sturgeon has accused the UK government of having a "complete lack of vision and ambition over the energy technologies of the future" and compared this with her view that the Scottish Government is "already a world leader" in tackling the issue. During the 2014 referendum on Scottish independence, Scotland's energy resources were a significant theme, and would likely be so again if there were another independence referendum. The Scottish Green Party are strongly supportive of "low carbon energy for all". Scottish Labour (which is a section of the UK Labour Party) also supports what they call a "Green Industrial Revolution". The Scottish Conservatives (a branch of the UK Conservative Party) have a policy of aiming to "ensure 50 per cent of Scotland's energy comes from renewables by 2030". They are also supportive of additional nuclear energy production, which the SNP government opposes. The Scottish Liberal Democrats have a "commitment to 100% of Scottish electricity to be from renewable sources." The 2021 United Nations Climate Change Conference (COP26) was held in Glasgow from 1 to 12 November 2021 under the presidency of the United Kingdom. == See also == List of power stations in Scotland Global == Notes and references == === Notes === === Citations === == External links == Scottish Renewables Forum European Marine Energy Centre – EMEC PURE Scottish Renewables News
Wikipedia/Renewable_energy_in_Scotland
In Karl Marx's critique of political economy and subsequent Marxian analyses, the capitalist mode of production (German: Produktionsweise) refers to the systems of organizing production and distribution within capitalist societies. Private money-making in various forms (renting, banking, merchant trade, production for profit and so on) preceded the development of the capitalist mode of production as such. The capitalist mode of production proper, based on wage-labour and private ownership of the means of production and on industrial technology, began to grow rapidly in Western Europe from the Industrial Revolution, later extending to most of the world. The capitalist mode of production is characterized by private ownership of the means of production, extraction of surplus value by the owning class for the purpose of capital accumulation, wage-based labour and, at least as far as commodities are concerned, reliance on markets. == Synopsis == A "mode of production" (German: Produktionsweise) means simply "the distinctive way of producing", which could be defined in terms of how it is socially organized and what kinds of technologies and tools are used. Under the capitalist mode of production: Both the inputs and outputs of production are privately owned, priced goods and services purchased in the market. Production is carried out for exchange and circulation in the market, aiming to obtain a net profit income from it. The owners of the means of production (capitalists) constitute the dominant class (the bourgeoisie), which derives its income from the extraction of surplus value. Surplus value is the term in Marxian theory for the workers' unpaid work. A defining feature of capitalism is the dependence of a large segment of the population on wage labour: the working class, a segment of the proletariat, does not own means of production (a type of capital) and is compelled to sell its labour power to the owners of the means of production in order to produce, and thus to earn an income to provide itself and its families with the necessities of life. The capitalist mode of production may exist within societies with differing political systems (e.g. liberal democracy, social democracy, fascism, Communist state and Czarism) and alongside different social structures such as tribalism, the caste system, an agrarian-based peasant society, urban industrial society and post-industrialism. Although capitalism has existed in the form of merchant activity, banking, renting land and small-scale manufactures in previous stages of history, it was usually a relatively minor activity and secondary to the dominant forms of social organization and production, with the prevailing property system keeping commerce within clear limits. == Distinguishing characteristics == Capitalist society is epitomized by the so-called circuit of commodity production, M–C–M', and by the renting of money for that purpose, where the aggregate of market actors determines M, the money price of the input labor and commodities, and M', the price realized for C, the produced market commodity. It is centered on the process M → M', "making money", and the exchange of value that occurs at that point. M' > M is the condition of rationality in the capitalist system and a necessary condition for the next cycle of accumulation/production. For this reason, capitalism is "production for exchange" driven by the desire for personal accumulation of money receipts in such exchanges, mediated by free markets. 
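The circuit described above can be restated compactly. The following LaTeX fragment is an illustrative gloss on the text: the symbol ΔM for the surplus is introduced here only for clarity and is not part of the source's notation beyond the stated condition M' > M.

```latex
% General circuit of commodity production described above:
% money M buys commodities C (labour power and means of production),
% which are turned into a product sold for M'.
\[
  M \;\longrightarrow\; C \;\longrightarrow\; M', \qquad M' = M + \Delta M .
\]
% The condition of capitalist rationality stated in the text is that a
% surplus is realised, i.e.
\[
  M' > M \quad\Longleftrightarrow\quad \Delta M > 0 ,
\]
% where \Delta M corresponds to the surplus value appropriated, and the
% cycle is repeated with the enlarged sum M' as the new starting capital.
```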
The markets themselves are driven by the needs and wants of consumers and those of society as a whole in the form of the bourgeois state. These wants and needs would (in the socialist or communist society envisioned by Marx, Engels and others) be the driving force; this would be "production for use". Contemporary mainstream (bourgeois) economics, particularly that associated with the right, holds that an "invisible hand", through little more than the freedom of the market, is able to match social production to these needs and desires. "Capitalism" as a money-making activity has existed since the beginnings of civilization in the shape of merchants and money-lenders who acted as intermediaries between consumers and producers engaged in simple commodity production (hence the reference to "merchant capitalism"). What is specific about the "capitalist mode of production" is that most of the inputs and outputs of production are supplied through the market (i.e. they are commodities) and essentially all production is in this mode. For example, under flourishing feudalism most or all of the factors of production, including labor, are owned by the feudal ruling class outright, and the products may also be consumed without a market of any kind; it is production for use within the feudal social unit and for limited trade. This has the important consequence that the whole organization of the production process is reshaped and reorganized to conform with economic rationality as bounded by capitalism, which is expressed in price relationships between inputs and outputs (wages, non-labor factor costs, sales, profits) rather than by the larger rational context faced by society overall. That is, the whole process is organized and reshaped in order to conform to "commercial logic". Another way of saying this is that capital accumulation defines economic rationality in capitalist production. In the flourishing period of capitalism, these are not operating at cross purposes and thus capitalism acts as a progressive force (e.g. against feudalism). In the final stages, capitalism as a mode of production achieves complete domination on a planetary basis and has nothing to overcome but itself: the final negation of the negation posited by orthodox Marxism (final for capitalism viewed as a Hegelian process, not for historical development per se). In this context, Marx refers to a transition from the "formal subsumption", the formal turning of workers into wage-laborers, to the "real subsumption" of production under the power of capital. In what he calls the "specifically capitalist mode of production", both the technology worked with and the social organization of labour have been completely refashioned and reshaped in a commercial (profit- and market-oriented) way: the "old ways of producing" (for example, crafts and cottage industries) had been completely displaced by the then new industrialism. Some historians, such as Jairus Banaji and Nicholas Vrousalis, have argued that capitalist relations of production predate the capitalist mode of production. === Summary of basic distinctions === In general, capitalism as an economic system and mode of production can be summarized by the following: Capital accumulation: production for profit and accumulation as the implicit purpose of all or most production, with the constriction or elimination of production formerly carried out on a common social or private household basis. Commodity production: production for exchange on a market, to maximize exchange-value instead of use-value. 
Private ownership of the means of production: ownership of the means of production by a class of capital owners, either individually, collectively (see corporation) or through a state that serves the interests of the capitalist class (see state capitalism). Primacy of wage labor: the near universality of wage labor, whether so called or not, with coerced work for the masses in excess of what they would need to sustain themselves, and a complete saturation of bourgeois values at all levels of society, following from the reshaping and reorganization described above. == Origins == Marx argued that capital existed incipiently on a small scale for centuries in the form of merchant, renting and lending activities and occasionally also as small-scale industry with some wage labour (Marx was also well aware that wage labour existed for centuries on a modest scale before the advent of capitalist industry). Simple commodity exchange and consequently simple commodity production, which form the initial basis for the growth of capital from trade, have a very long history. The "capitalistic era", according to Marx, dates from the 16th century, i.e. it began with merchant capitalism and relatively small urban workshops. For the capitalist mode of production to emerge as a distinctive mode of production dominating the whole production process of society, many different social, economic, cultural, technical and legal-political conditions had to come together. For most of human history, these did not come together. Capital existed and commercial trade existed, but they did not lead to industrialisation and large-scale capitalist industry. That required a whole series of new conditions, namely specific technologies of mass production, the ability to independently and privately own and trade in means of production, a class of workers compelled to sell their labor power for a living, a legal framework promoting commerce, a physical infrastructure making the circulation of goods on a large scale possible, security for private accumulation and so on. In many Third World countries, many of these conditions do not exist even today, even though there is plenty of capital and labour available; the obstacles to the development of capitalist markets are less a technical matter and more a social, cultural and political problem. A society, region or nation is "capitalist" if the predominant source of the incomes and products being distributed is capitalist activity; even so, this does not necessarily mean that the capitalist mode of production is dominant in that society. == Defining structural criteria == Marx never provided a complete definition of the capitalist mode of production as a short summary, although in his manuscripts he sometimes attempted one. In a sense, it is Marx's three-volume work Capital (1867–1894; sometimes known by its German title, Das Kapital) as a whole that provides his "definition" of the capitalist mode of production. Nevertheless, it is possible to summarise the essential defining characteristics of the capitalist mode of production as follows: The means of production (or capital goods) and the means of consumption (or consumer goods) are mainly produced for market sale; output is produced with the intention of sale in an open market; and only through sale of output can the owner of capital claim part of the surplus-product of human labour and realize profits. Equally, the inputs of production are supplied through the market as commodities. 
The prices of both inputs and outputs are mainly governed by the market laws of supply and demand (and ultimately by the law of value). In short, a capitalist must use money to fuel both the means of production and labor in order to make commodities. These commodities are then sold to the market for a profit. The profit once again becomes part of a larger amount of capital which the capitalist reinvests to make more commodities and ultimately more and more capital. Private ownership of the means of production ("private enterprise") as effective private control and/or legally enforced ownership, with the consequence that investment and management decisions are made by private owners of capital who act autonomously from each other and—because of business secrecy and the constraints of competition—do not co-ordinate their activities according to collective, conscious planning. Enterprises are able to set their own output prices within the framework of the forces of supply and demand manifested through the market and the development of production technology is guided by profitability criteria. The corollary of that is wage labour ("employment") by the direct producers, who are compelled to sell their labour power because they lack access to alternative means of subsistence (other than being self-employed or employers of labour, if only they could acquire sufficient funds) and can obtain means of consumption only through market transactions. These wage earners are mostly "free" in a double sense: they are “freed” from ownership of productive assets and they are free to choose their employer. Being carried out for market on the basis of a proliferation of fragmented decision-making processes by owners and managers of private capital, social production is mediated by competition for asset-ownership, political or economic influence, costs, sales, prices and profits. Competition occurs between owners of capital for profits, assets and markets; between owners of capital and workers over wages and conditions; and between workers themselves over employment opportunities and civil rights. The overall aim of capitalist production under competitive pressure is (a) to maximise net profit income (or realise a net superprofit) as much as possible through cutting production costs, increasing sales and monopolisation of markets and supply; (b) capital accumulation, to acquire productive and non-productive assets; and (c) to privatize both the supply of goods and services and their consumption. The larger portion of the surplus product of labor must usually be reinvested in production since output growth and accumulation of capital mutually depend on each other. Out of preceding characteristics of the capitalist mode of production, the basic class structure of this mode of production society emerges: a class of owners and managers of private capital assets in industries and on the land, a class of wage and salary earners, a permanent reserve army of labour consisting of unemployed people and various intermediate classes such as the self-employed (small business and farmers) and the “new middle classes” (educated or skilled professionals on higher salaries). The finance of the capitalist state is heavily dependent on levying taxes from the population and on credit—that is, the capitalist state normally lacks any autonomous economic basis (such as state-owned industries or landholdings) that would guarantee sufficient income to sustain state activities. 
The capitalist state defines a legal framework for commerce, civil society and politics, which specifies public and private rights and duties as well as legitimate property relations. Capitalist development, occurring on private initiative in a socially unco-ordinated and unplanned way, features periodic crises of over-production (or excess capacity). This means that a critical fraction of output cannot be sold at all, or cannot be sold at prices realising the previously ruling rate of profit. The other side of over-production is the over-accumulation of productive capital: more capital is invested in production than can obtain a normal profit. The consequence is a recession (a reduced economic growth rate) or, in severe cases, a depression (negative real growth, i.e. an absolute decline in output). As a corollary, mass unemployment occurs. In the history of capitalist development since 1820, there have been more than 20 such crises—nowadays the under-utilisation of installed productive capacity is a permanent characteristic of capitalist production (average capacity utilisation rates normally range from about 60% to 85%). In examining particular manifestations of the capitalist mode of production in particular regions and epochs, it is possible to find exceptions to these main defining criteria, but the exceptions prove the rule in the sense that over time the exceptional circumstances tend to disappear. == State capitalist interpretation == As mentioned, Marx never explicitly summarised his definition of capitalism, beyond some suggestive comments in manuscripts which he did not publish himself. This has led to controversies among Marxists about how to evaluate the "capitalist" nature of society in particular countries. Supporters of theories of state capitalism such as the International Socialists reject the definition of the capitalist mode of production given above. In their view, claimed to be more revolutionary (in that true liberation from capitalism must be the self-emancipation of the working class—"socialism from below"), what really defines the capitalist mode of production is: Means of production which dominate the direct producers as an alien power. Generalized commodity production. The existence of a wage-earning working class which does not hold or have power. The existence of an elite or ruling class which controls the country, exploiting the working population in the technical Marxist sense. This idea is based on passages from Marx, where Marx emphasized that capital cannot exist except within a power-relationship between social classes which governs the extraction of surplus-labour. == Heterodox views and polemics == Orthodox Marxist debate after 1917 has often been in Russian, other East European languages, Vietnamese, Korean or Chinese, and dissidents seeking to analyze their own country independently were typically silenced in one way or another by the regime; therefore the political debate has been mainly from a Western point of view and based on secondary sources, rather than being based directly on the experiences of people living in "actually existing socialist countries". That debate has typically counterposed a socialist ideal to a poorly understood reality, i.e. using analysis which, due to such party stultification and the shortcomings of the various parties, fails to apply the full rigor of the dialectical method to a well-informed understanding of actual conditions in situ and falls back on trite party-approved formulae.
In turn, this has led to the accusation that Marxists cannot satisfactorily specify what capitalism and socialism really are, nor how to get from one to the other—quite apart from failing to explain satisfactorily why socialist revolutions failed to produce the desirable kind of socialism. Behind this problem, it is argued, lie the following: A kind of historicism according to which Marxists have a privileged insight into the "march of history"—the doctrine is thought to provide the truth, in advance of real research and experience. Evidence contrary to the doctrine is rejected or overlooked. A uni-linear view of history, according to which feudalism leads to capitalism and capitalism to socialism. An attempt to fit the histories of different societies into this schema of history on the basis that if they are not socialist, they must be capitalist (or vice versa), or if they are neither, that they must be in transition from one to the other. None of these stratagems, it is argued, is either warranted by the facts or scientifically sound, and the result is that many socialists have abandoned the rigid constraints of Marxist orthodoxy in order to analyse capitalist and non-capitalist societies in a new way. From an orthodox Marxist perspective, the former is simple ignorance and/or purposeful obfuscation of works such as Jean-Paul Sartre's Critique of Dialectical Reason and a broader literature which does in fact supply such specifications. The latter are partly superficial complaints which can easily be refuted, as they are diametrically opposed to well-known statements by Marx, Lenin, Trotsky and others; partly pettifogging and redundant restatement of the same thing; and partly true observations of inferior and simplistic presentations of Marxist thought (by those espousing some brand of Marxism). Neither historical nor dialectical materialism asserts or implies a "uni-linear" view of human development, although Marxism does claim a general and indeed accelerating secular trend of advancement, driven in the modern period by capitalism. Similarly, Marxists, especially in the period after 1917, have on the contrary been especially mindful of so-called unequal and uneven development and its importance in the struggle to achieve socialism. Finally, in the wake of the disasters of socialism in the previous century, most modern Marxists are at great pains to stipulate that only the independently acting working class can determine the nature of the society it creates for itself, so the call for a prescriptive description of exactly what that society would be like and how it is to emerge from the existing class-ridden one, other than by the conscious struggle of the masses, is an unwitting expression of precisely the problem that is supposed to be addressed (the imposition of social structure by elites). == See also == == Notes == == Further reading == Karl Marx. Grundrisse. Jairus Banaji. Theory as History. Nicholas Vrousalis. "Capital without Wage-Labour: Marx's Modes of Subsumption Revisited". Economics and Philosophy, vol. 34, no. 3, 2018. Alex Callinicos. "Wage Labour and State Capitalism – A Reply to Peter Binns and Mike Haynes". International Socialism, second series, no. 12, Spring 1979. Erich Farl. "The Genealogy of State Capitalism". International (London: IMG), vol. 2, no. 1, 1973. Anwar Shaikh. "Capital as a Social Relation" (New Palgrave article). Marcel van der Linden. Western Marxism and the Soviet Union. New York: Brill Publishers, 2007. Fernand Braudel. Civilization and Capitalism.
Barbrook, Richard (2006). The Class of the New (paperback ed.). London: OpenMute. ISBN 978-0-9550664-7-4. Archived from the original on 2018-08-01. Retrieved 2018-11-19. == External links == The Marxist System. Charles Sackrey and Geoff Schneider.
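The circuit described in the "Defining structural criteria" section above, in which money is advanced to buy means of production and labour power, commodities are produced and sold, and a larger sum of money is returned and reinvested, is conventionally compressed into Marx's general formula for capital. The rendering below is a standard textbook formalisation supplied for illustration rather than a quotation from this article or its sources:

\[ M \rightarrow C \rightarrow M', \qquad M' = M + \Delta M \]

Here M is the money capital advanced, C the commodities bought and produced with it, M' the money realised on sale, and \Delta M the surplus value whose repeated reinvestment constitutes capital accumulation.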
Wikipedia/Capitalist_mode_of_production_(Marxist_theory)
The demography of England has since 1801 been measured by the decennial national census, and is marked by centuries of population growth and urbanization. Due to the lack of authoritative contemporary sources, estimates of the population of England for dates prior to the first census in 1801 vary considerably. The population of England at the 2021 census was about 56,489,800. == Population == The population of England in 2021 was estimated to be 56,489,800; the 2021 census is the most recent. In the previous census, in 2011, the population was 53,012,456. Data for the 2021 census: Female: 28,833,712 Male: 27,656,336 Total population: 56,489,800 Total Fertility Rate: 1.61 (2021) == Historical population == == Vital statistics == This is UK-wide information. (c) = Census results. In 2023, the percentage of live births where either one or both parents were born outside of the UK was 38.2 per cent. 32.7 per cent of all live births in England were to mothers born outside of the UK (9.0% born in the EU, 23.7% born outside of the EU). === Current vital statistics === == Historical per cent distribution of the total population by age == == Country of birth == The countries of birth given by respondents in the corresponding UK censuses were as follows: Below are the largest foreign-born groups in England according to ONS estimates. == Age == The data below is based on the 2011 census. In 2001, the mean age of England's population was 38.60, and the median age was 37.00. In 2022, the median age was 40.5. Population pyramids of the regions of England == Ethnicity == Notes for table above === Population distribution === Population distribution of ethnic groups in 2011 Proportion of births in regions of each broad multi-ethnic group === Ethnicity of school pupils === The ethnicity of school pupils in England has been changing since the figures started to be collected in 2002; White British pupils have proportionally declined compared to other groups, which have risen. Ethnicity of school pupils in the school year of 2021/2022 == Languages == The most common main languages spoken in England according to the 2011 census are shown below. == Religion == Respondents to the 2001, 2011 and 2021 censuses gave their religions as follows: == See also == Demographics of the United Kingdom Demographics of Scotland Demographics of Wales Demographics of Northern Ireland Demographics of London Demographics of Birmingham Demographics of Greater Manchester United Kingdom Census 2011 National Statistics Socio-economic Classification Census 2001 Ethnic Codes List of English districts by population List of urban areas in England by population List of towns and cities in England by historical population List of major settlements by population in English counties == Notes == == References == == External links == National Statistics Populstat population figure site – main source for 1801–1991 Genealogical documents
Wikipedia/Demographics_of_England
Industrial warfare is a period in the history of warfare ranging roughly from the early 19th century and the start of the Industrial Revolution to the beginning of the Atomic Age, which saw the rise of nation-states, capable of creating and equipping large armies, navies, and air forces, through the process of industrialization. The era featured mass-conscripted armies, rapid transportation (first on railroads, then by sea and air), telegraph and wireless communications, and the concept of total war. In terms of technology, this era saw the rise of rifled breech-loading infantry weapons capable of high rates of fire, high-velocity breech-loading artillery, chemical weapons, armoured warfare, metal warships, submarines, and aircraft. == Total war == One of the main features of industrial warfare is the concept of "total war". The term was coined during World War I by Erich Ludendorff (and again in his 1935 book Total War), which called for the complete mobilization and subordination of all resources, including policy and social systems, to the German war effort. It has also come to mean waging warfare with absolute ruthlessness, and its most identifiable legacy today has been the reintroduction of civilians and civilian infrastructure as targets in destroying the enemy's ability to engage in war. There are several reasons for the rise of total warfare in the 19th century. The main one is industrialization. As countries' capital and natural resources grew, it became clear that some forms of warfare demanded more resources than others. Consequently, the greater cost of warfare became evident. An industrialized nation could distinguish and then choose the intensity of warfare that it wished to engage in. Additionally, warfare was becoming more mechanized and required greater infrastructure. Combatants could no longer live off the land, but required an extensive support network of people behind the lines to keep them fed and armed. This required the mobilization of the home front. Modern concepts like propaganda were first used to boost production and maintain morale, while rationing took place to provide more war material. The earliest modern example of total war was the American Civil War. Union generals Ulysses S. Grant and William Tecumseh Sherman were convinced that, if the North was to be victorious, the Confederacy's strategic, economic, and psychological ability to wage war had to be definitively crushed. They believed that to break the backbone of the South, the North had to employ scorched earth tactics, or as Sherman called it, "Hard War". Sherman's advance through Georgia and the Carolinas was characterized by the widespread destruction of civilian supplies and infrastructure. In contrast to later conflicts, the damage done by Sherman was almost entirely limited to property destruction. In Georgia alone, Sherman claimed he and his men had caused $100,000,000 in damages. == Conscription == Conscription is the compulsory enrollment of civilians into military service. Conscription allowed the French Republic to form La Grande Armée, what Napoleon Bonaparte called "the nation in arms", which successfully battled smaller, professional European armies. Conscription, particularly when the conscripts are being sent to foreign wars that do not directly affect the security of the nation, has historically been highly politically contentious in democracies. 
For instance, during World War I, bitter political disputes broke out in Canada (see Conscription Crisis of 1917), Newfoundland, Australia and New Zealand (see Compulsory Military Training) over conscription. Canada also had a political dispute over conscription during World War II (see Conscription Crisis of 1944). Both South Africa and Australia put limits on where conscripts could fight in WWII. Similarly, mass protests against conscription to fight the Vietnam War occurred in several countries in the late 1960s. In developed nations, the increasing emphasis on technological firepower and better-trained fighting forces, the sheer unlikelihood of a conventional military assault on most developed nations, as well as memories of widespread controversies over the Vietnam War, make mass conscription less likely, but still possible, in the future. Russia and many smaller nations such as Switzerland retain mainly conscript armies. == Transportation == === Land === Prior to the invention of motorized transport, combatants were transported by wagon, on horseback and by marching. With the advent of locomotives, large groups of combatants, supplies, and equipment could be transported faster and in larger numbers. To counter this, an opposing force would destroy rail lines to hinder their enemies' movements. General Sherman's men during the American Civil War would destroy tracks, heat the rails, and wrap them around trees. The mass transportation of combatants was further revolutionized with the advent of the internal combustion engine and the automobile. Combined with the widespread use of the machine gun, these developments finally supplanted the horse, after millennia of use, in its wartime role. During both WWI and WWII, trucks were used to carry combatants and materiel, while cars and jeeps were used to scout enemy positions. The mechanization of infantry occurred during WWII. The tank, a product of World War I independently invented by the British and French to break through trenches while withstanding machine gun fire, came into its own despite being discounted by many. Tanks evolved from thin-skinned, lumbering vehicles into fast, powerful war machines of various types that dominated the battlefield and allowed the Germans to conquer most of Europe. As a result of the tank's evolution, a number of armored transport vehicles appeared, such as armoured personnel carriers and amphibious vehicles. After the war ended, armored transports continued to evolve. The armored car and train declined in use, largely becoming relegated to military and civilian use as transportation for VIPs. Infantry fighting vehicles rose to prominence with the creation of the Soviet BMP-1. IFVs are a more combat-capable version of the APC, with heavier armaments (such as autocannons), while still retaining the ability to transport combatants into and out of battles. === Sea === Sealift is a military logistics term referring to the use of cargo ships for the deployment of military assets, such as weaponry, military personnel, and materiel supplies. It complements other means of transport, such as strategic airlifters, in order to enhance a state's ability to project power. A state's sealift capabilities may include civilian-operated ships that normally operate by contract, but which can be chartered or commandeered during times of military necessity to supplement government-owned naval fleets.
During WWI, the United States bought, borrowed or commandeered vessels of various types, ranging from pleasure craft to ocean liners, to transport the American Expeditionary Force to Europe. Many of these ships were scrapped, sold or returned to their owners after the war. === Air === There are two different kinds of airlifts in warfare: the strategic airlift and the tactical airlift. A strategic airlift is the transporting of weapons, supplies and personnel over long distances (for example, from a base in one country to a base in another country) using large cargo aircraft. This contrasts with tactical airlifts, which involve transporting the same items within a theater of operations. This usually involves cargo planes with shorter ranges and slower speeds, but higher maneuverability. == Communications == Cryptography Homing pigeon/War pigeon Joint Army/Navy Phonetic Alphabet Message precedence Semaphore (communication) Signal Corps Smoke signal Telegraphy === Equipment === Aldis lamp International maritime signal flags == Land warfare == Land warfare, as the name implies, takes place on land. The most common type of warfare, it can encompass several modes and locales, including urban, arctic, and mountain warfare. The early part of the 19th century, from 1815 to 1848, saw a long period of peace in Europe, accompanied by extraordinary industrial expansion. The industrial age brought about various technological advancements, each with its own implications. Land warfare moved from the visual-range and partly person-to-person combat of the previous era to indiscriminate and impersonal, "beyond visual range" warfare. The Crimean War (1853–1856) saw the introduction of trench warfare, long-range artillery, railroads, the telegraph, and the rifle. The mechanized mass-destruction of enemy combatants grew ever more deadly. In WWI (1914–1918) machine-guns, barbed wire, chemical weapons, and land-mines entered the battlefield. The deadly stalemated trench-warfare stage was finally passed with the advent of the modern armored tank late in WWI. One major trend involved the transition away from massed infantry fire and human waves to more refined tactics. This became possible with the superseding of earlier weapons like the highly inaccurate musket. === Technological advances === Rifling refers to the act of adding spiral grooves to the inside of the barrel of a firearm. The grooves cause a projectile to spin as it travels down the barrel, improving range and accuracy. Once rifling became easier and more practical, a new type of firearm was introduced: the rifle. It gave combatants the ability to specifically target an enemy combatant, rather than have large numbers of combatants fire in a general direction. It effectively broke up groups of combatants into smaller, more maneuverable units. Artillery are large guns designed to fire large projectiles a great distance. Early artillery pieces were large and cumbersome with slow rates of fire. This reduced their use to sieges, by both defenders and attackers. With the advent of the industrial age and various technological advancements, lighter, yet powerful and accurate artillery pieces were produced. This gave rise to field artillery, which was used on a tactical level to support troops. Machine guns are fully automatic guns. In this era of warfare they only existed as mounted support weapons, as hand-held automatic firearms had not yet been developed.
Early machine guns as invented by Richard Gatling, were hand cranked but evolved into truly automatic machine guns by Maxim at the end of the era. Machine guns were valued for their ability to smash infantry formations, especially attacking enemy formations when they were dense. This, along with effective field artillery, changed tactics drastically. === Static defense === Static defenses evolved from the use of permanent fortifications that were direct descendants of medieval castles. As artillery improved in destructive power and penetrative ability, more modern fortifications were developed, using first thicker layers of stone, then concrete and steel. After naval artillery developed the turret – a moving cannon platform – land fortifications started to use this method as well. Between the World Wars, France built an "impregnable" underground steel and concrete fortification that ran the length of the German-French border. This Maginot Line failed to stop German tanks in 1940: they bypassed the fortifications by invading through neighboring Belgium. === Temporary fortifications === As artillery and rifles allowed the killing of enemy personnel at a longer effective range, soldiers started to dig into temporary fortifications. These included massive trenches as used in WWI, and individual soldier-sized "fox holes" which became more common in WWII. === Maneuver warfare === Maneuver had existed throughout military history – from soldiers marching on the field to using horses in cavalry formations. It was not until the advent of mechanized transport over unprepared terrain, such as fields and deserts, using tanks and armored vehicles, that "maneuver warfare" became feasible. First used by the German army in Poland and France in WWII, Blitzkrieg or "lightning war" saw whole armies moved rapidly on tracked and armored fighting vehicles. During the war airborne movement was used, with soldiers dropped to the battlefield by parachute by both the Germans and the Allies. After WWII, developments in helicopters brought a more practical way to transport troops by air. Armoured warfare Blitzkrieg Deep operations == Naval warfare == === Ironclads and Dreadnoughts === The period after the Napoleonic Wars was one of intensive experimentation with new technology; steam power for ships appeared in the 1810s, improved metallurgy and machining technique produced larger and deadlier guns, and the development of explosive shells, capable of demolishing a wooden ship at a single blow, in turn required the addition of iron armor, which led to ironclads. The famous battle of the CSS Virginia and USS Monitor in the American Civil War was the duel of ironclads that symbolized the changing times. Although the battle was inconclusive, nations around the world subsequently raced to convert their fleets to iron, as ironclads had shown themselves to be clearly superior to wooden ships in their ability to withstand enemy fire. In the late 19th century, naval warfare was revolutionized by Alfred Thayer Mahan's book The Influence of Sea Power upon History. Mahan argued that in the Anglo-French wars of the 18th and 19th centuries, domination of the sea was the deciding factor in the outcome, and therefore control of seaborne commerce was critical to military victory. Mahan argued that the best way to achieve naval domination was through large fleets of concentrated capital ships, as opposed to commerce raiders. 
His books were closely studied in all the Great Powers, influencing their naval arms race in the years prior to WWI. As the century came to a close, the familiar modern battleship began to emerge: a steel-armored ship, entirely dependent on steam propulsion, and sporting a number of large shell guns mounted in turrets arranged along the centerline of the main deck. The ultimate design was reached in 1906 with HMS Dreadnought, which entirely dispensed with smaller guns, her main guns being sufficient to sink any existing ship of the time. The Russo-Japanese War, and particularly the Battle of Tsushima in 1905, was the first test of the new concepts, resulting in a stunning Japanese victory and the destruction of dozens of Russian ships. World War I pitted the old Royal Navy against the new navy of Imperial Germany, culminating in the 1916 Battle of Jutland. Following the war, many nations agreed to limit the size of their fleets in the Washington Naval Treaty and scrapped many of their battleships and cruisers. Growing tensions of the 1930s restarted the building programs, with even larger ships than before: the Japanese battleship Yamato, launched in 1941, displaced 72,000 tons and mounted 18-inch (46 cm) guns. This marked the climax of "big gun" warfare, as aircraft would gradually play a larger role at sea. By the 1960s, battleships had all but vanished from the fleets of the world. === Aircraft carriers === Between the world wars, the first aircraft carriers appeared, initially as a way to circumvent the tonnage limits of the Washington Naval Treaty (many of the first carriers were converted battlecruisers). Though several ships had previously been designed to launch aircraft, the first true "flat-top" carrier was HMS Argus, launched in December 1917. By the start of WWII, aircraft carriers typically carried three types of aircraft: torpedo bombers, which could also be used for conventional horizontal bombing and reconnaissance; dive bombers, also used for reconnaissance; and fighters for fleet defence and bomber escort duties. Because of the restricted space on aircraft carriers, these aircraft were almost always small, single-engined warplanes. The first true demonstration of naval air power was the victory of the Royal Navy at the Battle of Taranto in 1940, which set the stage for Japan's much larger and more famous attack on Pearl Harbor the following year. Two days after Pearl Harbor, the sinking of HMS Prince of Wales and HMS Repulse marked the beginning of the end for the battleship era. Following WWII, aircraft carriers remained key to navies throughout the latter 20th century, moving in the 1950s to jets launched from supercarriers, behemoths which could displace as much as 100,000 tons. === Submarines === Just as important was the development of submarines to travel underneath the sea, at first for short dives, then later to be able to spend weeks or months underwater powered by a nuclear reactor. The first successful submarine attack in wartime was in 1864 by the Confederate submarine H.L. Hunley, which sank the sloop-of-war USS Housatonic. In both World Wars, submarines primarily exerted their power by sinking merchant ships using torpedoes, in addition to attacks on warships. All nations practiced unrestricted submarine warfare in which submarines sank merchant ships without warning, but the only successful campaign during this period was America's submarine war against Japan during the Pacific War.
In the 1950s the Cold War inspired the development of ballistic missile submarines, each one loaded with dozens of nuclear-armed missiles and with orders to launch them from sea should the other nation attack. == Aerial warfare == The first use of airplanes in war was the Italo-Turkish War of 1911, when the Italians carried out several reconnaissance and bombing missions. During WWI both sides made use of balloons and airplanes for reconnaissance and directing artillery fire. To prevent enemy reconnaissance, some airplane pilots began attacking other airplanes and balloons, first with small arms carried in the cockpit, and later with machine guns mounted on the aircraft. Both sides also made use of aircraft for bombing, strafing and dropping of propaganda leaflets. The German air force carried out the first terror bombing raids, using Zeppelins to drop bombs on Britain. By the end of the war airplanes had become specialised into bombers, fighters, and surveillance aircraft. Most of these airplanes were biplanes with wooden frames, canvas skins, wire rigging and air-cooled engines. Between 1918 and 1939, aircraft technology developed very rapidly. By 1939 military biplanes were in the process of being replaced with metal-framed monoplanes, often with stressed skins and liquid-cooled engines. Top speeds had tripled; altitudes doubled (and oxygen masks became commonplace); ranges and payloads of bombers increased enormously. Some theorists, most famously Hugh Trenchard and Giulio Douhet, believed that aircraft would become the dominant military arm in the future, and argued that future wars would be won entirely by the destruction of the enemy's military and industrial capability from the air. This concept was called strategic bombing. Douhet also argued in The Command of the Air (1921) that future military leaders could avoid falling into bloody World War I-style trench stalemates by using aviation to strike past the enemy's forces directly at their vulnerable civilian population, which Douhet believed would cause these populations to rise up in revolt to stop the bombing. Others, such as Billy Mitchell, saw the potential of air power to neutralize the striking power of naval surface fleets. Mitchell himself finally proved the vulnerability of capital ships to aircraft in 1921, when he commanded a squadron of bombers that sank the ex-German battleship SMS Ostfriesland with aerial bombs. (See Industrial warfare#Naval warfare) During WWII, there was a debate between strategic bombing and tactical bombing. Strategic bombing focused on targets such as factories, railroads, oil refineries, and heavily populated areas such as cities and towns, and required heavy four-engine bombers carrying large payloads of ordnance, or a single heavy four-engine bomber carrying a nuclear weapon, flying deep into enemy territory. Tactical bombing focused on concentrations of combatants, command and control centers, airfields, and ammunition dumps, and required attack aircraft, dive bombers, and fighter bombers that could fly low over the battlefield. In the early years of WWII, the German Luftwaffe focused on tactical bombing, using large numbers of Ju 87 Stukas as "flying artillery" for land offensives. Artillery was slow and required time to set up a firing position, whereas aircraft were better able to keep up with the fast advances of the German panzer columns. Close air support greatly assisted in the successes of the German Army in the Battle of France.
It was also important in amphibious warfare, where aircraft carriers could provide support for soldiers landing on the beaches. Strategic bombing, by contrast, was unlike anything the world had seen before, or has seen since. In 1940, the Germans attempted to force Britain to surrender through attacks on its airfields and factories, and then on its cities in The Blitz, in what became the Battle of Britain, the first major battle whose outcome was determined primarily in the air. The campaigns conducted in Europe and Asia could involve thousands of aircraft dropping tens of thousands of tons of munitions over a single city. Military aviation in the post-war years was dominated by the needs of the Cold War. The postwar years saw a rapid conversion to jet power, which resulted in enormous increases in speeds and altitudes of aircraft. Until the advent of the intercontinental ballistic missile, major powers relied on high-altitude bombers to deliver their newly developed nuclear deterrent. Each country strove to develop the technology of bombers and the high-altitude fighters that could intercept them. The concept of air superiority began to play a heavy role in aircraft designs for both the United States and the Soviet Union. == Post-World War II == With the invention of nuclear weapons, the concept of full-scale war carries the prospect of global annihilation, and as such conflicts since WWII have been "low intensity" conflicts, typically in the form of proxy wars fought within local regional confines, using what are now referred to as "conventional weapons", typically combined with the use of asymmetric warfare tactics and applied use of intelligence. === Nuclear warfare === Nuclear weapons were first used during the last months of WWII, with the dropping of atomic bombs on Hiroshima and Nagasaki. This was the only use of nuclear weapons in combat. For a decade after World War II, the United States and later the Soviet Union (and to a lesser extent the United Kingdom and France) developed and maintained a strategic force of bombers that would be able to attack any potential aggressor from bases inside their countries. Before the development of a capable strategic missile force in the Soviet Union, much of the war-fighting doctrine held by western nations revolved around the use of a large number of smaller nuclear weapons in a tactical role. It is arguable whether such use could be considered "limited", however, because it was believed that the US would use its own strategic weapons (mainly bombers at the time) should the USSR deploy any kind of nuclear weapon against civilian targets. A revolution in thinking occurred with the introduction of the intercontinental ballistic missile (ICBM), which the Soviet Union first successfully tested in the late 1950s. To deliver a warhead to a target, a missile was far less expensive than a bomber that could do the same job. Moreover, at the time it was impossible to intercept ICBMs due to their high altitude and speed. In the 1960s, another major shift in nuclear doctrine occurred with the development of the submarine-launched ballistic missile (SLBM). It was hailed by military theorists as a weapon that would ensure that a surprise attack could not destroy the capability to retaliate, and therefore would make nuclear war less likely. === Cold War === Since the end of WWII, no industrial nations have fought such a large, decisive war, due to the availability of weapons that are so destructive that their use would offset the advantages of victory.
A total war fought with nuclear weapons would, instead of taking years and the full mobilisation of a country's resources as in WWII, take tens of minutes. Such weapons are developed and maintained with relatively modest peacetime defence budgets. By the end of the 1950s, the ideological stand-off of the Cold War between the Western World and the Soviet Union involved thousands of nuclear weapons being aimed at each side by the other. Strategically, this situation of an equal balance of destructive power possessed by each side came to be known as Mutually Assured Destruction (MAD), the idea that a nuclear attack by one superpower would result in a nuclear counter-strike by the other. This would result in hundreds of millions of deaths in a world where, in words widely attributed to Nikita Khrushchev, "The living will envy the dead". During the Cold War, the superpowers sought to avoid open conflict between their respective forces, as both sides recognized that such a clash could very easily escalate, and quickly involve nuclear weapons. Instead, the superpowers fought each other through their involvement in proxy wars, military buildups, and diplomatic standoffs. In the case of proxy wars, each superpower supported its respective allies in conflicts with forces aligned with the other superpower, such as in the Korean War, the Vietnam War, and the Soviet invasion of Afghanistan. Non-nuclear nations still fought industrialized wars, such as the long Iran–Iraq War, which ended in a trench-warfare stalemate. === 21st century === The Royal United Services Institute stated that the Russo-Ukrainian War has proven that the age of industrial warfare is still here and that massive consumption of equipment, vehicles and ammunition requires a large industrial base for resupply. == Milestones == == See also == Mobilization Trench warfare Unconditional surrender World war Material aspects: Arms race Economic warfare Home front Mass production Total war War economy War effort Specific: Cold War Curtis LeMay Technology during World War I Technology during World War II Technological escalation during World War II Unrestricted Warfare (China) == References == == External links == Modern Tendencies in Strategy and Tactics as shown in Campaigns in the Far East (1906) by Lieutenant Colonel Yoda, Imperial Japanese Army.
Wikipedia/Industrial_warfare
Industrial waste is the waste produced by industrial activity, which includes any material that is rendered useless during a manufacturing process such as that of factories, mills, and mining operations. Types of industrial waste include dirt and gravel, masonry and concrete, scrap metal, oil, solvents, chemicals, scrap lumber, and even vegetable matter from restaurants. Industrial waste may be solid, semi-solid or liquid in form. It may be hazardous waste (some types of which are toxic) or non-hazardous waste. Industrial waste may pollute the nearby soil or adjacent water bodies, and can contaminate groundwater, lakes, streams, rivers or coastal waters. Industrial waste is often mixed into municipal waste, making accurate assessments difficult. An estimate for the US goes as high as 7.6 billion tons of industrial waste produced annually, as of 2017. Most countries have enacted legislation to deal with the problem of industrial waste, but strictness and compliance regimes vary. Enforcement is always an issue. == Classification of industrial waste and its treatment == Hazardous waste, chemical waste, industrial solid waste and municipal solid waste are classifications of wastes used by governments in different countries. Sewage treatment plants can treat some industrial wastes, i.e. those consisting of conventional pollutants such as biochemical oxygen demand (BOD). Industrial wastes containing toxic pollutants or high concentrations of other pollutants (such as ammonia) require specialized treatment systems. (See Industrial wastewater treatment). Industrial wastes can be classified on the basis of their characteristics: waste in solid form, where some pollutants within are in liquid or fluid form, e.g. the crockery industry or the washing of minerals or coal; and waste in dissolved form, where the pollutant is in liquid form, e.g. the dairy industry. == Environmental impact == Many factories and most power plants are located near bodies of water to obtain large amounts of water for manufacturing processes or for equipment cooling. In the US, electric power plants are the largest water users. Other industries using large amounts of water are pulp and paper mills, chemical plants, iron and steel mills, petroleum refineries, food processing plants and aluminum smelters. Many less-developed countries that are becoming industrialized do not yet have the resources or technology to dispose of their wastes with minimal impacts on the environment. Both untreated and partially treated wastewater are commonly fed back into a nearby body of water. Metals, chemicals and sewage released into bodies of water directly affect marine ecosystems and the health of those who depend on the waters as food or drinking water sources. Toxins from the wastewater can kill off marine life or cause varying degrees of illness to those who consume these marine animals, depending on the contaminant. Wastewater containing nutrients (nitrates and phosphates) often causes eutrophication, which can kill off existing life in water bodies. A Thailand study focusing on water pollution origins found that the highest concentrations of water contamination in the U-tapao river had a direct correlation to industrial wastewater discharges. Thermal pollution—discharges of water at elevated temperature after being used for cooling—is another form of water pollution.
Elevated water temperatures decrease oxygen levels, which can kill fish and alter food chain composition, reduce species biodiversity, and foster invasion by new thermophilic species. === Solid and hazardous waste === Solid waste, often called municipal solid waste, typically refers to material that is not hazardous. This category includes trash, rubbish and refuse, and may include materials such as construction debris and yard waste. Hazardous waste typically has specific definitions, due to the more careful and complex handling required of such wastes. Under US law, waste may be classified as hazardous based on certain characteristics: ignitability, reactivity, corrosivity and toxicity. Some types of hazardous waste are specifically listed in regulations. === Water pollution === One of the most devastating effects of industrial waste is water pollution. Many industrial processes use water that comes into contact with harmful chemicals. These chemicals may include organic compounds (such as solvents), metals, nutrients or radioactive material. If the wastewater is discharged without treatment, groundwater and surface water bodies—lakes, streams, rivers and coastal waters—can become polluted, with serious impacts on human health and the environment. Drinking water sources and irrigation water used for farming may be affected. The pollutants may degrade or destroy habitat for animals and plants. In coastal areas, fish and other aquatic life can be contaminated by untreated waste; beaches and other recreational areas can be damaged or closed. == Management == === Hungary === Hungary's first waste prevention program was its 2014-2020 national waste management plan. Its current program (2021-2027) is financed by European Union and international grants, domestic co-financing, product charges, and landfill taxes. === Thailand === In Thailand, the roles in municipal solid waste (MSW) management and industrial waste management are organized by the Royal Thai Government, which is divided into central (national) government, regional government, and local government. Each level of government is responsible for different tasks. The central government is responsible for stimulating regulation, policies, and standards. The regional governments are responsible for coordinating the central and local governments. The local governments are responsible for waste management in their governed area. However, the local governments do not dispose of the waste by themselves but instead hire private companies that have been granted the right by the Pollution Control Department (PCD) in Thailand. The main companies are Bangpoo Industrial Waste Management Center, General Environmental Conservation Public Company Limited (GENCO), SGS Thailand, Waste Management Siam LTD (WMS), and Better World Green Public Company Limited (BWG). These companies are responsible for the waste they receive from their customers before it is released to the environment or buried. === United States === The 1976 Resource Conservation and Recovery Act (RCRA) provides for federal regulation of industrial, household, and manufacturing solid and hazardous wastes in the United States. RCRA aims to conserve natural resources and energy, protect human health, eliminate or reduce waste, and clean up waste when needed.
RCRA first began as an amendment to the Solid Waste Disposal Act of 1965, and in 1984, Congress passed the Hazardous and Solid Waste Amendments (HSWA), which strengthened RCRA by: Restricting land disposal—land disposal means placing waste on or in land (e.g. injection wells, landfills, etc.), and the Land Disposal Restrictions (LDR) program (under HSWA) forbids the land disposal of untreated hazardous waste and requires the U.S. Environmental Protection Agency (EPA) to set specific treatment standards that must be met before hazardous waste can be subject to land disposal. The LDR program also has a dilution prohibition, which asserts that hazardous waste cannot be diluted down by the handler as a means to avoid satisfying the treatment standards. Waste minimization—the goal of waste minimization is to make sure that the amount of hazardous waste that is produced, and its toxicity levels, are reduced as much as possible, and the EPA does this through source reduction and recycling. Source reduction (or pollution prevention (P2)) trims the production of hazardous wastes right at the source, and is the EPA's first step in materials management, with recycling being second. Amplifying the EPA's authority regarding corrective action—corrective action requires treatment, storage, and disposal facilities (TSDFs) to investigate hazardous releases into ground and surface water, soil, and air, and to clean them up. Under the HSWA, the EPA can require corrective action at permitted and non-permitted TSDFs. Furthermore, the EPA uses Superfund to find sites of contamination and identify the parties responsible; where said parties are not known or are unable to do so, the program funds cleanups. Superfund also works on determining and applying final remedies for cleanups. The Superfund process is to: 1) collect necessary information (known as the Remedial Investigation (RI) phase); 2) assess alternatives to deal with any potential risks to environmental and human health (known as the Feasibility Study (FS) stage); 3) determine the most suitable remedies that could lower the risks to acceptable levels. Some sites are so contaminated because of past waste disposals that it takes decades to clean them up, or bring the contamination down to acceptable levels, thus requiring long-term management of those sites. Hence, sometimes determining a final remedy is not possible, and so the EPA has developed the Adaptive Management plan. The EPA has issued national regulations regarding the handling, treatment and disposal of wastes. EPA has authorized individual state environmental agencies to implement and enforce the RCRA regulations through approved waste management programs. State compliance is monitored by EPA inspections. If waste management guideline standards are not met, action is taken against the site. Compliance errors may be corrected by enforced cleanup directly by the site responsible for the waste or by a third party hired by that site. Prior to the enactment of the Clean Water Act (1972) and RCRA, open dumping and releasing wastewater into nearby bodies of water were common waste disposal methods. The negative effects on human health and environmental health led to the need for such regulations. The RCRA framework provides specified subsections defining nonhazardous and hazardous waste materials and how each should be properly managed and disposed of. Guidelines for the disposal of nonhazardous solid waste include the banning of open dumping.
Hazardous waste is monitored in a "cradle to grave" fashion; each step in the process of waste generation, transport and disposal is tracked. The EPA now manages 2.96 million tons of solid, hazardous and industrial waste. Since establishment, the RCRA program has undergone reforms as inefficiencies arise and as waste management processes evolve. The 1972 Clean Water Act is a broad legislative mandate to protect surface waters (rivers, lakes and coastal water bodies). A 1948 law had authorized research and development of voluntary water standards, and had provided limited financing for state and local government efforts. The 1972 law prohibited, for the first time, uncontrolled discharges of industrial waste, as well as municipal sewage, into waters of the United States. EPA was required to develop national standards for industrial facilities and standards for municipal sewage treatment plants. States were required to develop water quality standards for individual water bodies. Enforcement is mainly delegated to state agencies. Major amendments to the law were passed in 1977 and 1987. == See also == Chemical waste Environmental remediation Environmental racism Hazardous waste List of solid waste treatment technologies List of waste management companies List of waste management topics Recycling Soil pollution Tailings (mining waste) == References ==
Wikipedia/Industrial_waste
Digital transformation (DT) is the process of adoption and implementation of digital technology by an organization in order to create new or modify existing products, services and operations by means of translating business processes into a digital format. The goal for its implementation is to increase value through innovation, invention, improved customer experience and efficiency. Focusing on efficiency and costs, the Chartered Institute of Procurement & Supply (CIPS) defines "digitalisation" as the practice of redefining models, functions, operations, processes and activities by leveraging technological advancements to build an efficient digital business environment – one where gains (operational and financial) are maximised, and costs and risks are minimised. However, since there are no comprehensive data sets on digital transformation at the macro level, it is still (as of 2020) too early to comment on its overall effect. While there are approaches which see digital transformation as an opportunity to be seized quickly if the dangers of delay are to be avoided, a useful incremental approach to transformation called discovery-driven planning (DDP) has been proven to help solve digital challenges, especially for traditional firms. This approach focuses on step-by-step transformation instead of an all-or-nothing approach. A few benefits of DDP are risk mitigation, quick response to changing market conditions, and an increased success rate for digital transformations. == Benefits, barriers and enablers == === Benefits === Adopting digital technology can bring various benefits to a business. CIPS has also observed that digital capability can be used to support supply chain transparency and remote working. === Barriers === There are multiple common barriers that digital transformation initiatives, projects and strategies face. One of the main barriers is change management, because changes in processes may face active resistance from workers. Related to change management is miscommunication between workers, which can lead to implementation delays or even complete project failure. Some companies are unable to develop a realistic cost projection due to an overly optimistic view of the process. Companies may have legacy systems in place, which can lead to integration difficulties with new systems. Within organizations there may also be a lack of resources, top management support, workers' skills, commitment, collaboration and vision. Some company cultures can struggle with the changes required by digital transformation. === Enablers === In addition to the several barriers to digital transformation, there are also numerous enablers. The primary enablers are organizations' resources and capabilities, workers' skills, technologies and culture. The enabler "organizations' resources and capabilities" refers to the ability of an organization to adapt to contemporary issues arising in the business environment, as well as its capabilities in the field of data analytics. With regard to "workers' skills", workers must be able to develop valuable insights with the use of data, have significant emotional intelligence and effectively take part in the development of new products. Thirdly, technology is also a vital enabler of digital transformation. Companies can benefit from having access to artificial intelligence, data analytics software and effective usage of social media.
Lastly, "Culture" pertains to the extent the organizational culture is data-driven and the quality of the top management support and engagement within the corporation. == History == Digitization is the process of converting analog information into digital form using an analog-to-digital converter, such as in an image scanner or for digital audio recordings. As usage of the internet has increased since the 1990s, the usage of digitization has also increased. Digital transformation, however, is broader than just the digitization of existing processes. Digital transformation entails considering how products, processes and organizations can be changed through the use of new digital technologies. A 2019 review proposes a definition of digital transformation as "a process that aims to improve an entity by triggering significant changes to its properties through combinations of information, computing, communication, and connectivity technologies". Digital transformation can be seen as a socio-technical programme. A 2015 report stated that maturing digital companies were using cloud hosting, social media, mobile devices and data analytics, while other companies were using individual technologies for specific problems. By 2017, one study found that less than 40% of industries had become digitized (although usage was high in the media, retail and technology industries). As of 2020, 37% of European companies and 27% of American companies had not embraced digital technology. Over the period of 2017 to 2020, 70% of European municipalities have increased their spending on digital technologies. By 2019, the Chartered Institute of Procurement & Supply found in a survey of 700 managers, representing over 20 industries and 55 countries, that over 90% of the businesses represented had adopted at least one new form of information technology, and 90% stated that their digitalisation strategies aimed to secure decreased operational costs and increased efficiency. In a 2021 survey, 55% of European companies stated the COVID-19 pandemic has increased the demand for digital technology, and 46% of companies reported that they have grown more digital. Half of these companies anticipate an increase in the usage of digital technologies in the future, with a greater proportion being companies that have previously used digital technology. A lack of digital infrastructure was viewed as a key barrier to investment by 16% of EU businesses, compared to 5% in the US. In a survey conducted in 2021, 89% of African banks polled claimed that the pandemic had hastened the digital transformation of their internal operations. In 2022, 53% of businesses in the EU reported taking action or making investments in becoming more digital. 71% of companies in the US reported using at least one advanced digital technology, similar to the average usage of 69% across EU organizations. == TOP Framework == Digital transformation plays a crucial role in alleviating the adverse effects of simultaneous and interconnected challenges, while also strengthening the resilience and adaptability of both organizations and supply chains. Represented by the TOP framework, digital transformation acts as a catalyst for generating and leveraging benefits. These benefits hold the potential to bolster resilience across not only individual organizations but also throughout the entire supply chain. === Technology === It does so by leveraging cutting-edge technologies to enhance predictive power and responsiveness. 
Technologies include the Internet of Things, big data analytics, artificial intelligence, simulation, additive manufacturing, blockchain, and digital twins. === Organization === Digital transformation embedded within the organizational culture empowers managers to take decisive actions, enables seamless collaboration among workers across diverse departments, and fortifies supply chains. Additionally, it empowers top management to respond swiftly and proactively, effectively reducing and mitigating adverse impacts. === People === Effective communication within an organization is essential among departments and employees. Furthermore, possessing robust skills in communication, leadership, and strategy is imperative for overcoming any adverse effects of the challenges. == Role of resources and capabilities == According to the resource-based view theory, successful firms' resources should be valuable, rare, non-imitable, and non-substitutable in order for capabilities such as responsiveness, flexibility, or even agility to be developed. "A capability is a concept that refers to an organization's use of a set of resources to carry out its routine and strategic activities". The digital transformation capability (DTC) framework is a direct application of this theory, stating that resources can be either tangible, intangible or human. The tangible side of the DTC framework gathers physical assets like the organization's IT infrastructure, whereas the intangible side focuses on its digital transformation strategy, knowledge, and reputational capital. Human resources are broader and include technical skills, continuous training, leadership, and social skills. == Sustainability and digital transformation == Digital transformation is often perceived as a reactive measure to address customer demands, competition, and regulatory compliance. However, it can be a proactive opportunity for organizations to achieve sustainable business practices and facilitate a circular economy. By building sensing, smart, sustainable, and social capabilities, enterprises can capture valuable information, make faster and smarter decisions, and adapt to changing environments. Aligning digital transformation with sustainability can enhance performance by engaging stakeholders, optimizing resource allocation, and reducing risks. A comprehensive business case that prioritizes sustainability benefits and multidimensional returns can secure the necessary resources for successful implementation. To achieve sustainability goals, effective governance, integration, change management, and stakeholder involvement are critical factors. == See also == E-learning Electronic medical record Government Digital Service Online shopping Real estate Supply chain == References ==
Wikipedia/Digital_transformation
Improvements to the steam engine were some of the most important technologies of the Industrial Revolution, although steam did not replace water power in importance in Britain until after the Industrial Revolution. From Englishman Thomas Newcomen's atmospheric engine, of 1712, through major developments by Scottish inventor and mechanical engineer James Watt, the steam engine began to be used in many industrial settings, not just in mining, where the first engines had been used to pump water from deep workings. Early mills had run successfully with water power, but by using a steam engine a factory could be located anywhere, not just close to a water source. Water power varied with the seasons and was not always available. In 1776 Watt formed an engine-building and engineering partnership with manufacturer Matthew Boulton. The partnership of Boulton & Watt became one of the most important businesses of the Industrial Revolution and served as a kind of creative technical centre for much of the British economy. The partners solved technical problems and spread the solutions to other companies. Similar firms did the same thing in other industries and were especially important in the machine tool industry. These interactions between companies were important because they reduced the amount of research time and expense that each business had to spend working with its own resources. The technological advances of the Industrial Revolution happened more quickly because firms often shared information, which they then could use to create new techniques or products. The development of the stationary steam engine was a very important early element of the Industrial Revolution. However, it should be remembered that for most of the period of the Industrial Revolution, the majority of industries still relied on wind and water power as well as horse and man-power for driving small machines. == Thomas Savery's steam pump == The industrial use of steam power started with Thomas Savery in 1698. He constructed and patented in London the first engine, which he called the "Miner's Friend" since he intended it to pump water from mines. Early versions used a soldered copper boiler which burst easily at low steam pressures. Later versions with iron boiler were capable of raising water about 46 meters (150 feet). The Savery engine had no moving parts other than hand-operated valves. The steam once admitted into the cylinder was first condensed by an external cold water spray, thus creating a partial vacuum which drew water up through a pipe from a lower level; then valves were opened and closed and a fresh charge of steam applied directly on to the surface of the water now in the cylinder, forcing it up an outlet pipe discharging at higher level. The engine was used as a low-lift water pump in a few mines and numerous water works, but it was not a success since it was limited in pumping height and prone to boiler explosions. == Thomas Newcomen's steam engine == The first practical mechanical steam engine was introduced by Thomas Newcomen in 1712. Newcomen apparently conceived his machine independently of Savery, but as the latter had taken out a wide-ranging patent, Newcomen and his associates were obliged to come to an arrangement with him, marketing the engine until 1733 under a joint patent. Newcomen's engine appears to have been based on Papin's experiments carried out 30 years earlier, and employed a piston and cylinder, one end of which was open to the atmosphere above the piston. 
Steam just above atmospheric pressure (all that the boiler could stand) was introduced into the lower half of the cylinder beneath the piston during the gravity-induced upstroke; the steam was then condensed by a jet of cold water injected into the steam space to produce a partial vacuum; the pressure differential between the atmosphere and the vacuum on either side of the piston displaced it downwards into the cylinder, raising the opposite end of a rocking beam to which was attached a gang of gravity-actuated reciprocating force pumps housed in the mineshaft. The engine's downward power stroke raised the pump, priming it and preparing the pumping stroke. At first the phases were controlled by hand, but within ten years an escapement mechanism had been devised, worked by a vertical plug tree suspended from the rocking beam, which rendered the engine self-acting. A number of Newcomen engines were successfully put to use in Britain for draining hitherto unworkable deep mines, with the engine on the surface; these were large machines, requiring a lot of capital to build, and produced about 5 hp. They were extremely inefficient by modern standards, but when located at pit heads, where coal was cheap, they opened up a great expansion in coal mining by allowing mines to go deeper. Despite their disadvantages, Newcomen engines were reliable and easy to maintain and continued to be used in the coalfields until the early decades of the nineteenth century. By 1729, when Newcomen died, his engines had spread to France, Germany, Austria, Hungary and Sweden. A total of 110 are known to have been built by 1733 when the joint patent expired, of which 14 were abroad. In the 1770s, the engineer John Smeaton built some very large examples and introduced a number of improvements. A total of 1,454 engines had been built by 1800. == James Watt's steam engines == A fundamental change in working principles was brought about by James Watt. With the close collaboration of Matthew Boulton, by 1778 he had succeeded in perfecting his steam engine, which incorporated a series of radical improvements: notably, the use of a steam jacket around the cylinder to keep it at the temperature of the steam and, most importantly, a steam condenser chamber separate from the piston chamber. These improvements increased engine efficiency by a factor of about five, saving 75% on coal costs. The Newcomen engine could not, at the time, be easily adapted to drive a rotating wheel, although Wasborough and Pickard did succeed in doing so in about 1780. However, by 1783 the more economical Watt steam engine had been fully developed into a double-acting rotative type with a centrifugal governor, parallel motion and flywheel, which meant that it could be used to directly drive the rotary machinery of a factory or mill. Both of Watt's basic engine types were commercially very successful. By 1800, the firm Boulton & Watt had constructed 496 engines, with 164 driving reciprocating pumps, 24 serving blast furnaces, and 308 powering mill machinery; most of the engines generated from 5 to 10 hp. An estimate of the total power that could be produced by all these engines was about 11,200 hp. This was still only a fraction of the total power-generating capacity in Britain from waterwheels (120,000 hp) and windmills (15,000 hp); however, water and wind power were seasonally variable. Newcomen and other steam engines generated about 24,000 hp at the same time. 
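As a rough cross-check of the figures quoted above, the share of this listed capacity supplied by steam around 1800 can be worked out directly. The short Python sketch below is illustrative only: it uses just the horsepower totals given in this section and assumes they represent the whole inventory, ignoring the horse- and man-power that the text notes still drove many small machines.

# Horsepower figures quoted above for Britain around 1800
watt_total_hp = 11_200     # estimated output of the 496 Boulton & Watt engines
other_steam_hp = 24_000    # Newcomen-type and other steam engines
waterwheel_hp = 120_000
windmill_hp = 15_000

total_hp = watt_total_hp + other_steam_hp + waterwheel_hp + windmill_hp
steam_share = (watt_total_hp + other_steam_hp) / total_hp

print(f"Listed capacity: {total_hp:,} hp")
print(f"Steam's share:   {steam_share:.1%}")
# Gives a steam share of about 21%, consistent with the statement that steam
# was still only a fraction of total power-generating capacity.
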
== Development after Watt == The development of machine tools, such as the lathe, planing and shaping machines powered by these engines, enabled all the metal parts of the engines to be easily and accurately cut and in turn made it possible to build larger and more powerful engines. In the early 19th century, after the expiration of the Boulton & Watt patent in 1800, the steam engine underwent great increases in power due to the use of higher-pressure steam, which Watt had always avoided because of the danger of exploding boilers, which were in a very primitive state of development. Until about 1800, the most common pattern of steam engine was the beam engine, built as an integral part of a stone or brick engine-house, but soon various patterns of self-contained portative engines (readily removable, but not on wheels) were developed, such as the table engine. A further decrease in size, due to the use of higher pressure, came towards the end of the 18th century when Cornish engineer Richard Trevithick and American engineer Oliver Evans independently began to construct higher-pressure (about 40 pounds per square inch (2.7 atm)) engines that exhausted into the atmosphere, although Arthur Woolf, working at the Meux Brewery in London, was already experimenting with higher-pressure steam in his efforts to save coal. This allowed an engine and boiler to be combined into a single unit compact and light enough to be used on mobile road and rail locomotives and steam boats. Trevithick was a man of versatile talents, and his activities were not confined to small applications. He developed his large Cornish boiler with an internal flue from about 1812. These were also employed when upgrading a number of Watt pumping engines. By this time Woolf, who had been trained by Joseph Bramah in the art of quality control, had already produced high-pressure engines during his efforts at the Meux Brewery to improve efficiency and save coal; he went on to become chief engineer at Harveys of Hayle in Cornwall, by far the largest and leading manufacturer of steam engines in the world. The Cornish engine was developed in the 1810s for pumping mines in Cornwall. It was the result of using the exhaust of a high-pressure engine to power a condensing engine. The Cornish engine was notable for its relatively high efficiency. === The Corliss Engine === The last major improvement to the steam engine was the Corliss engine. Named after its inventor, George Henry Corliss, this stationary steam engine was introduced to the world in 1849. The engine boasted a number of desirable features, including fuel efficiency (lowering the cost of fuel by a third or more), low maintenance costs, a 30% higher rate of power production, high thermal efficiency, and the ability to operate under light, heavy, or varying loads while maintaining high velocity and constant speed. While the engine was loosely based on existing steam engines, keeping the simple piston-flywheel design, the majority of these features were brought about by the engine's unique valves and valve gears. Unlike most engines of the era, which mainly used slide-valve gears, the Corliss engine used a system of its inventor's own design in which a wrist plate controlled a number of different valves. Each cylinder was equipped with four valves, with exhaust and inlet valves at both ends of the cylinder. Through a precisely tuned series of events opening and closing these valves, steam was admitted and released at a precise rate, allowing for linear piston motion. 
This provided the engine's most notable feature, the automatic variable cut-off mechanism. This mechanism is what allowed the engine to maintain a set speed in response to varying loads without losing efficiency, stalling, or being damaged. A series of cam gears, which could adjust valve timing (essentially acting as a throttle), regulated the engine's speed and horsepower. This proved extremely useful for most of the engine's applications. In the textile industry, it allowed for production at much higher speeds while lowering the likelihood that threads would break. In metallurgy, the extreme and abrupt variations of load experienced in rolling mills were also countered by the technology. These examples demonstrate that the Corliss engine led to much higher rates of production, while preventing costly damage to machinery and materials. It was referred to as "the most perfect regulation of speed". Corliss kept a detailed record of the production, collective horsepower, and sales of his engines up until the patent expired. He did this for a number of reasons, including tracking those who infringed on the patent rights, recording maintenance and upgrade details, and especially gathering data used to argue for an extension of the patent. This data provides a clearer picture of the engine's influence. By 1869, nearly 1,200 engines had been sold, totaling 118,500 horsepower. Another estimated 60,000 horsepower was being utilized by engines created by manufacturers infringing on Corliss's patent, bringing the total horsepower to roughly 180,000. This relatively small number of engines produced 15% of the United States' total 1.2 million horsepower. The mean horsepower for all Corliss engines in 1870 was 100, while the mean for all steam engines (including Corliss engines) was 30. Some very large engines were built for applications requiring as much as 1,400 horsepower. Many were convinced of the Corliss engine's benefits, but adoption was slow due to patent protection. When Corliss was denied a patent extension in 1870, the design became a prevalent model for stationary engines in the industrial sector. By the end of the 19th century, the engine was already having a major influence on the manufacturing sector, where it made up only 10% of the sector's engines but produced 46% of the horsepower. The engine also became a model of efficiency outside of the textile industry, as it was used for pumping the waterways of Pawtucket, Rhode Island in 1878, and played an essential role in the expansion of the railroad by allowing for very large-scale operations in rolling mills. Many steam engines of the 19th century have been replaced, destroyed, or repurposed, but the longevity of the Corliss engine is apparent today in select distilleries, where some are still used as a power source. == Major Applications == === Blast furnace power === In the mid-1750s, the steam engine was applied to the water power-constrained iron, copper and lead industries for powering blast bellows. These industries were located near the mines, some of which were using steam engines for mine pumping. Steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. Steam-powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. (Lime-rich slag was not free-flowing at the previously used temperatures.) With a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. 
Coal and coke were cheaper and more abundant fuels. As a result, iron production rose significantly during the last decades of the 18th century. === Moving from water to steam power === Water power, previously the world's main source of industrial power, continued to be an essential power source even during the height of the steam engine's popularity. The steam engine, however, provided many benefits that could not be realized by relying solely on water power, allowing it to quickly become industrialised nations' dominant power source (rising from 5% to 80% of the total power in the US from 1838 to 1860). While many consider the increase in available power to be the dominant benefit (the average steam-powered mill produced four times the power of the average water-powered mill), others point to the potential for agglomeration. Steam engines made it possible to work, produce, market and specialize more easily, to expand westward without depending on the less abundant waterways there, and to live in communities that were not geographically tied to rivers and streams. Cities and towns were now built around factories where steam engines served as the foundation for the livelihood of many of the citizens. By promoting the agglomeration of people, steam power helped establish local markets that often met with impressive success; cities grew quickly and were eventually urbanized, the quality of living increased as infrastructure was put in place, finer goods could be produced as materials became easier and cheaper to acquire, direct local competition led to higher degrees of specialization, and labor and capital were in rich supply. In some counties where establishments utilized steam power, population growth rates even increased. These steam-powered towns encouraged growth locally and on the national scale, further validating the economic importance of the steam engine. === The steamboat === This period of economic growth, which was ushered in by the introduction and adoption of the steamboat, was one of the greatest ever experienced in the United States. Robert Fulton, Robert Livingston and Henry Shreve all played major parts in introducing the steamboat to the American public. Around 1815, steamboats began to replace barges and flatboats in the transport of goods around the United States. Prior to the steamboat, rivers were generally used only to transport goods from east to west, and from north to south, as fighting the current was very difficult and often impossible. Non-powered boats and rafts were assembled upstream, carried their cargo downstream, and were often disassembled at the end of their journey, with their remains used to construct homes and commercial buildings. Following the advent of the steamboat, the United States saw an incredible growth in the transportation of goods and people, which was key in westward expansion. Prior to the steamboat, it could take between three and four months to make the passage from New Orleans to Louisville, averaging twenty miles a day. With the steamboat, this time was reduced drastically, with trips ranging from twenty-five to thirty-five days. This was especially beneficial to farmers as their crops could now be transported elsewhere to be sold. The steamboat also allowed for increased specialization. Sugar and cotton were shipped up north while goods like poultry, grain, and pork were shipped south. Unfortunately, the steamboat also aided in the internal slave trade. 
With the steamboat came the need for an improved river system. The natural river system had features that either were not compatible with steamboat travel or were only navigable during certain months when rivers were higher. Some obstacles included rapids, sand bars, shallow waters and waterfalls. To overcome these natural obstacles, a network of canals, locks and dams was constructed. This increased demand for labor spurred tremendous job growth along the rivers. The economic benefits of the steamboat extended far beyond the construction of the ships themselves and the goods they transported. These ships led directly to growth in the coal and insurance industries, along with creating demand for repair facilities along the rivers. Additionally, the demand for goods in general increased as the steamboat made transport to new destinations both wide-reaching and efficient. ==== The steamboat and water transport ==== After the steamboat was invented and achieved a number of successful trials, it was quickly adopted and brought an even quicker change in water transport. In 1814, the city of New Orleans recorded 21 steamboat arrivals, but over the course of the following 20 years that number exploded to more than 1,200. The steamboat's role as a major transportation source was secured. The transport sector saw enormous growth following the steam engine's application, leading to major innovations in canals, steamboats, and railroads. The steamboat and canal system revolutionized trade in the United States. As the steamboats gained popularity, enthusiasm grew for the building of canals. In 1816, the US had only 100 miles of canals. This needed to change, however, as the potential increase in traded goods from east to west convinced many that canals were a necessary connection between the Mississippi–Ohio waterways and the Great Lakes. === Railroad === The use of steam engines on railroads proved extraordinary in that large amounts of goods and raw materials could now be delivered to cities and factories alike. Trains could deliver these to places far away at a fraction of the cost of traveling by wagon. Railroad tracks, which were already in use in mines and various other situations, became the new means of transportation after the first locomotive was invented. == See also == History portal Technology portal == References == General Thurston, Robert H. (1878). The Growth of the Steam-Engine. New York: D. Appleton and Company. Burstall, Aubrey F. (1965). A History of Mechanical Engineering. The MIT Press. ISBN 0-262-52001-X. Hills, Richard L. (1989). Power from Steam. Cambridge University Press. ISBN 0-521-45834-X.
Wikipedia/Steam_power_during_the_Industrial_Revolution
The Rainhill trials were a competition run from 6 to 14 October 1829, to test George Stephenson's argument that locomotives would have the best motive power for the then nearly-completed Liverpool and Manchester Railway (L&MR). Ten locomotives were entered, of which five were able to compete, running along a 1 mile (1.6 km) length of level track at Rainhill, in Lancashire (now Merseyside). Stephenson's Rocket was the only locomotive to complete the trials, and was declared the winner. The directors of the L&MR accepted that locomotives should operate services on their new line, and George and Robert Stephenson were given the contract to produce locomotives for the railway. == Background == The directors of the Liverpool and Manchester Railway had originally intended to use stationary steam engines to haul trains along the railway using cables. They had appointed George Stephenson as their engineer of the line in 1826, and he strongly advocated for the use of steam locomotives instead. As the railway was approaching completion, the directors decided to hold a competition to decide whether locomotives could be used to pull the trains; these became the Rainhill trials. A prize of £500 (equal to £55,577 today) was offered to the winner of the trials. Three engineers were selected as judges: John Urpeth Rastrick, a locomotive engineer of Stourbridge; Nicholas Wood, a mining engineer from Killingworth with considerable locomotive design experience; and John Kennedy, a Manchester cotton spinner and a major proponent of the railway. == Rules == The L&MR company set the rules for the trials. The rules went through several revisions; the final set, under which the competition was held, was: "The weight of the Locomotive Engine, with its full complement of water in the boiler, shall be ascertained at the Weighing Machine, by eight o'clock in the morning, and the load assigned to it shall be three times the weight thereof. The water in the boiler shall be cold, and there shall be no fuel in the fireplace. As much fuel shall be weighed, and as much water shall be measured and delivered into the Tender Carriage, as the owner of the Engine may consider sufficient for the supply of the Engine for a journey of thirty-five miles. The fire in the boiler shall then be lighted, and the quantity of fuel consumed for getting up the steam shall be determined, and the time noted." "The Tender Carriage, with the fuel and water, shall be considered to be, and taken as a part of the load assigned to the Engine." "Those engines which carry their own fuel and water, shall be allowed a proportionate deduction from their load, according to the weight of the Engine." "The Engine, with the carriages attached to it, shall be run by hand up to the Starting Post, and as soon as the steam is got up to fifty pounds per square inch (3.4 bar), the engine shall set out upon its journey." "The distance the Engine shall perform each trip shall be one mile and three quarters (2.8 km) each way, including one-eighth of a mile (200 m) at each end for getting up the speed and for stopping the train; by this means the Engine, with its load, will travel one and a-half mile (2.4 km) each way at full speed." "The Engines shall make ten trips, which will be equal to a journey of 35 miles (56 km); thirty miles (48 km) whereof shall be performed at full speed, and the average rate of travelling shall not be less than ten miles per hour (16 km/h)." 
"As soon as the Engine has performed this task (which will be equal to the travelling from Liverpool to Manchester), there shall be a fresh supply of fuel and water delivered to her; and, as soon as she can be got ready to set out again, she shall go up to the Starting Post, and make ten trips more, which will be equal to the journey from Manchester back again to Liverpool." "The time of performing every trip shall be accurately noted, as well as the time occupied in getting ready to set out on the second journey." "The gauge of the railway to be 4 ft 8+1⁄2 in (1,435 mm)." == Entries == Ten locomotives were officially entered for the trials, but on the day the competition began – 6 October 1829 – only five were available to run: Cycloped, a horse-powered locomotive built by Thomas Shaw Brandreth. Novelty, the world's first tank locomotive, built by John Ericsson and John Braithwaite. Perseverance, a vertical boilered locomotive, built by Timothy Burstall. Rocket, designed by George and Robert Stephenson; built by Robert Stephenson and Company. Sans Pareil, built by Timothy Hackworth. == Competition == The length of the L&MR that ran past Rainhill village was straight and level for over 1 mile (1.6 km), and was chosen as the site for the trials. The locomotives were to run at Kenrick's Cross, on the mile east from the Manchester side of Rainhill Bridge. Two or three locomotives ran each day, and several tests for each locomotive were performed over the course of six days. Between 10,000 and 15,000 people turned up to watch the trials, and bands provided musical entertainment. Cycloped was the first to drop out of the competition. It used a horse walking on a drive belt for power and was withdrawn after an accident caused the horse to burst through the floor of the engine. The next locomotive to retire was Perseverance, which was damaged in transit to the competition. Burstall spent the first five days of the trials repairing his locomotive, and though it ran on the sixth day, it failed to reach the required 10 miles per hour (16 km/h) speed and was withdrawn from the trial. It was granted a £25 consolation prize (equal to £2,779 today). Sans Pareil nearly completed the trials, though at first there was some doubt as to whether it would be allowed to compete as it was 300 pounds (140 kg) overweight. It completed eight trips before cracking a cylinder. Despite the failure it was purchased by the L&MR, where it ran for two years before being leased to the Bolton and Leigh Railway. The last locomotive to drop out was Novelty which used advanced technology for the time, and was lighter and considerably faster than the other locomotives in the competition. It was the crowd favourite and reached a then-astonishing 28 miles per hour (45 km/h) on the first day of competition. It later suffered damage to a boiler pipe which could not be fixed properly on site. Nevertheless, it ran the next day and reached 15 miles per hour (24 km/h) before the repaired pipe failed and damaged the engine severely enough that it had to be withdrawn. The Rocket was the only locomotive that completed the trials. It averaged 12 miles per hour (19 km/h) and achieved a top speed of 30 miles per hour (48 km/h)) hauling 13 tons, and was declared the winner of the £500 prize (equal to £55,577 today). The Stephensons were given the contract to produce locomotives for the L&MR. The Times carried a full report of the trials on 12 October 1829 from which the following extract are taken: THURSDAY – THIRD DAY: Mr. 
Stephenson's engine, "The Rocket," weighing 4 tons 3 cwt., performed, to-day, the work required by the original conditions. The following is a correct account of the performance: The engine, with its complement of water, weighed 4 tons 5 cwt., and the load attached to it was 12 tons 15 cwt., and, with a few persons who rode, made it about 13 tons. The Journey was 1.21 mile each way, with an additional length of 220 yards at each end to stop the engine in, making in one Journey 3[?] miles. The first experiment was of 35 miles, which is exactly ten journeys, and, including all the stoppages at the ends, was performed in 3 hours and 10 minutes, being upwards of 11 miles an hour. After this a fresh supply of water was taken in, which occupied 16 minutes, when the engine again started, and ran 35 miles in 2 hours and 52 minutes, which is upwards of 12 miles an hour, including all stoppages. The speed of the engine, with its load when in full motion, was from 14 to 17 miles an hour; and had the whole distance been in one continuing direction, there is no doubt but the result would have been 16 miles an hour. The consumption of coke was very moderate, not exceeding half a ton in the whole 70 miles. At several parts of the journey the engine moved at 18 miles an hour. SATURDAY – FIFTH DAY: In the expectation of witnessing the Novelty perform its appointed task, the attendance of company on the ground was more numerous today than it had been on several of the preceding days. Three times its own weight having been attached to the engine, the machine commenced its task, and performed it at the rate of 16 miles in the hour. Mr. Stephenson's engine, the Rocket, also exhibited today. Its tender was completely detached from it, and the engine alone shot along the road at the almost incredible rate of 32 miles in the hour. So astonishing was the celerity with which the engine, without its apparatus, darted past the spectators, that it could be compared to nothing but the rapidity with which the swallow darts through the air. Their astonishment was complete, every one exclaiming involuntarily, "The power of steam". == Additional trials == After the Rainhill trials, Rocket was tested on the Whiston Incline and was able to haul eight tons at 16 miles per hour (26 km/h) and 12 tons at 12+1⁄2 miles per hour (20.1 km/h) up the 1:96 gradient. == Re-enactments == === Rocket 150 === In May 1980, the Rocket 150 celebration was held to mark the 150th anniversary of the opening of the Liverpool and Manchester Railway and the trials the year before. A replica of Novelty was built for the event, which was also attended by replicas of Sans Pareil and Rocket (plus coach). On the first day of the Trials, the Rocket came off the rails as it was exiting the Bold Colliery sidings and buckled the rim of one of its large drive wheels. That evening, senior staff from a St Helens road transport company met a former colleague of the builder of the Rocket replica at a Liverpool hotel and agreed that, in the early hours of the following morning, they would urgently manufacture some steel parts (wedges) in their nearby workshops, to fix the bent drive wheel before the second day's parade commenced. At the same time, British Rail agreed to put a team of staff into the sidings at Bold to straighten the bent rails. Both activities were achieved on time and the Rocket ran successfully on the following two days of the Trials, though Sans Pareil was pushed by Lion and Novelty was on a wagon hauled by LMS Stanier Class 5 4-6-0 5000. 
As the line was then not electrified, the Advanced Passenger Train was also pushed, but by the latest diesel, Class 56 No. 077. The 'Grand Cavalcade' on each of the three days featured up to 40 steam and diesel locomotives and other examples of modern traction, including: Lion, at the time of Rocket 150 the oldest operable steam locomotive in existence (the British-built US locomotive John Bull, seven years older, was steamed again in 1981); LNER Class A3 4-6-2 No. 4472 Flying Scotsman; LMS 5XP Jubilee Class 4-6-0 No. 5690 Leander; LNER Class A4 4-6-2 No. 4498 Sir Nigel Gresley; LNER Class V2 2-6-2 No. 4771 Green Arrow; GWR 2251 Class Collett Goods 0-6-0 No. 3205; LMS Ivatt Class 4 2-6-0 No. 43106; BR Standard Class 9F 2-10-0 No. 92220 Evening Star, the last steam locomotive to be built by British Railways; LMS Princess Royal Class 4-6-2 No. 6201 Princess Elizabeth; and two Class 86 locomotives, 86214 Sans Pareil and 86235 Novelty, which were painted in a variation of the Large Logo Rail Blue livery in which the BR logo was replaced by the Rocket 150 motif on a yellow background. === 2002 Restaging === In a 2002 restaging of the Rainhill trials using replica engines, neither Sans Pareil (11 out of 20 runs) nor Novelty (10 out of 20 runs) completed the course. In calculating the speeds and fuel efficiencies, it was found that Rocket would still have won, as its relatively modern technology made it a much more reliable locomotive than the others. Novelty almost matched it in terms of efficiency, but its firebox design caused it to gradually slow to a halt due to a buildup of molten ash (called "clinker") cutting off the air supply. The restaged trials were run over the Llangollen Railway, Wales, and were the subject of a 2003 BBC Timewatch documentary. For the restaging, major compromises were made for television and because of the differences in crew experience, the fuel used, the modifications made to the replicas for modern safety rules, modern materials and construction methods, and subsequent operating experience. Comparisons were made between the engines only after calculations took the differences into account. == References == === Notes === === Footnotes === === Sources === Carlson, Robert Eugene (1969). The Liverpool and Manchester Railway Project 1821–1831. Newton Abbot: David and Charles. ISBN 0-7153-4646-6. OCLC 832435892. Dawson, Anthony (2018). The Rainhill Trials. Stroud: Amberley Publishing. ISBN 9781445669755. OCLC 1020621317. Cleveland-Stevens, Edward Carnegie (1915). English Railways: Their Development and Their Relation to the State. Routledge. OCLC 1044623771. OL 24183356M. Dendy Marshall, C. F. (1929). "The Rainhill Locomotive Trials of 1829". Transactions of the Newcomen Society. 9. Archived from the original on 19 March 2006. Ferneyhough, Frank (1980). Liverpool & Manchester Railway 1830–1980. England: Book Club Associates. OCLC 656128257. Wolmar, Christian (2008) [2007]. Fire and Steam: How the Railways Transformed Britain. London: Atlantic Books. ISBN 9781843546306. OCLC 1149031665. OL 32099184M. == External links == Rocket and its Rivals – details of 2003 Timewatch episode at Highbeam.com, archived in 2014
Wikipedia/Rainhill_trials
Transportation of goods to factories, and of finished products from them, was limited by high transport costs along roads to their destinations. This was not too severe in the case of light, valuable materials such as textiles (woolen and linen cloth), but in the case of dense materials such as coal it could be a limiting factor on the viability of an industry. In contrast, freighting goods by water, whether on rivers or coastwise, was much cheaper. Canals brought the first major change to transportation, and were usually built directly from the mines to city centres, such as the famous Bridgewater Canal in Manchester. Tramways worked by horses were also common. == River navigations == Some rivers, such as the Thames, Severn and Trent, were naturally navigable, at least in their lower reaches. Other rivers were improved during the 17th and early 18th centuries, improving the transport links of towns such as Manchester, Wigan, Hereford, and Newbury in England. However, these only provided links towards the coast, not across the heart of England. It was the canals which were to provide the vital links in the transport network, and this changed how people viewed the world. == Turnpike trusts == In England, the roads of each parish were maintained by compulsory labour from the parishioners, six days per year. This proved inadequate in the case of certain heavily used roads, and from the 18th century (and in a few cases slightly earlier), statutory bodies of trustees began to be set up with power to borrow money to repair and improve roads, the loans being repaid from tolls collected from road users. In the 1750s there was a boom in creating new turnpike trusts, with the result that by the end of the 18th century almost all main roads were turnpike roads. Each trust required an act of parliament, both on its initial creation and to renew it when the term granted by the Act expired. Motor cars did not appear until the end of the 19th century. == Railways == The early railways were wagonways serving collieries, and were all horse-drawn, though in many cases their slope meant that the horse was not required to draw the wagon downhill; instead, it was necessary to apply a brake to slow the descent. The wagon was emptied into a river barge (or keel or trow), and the horse drew the empty wagon back to the coal pit. Steam engine haulage was tried by Richard Trevithick on the Merthyr Tramroad from Penydarren to Abercynon in 1804, but proved unsatisfactory, partly because the engine was too heavy for the rails. It was only after the development of stronger rails made of rolled wrought iron in the 1820s that steam engine-hauled long-distance railways became feasible. Like the early wagonways, these were (indeed are) edge railways, where the wheels of the engine and wagons (or carriages) are flanged. Thus followed the Stockton and Darlington Railway, the Liverpool and Manchester Railway and many more, and people were able to travel by rail to factories and other workplaces far more easily than before. In 1829 George Stephenson and Robert Stephenson built a locomotive called Rocket. == See also == George Stephenson Robert Stephenson Stockton and Darlington Railway Isambard Kingdom Brunel Stephenson's Rocket Charlotte Dundas James Watt == References == Ashton, T. S. (1972). The Industrial Revolution 1760–1830. Oxford University Press.
Wikipedia/Transport_during_the_British_Industrial_Revolution
The Industrial Revolution in Wales was the adoption and development of new technologies in Wales in the 18th and 19th centuries as part of the Industrial Revolution, resulting in increases in the scale of industry in Wales. == North-East Wales == Flintshire in North-East Wales developed the largest variety of industry in Wales. By the end of the 18th century there were 19 working metalworks at Holywell and 14 pottery works in Buckley. There were cotton mills in Holywell and Mold and there was growth in the lead and coal industries. The Wrexham area in the 19th century was highly industrialised. At the peak there were 38 different collieries operating in the area, together producing over 2.5 million tonnes of coal annually for the numerous brickworks and steelworks in the area, including Brymbo Steel Works and Shotton Steel Works. In Bersham, near Wrexham, there were the Bersham Colliery and Bersham Ironworks. The use of coke rather than charcoal for smelting iron was pioneered there, and the site was a leading ironworks in Europe. Greenfield, also in Flintshire, is best known for its history of papermaking. A paper mill has been on this site since 1770. The site was chosen due to the constant water flow from the stream which comes from St Winefride's Well. The speed at which the site developed is one of the reasons that Greenfield is still linked with the start of the Industrial Revolution. By the mid-19th century, up to 80 businesses had set up in the mile stretch between Holywell and Greenfield. The remains of some can now be seen as conservation and industrial archaeological projects have been undertaken in recent years. Among the businesses were a copper mill, a flannel mill, a flour mill, shirt-makers and a soft drink works, W Hall & Son (which still exists today). Greenfield was also home to two Courtaulds rayon factories and a sulphuric acid plant from 1936 to 1985. == Dolgellau gold == Gold was found in the Dolgellau area in the 1850s and a mining rush developed. The first gold was discovered at Gwynfynydd in 1863, but it was not until 1887 that the mine was developed commercially. By this time the mine had been acquired by William Pritchard Morgan, who was to become known as the "Welsh gold king", and who paid for two police constables to protect the mine. By 1888, two hundred people were employed at the site, the gold being extracted by driving horizontal tunnels (adits) into the mountainside, with the miners working deep underground by candlelight. The machinery was powered by water wheels and water turbines. In contrast to other mines in the area where the gold was found in shallow deposits, the Gwynfynydd gold is extracted from large quartz veins deep underground. The Clogau Gold Mine was opened to exploit the copper and lead veins in the area north of Bontddu. In 1854, gold was discovered at the mine in a vein of quartz. The main gold-bearing vein was named the "St. David's lode", and in 1860 arrangements were made with the Crown Estate to work the gold commercially. Operations started on 28 August 1860. Clogau produced significant amounts of gold in the 1890s. In 1899, it produced £60,000 worth of gold (equivalent to £8,531,168 in 2023). In 1919, exploration of the mine found new gold veins. A new crushing plant was installed and the mine was re-opened. In 1989 the Clogau Gold Mine was re-opened by William Roberts, founder of Clogau Gold of Wales Ltd. Gold extraction re-commenced between 1992 and 1998, with small-scale mining providing the gold for Clogau Gold jewellery. 
Mining eventually ceased in 1998 due to high cost of mining and diminishing quantities of gold being found. == South Wales Valleys == In the early 19th century parts of Wales became heavily industrialised. Ironworks were set up in the South Wales Valleys, running south from the Brecon Beacons particularly around the new town of Merthyr Tydfil, with iron production later spreading westwards to the hinterlands of Neath and Swansea where anthracite coal was already being mined. From the 1840s coal mining spread to the Cynon and Rhondda valleys. This led to a rapid increase in the population of these areas. === Glamorgan === ==== Metal industry ==== From the mid-18th century onwards, Glamorgan's uplands underwent large-scale industrialisation and several coastal towns, in particular Swansea and later Cardiff, became significant ports. From the late 18th century until the early 20th century Glamorgan produced 70 per cent of the British output of copper. The industry was developed by English entrepreneurs and investors such as John Henry Vivian and largely based in the west of the county, where coal could be purchased cheaply and ores imported from Cornwall, Devon and later much further afield. The industry was of immense importance to Swansea in particular; in 1823 the smelting works on the River Tawe, and the collieries and shipping dependent on them, supported between 8,000 and 10,000 people. Imports of copper ores reached a peak in the 1880s, after which there was a steep fall until the virtual end of the trade in the 1920s. The cost of shipping ores from distant countries, and the growth of foreign competitors, ended Glamorgan's dominance of the industry. Some of the works converted to the production of zinc and the Tawe valley also became a location for the manufacture of nickel after Ludwig Mond established a works at Clydach in 1902. Even at its peak, copper smelting was never as significant as iron smelting, which was the major industrial employer of men and capital in south Wales before the rise of the sale-coal industry. Ironmaking developed in locations where ironstone, coal and limestone were found in close proximity – primarily the northern and south-western parts of the South Wales coalfield. In the second half of the 18th century four ironworks were built in Merthyr Tydfil. In 1759 the Dowlais Ironworks were established by a partnership of nine men. This was followed by the Plymouth Ironworks in 1763, which was formed by Isaac Wilkinson and John Guest, then in 1765 Anthony Bacon established the Cyfarthfa Ironworks. The fourth of the great ironworks, Penydarren Ironworks, was built in 1784. These works made Merthyr Tydfil the main centre of the industry in Wales. As well as copper and iron, Glamorgan became an important centre for the tinplate industry. Although not as famous as the Llanelli or Pontypool works, a concentrated number of works emerged around Swansea, Aberavon and Neath towards the late 19th century. Glamorgan became the most populous and industrialised county in Wales and was known as the 'crucible of the Industrial Revolution'. Other areas to house heavy industries include ironworks in Maesteg (1826), tinplate works in Llwydarth and Pontyclun and an iron ore mine in Llanharry. Alongside the metalworks, industries appeared throughout Glamorgan that made use of the works' output. Pontypridd was well known for the Brown Lenox Chainworks, which during the 19th century was the town's main industrial employer. 
===== Coal industry ===== The largest change to industrial Glamorgan was the opening up of the South Wales coalfield, the largest continuous coalfield in Britain, which occupied the greater part of Glamorgan, mostly north of the Vale. The coalfield provided a vast range in quality and type, but prior to 1750 the only real access to the seams was through bell pits or digging horizontally into a level where the seam was exposed at a river bank or mountainside. Although initially excavated for export, coal was soon also needed for the smelting process in Britain's expanding metallurgical industries. Developments in coal mining began in the north-eastern rim of Glamorgan around the ironworks of Merthyr and in the south-west around the copper plants of Swansea. In 1828 the South Wales coalfield was producing an estimated 3 million tons of coal, by 1840 that had risen to 4.5 million, with about 70 percent consumed by local commercial and domestic usage. The 1840s saw the start of a dramatic increase in the amount of coal excavated within Glamorgan. Several events took place to precipitate the growth in coal mining, including the discovery of steam coal in the Cynon Valley, the building of a large masonry dock at Cardiff and the construction of the Taff Vale Railway. In 1845, after trials by the British Admiralty, Welsh steam coal replaced coal from Newcastle-upon-Tyne as the preferred fuel for the ships of the Royal Navy. Glamorgan steam coal quickly became a sought-after commodity for navies all over the world and its production increased to meet the demand. The richest source for steam coal was the Rhondda Valleys, and by 1856 the Taff Vale Railway had reached the heads of both valleys. Over the next fifty years the Rhondda would grow to become the largest producer of coal of the age. In 1874, the Rhondda produced 2.13 million tons of coal, which rose to 5.8 million tons by 1884. The coal now produced in Glamorgan far exceeded the interior demand, and in the later half of the 19th century the area became a mass exporter for its product. In the 1890s the docks of South Wales accounted for 38 percent of British coal exports and a quarter of global trade. Along with the increase in coal production came a very large increase in the population, as people emigrated to the area to seek employment. In Aberdare the population grew from 6,471 in 1841 to 32,299 in 1851 while the Rhondda grew from 3,035 in 1861 to 55,632 in 1881, peaking in 1921 at 162,729. Much of this population growth was driven by immigration. In the ten years from 1881 to 1891, net migration to Glamorgan was over 76,000, 63 percent of which was from the non-border counties of England – a proportion that increased in the following decade. === Lower Swansea Valley === ==== Coal and metals ==== Over a period of about 150 years up until the 1920s, the open valley of the River Tawe became one of the most heavily industrialised areas of the developed world. There were a number of reasons that favoured the great expansion of industry in this particular location. The general exploitation of coal in the South Wales coalfield of the South Wales valleys had revealed seams of steam coal and anthracite close to the surface in the Upper Swansea valley and these were easily exploited by shallow drift mining or open cast mining. Smelting metals required more than three parts of coal to every one part of metal ore, so it was of major economic benefit to have easily available, high quality coal. Swansea also had a good port and safe anchorage. 
The combination of these two factors meant that it was financially more viable to bring the ore to Swansea's coal than take the coal to the ore. In addition, the very high tidal ranges at Swansea allowed deep draught ships to access the river mouth. This allowed large quantities of raw materials to be brought in (allowing further profit through economies of scale) and, more importantly, the finished products, such as sheet copper, tinplate, alum, porcelain and coal to be exported. The technologies involved in iron making had already been developed and refined, and skilled craftsmen were readily available to extend the newly developing industry. Swansea was already a town of significant size which could provide the required workforce. The growth of the industry in the Lower Swansea valley itself caused a great expansion in the population of Swansea and nearby Neath. A number of wealthy entrepreneurs, scientists and engineers of considerable ability were drawn to Swansea during this period, which in turn, promoted great innovation in the industrial processes. Initially the smelting works concentrated on copper. Coal was brought down to them by waggonways and tramways; copper ore was brought on ships which could sail right up to the works; and the resulting copper was exported out again the same way. Swansea became known as Copperopolis; and the lower Tawe valley became a mass of industry. In the wake of the copper and coal industry followed pottery-making (another industry which requires large amounts of coal, together with clay and flint, which could be shipped in from the West Country); the alum industry (based on pyrites found with coal); and the manufacture of fire-clay, which was used to line furnaces. ==== Copper ==== The first copper smelter directly associated was established at Landore in 1717 by John Lane and John Pollard. Pollard later went on to build the Llangyfelach copper works. In 1720 the Cambrian Works was set up near the mouth of the river and continued in production until 1745. (It reopened as a pottery in 1764.) In 1737, the White Rock copper works at Pentrechwyth was established. By 1780 there were three copper works on the east bank of the river: White Rock, Middle and Upper Bank. On the west bank there was also one at Forest. By 1800 nine copper smelters were in production in the valley. By 1860 the lower Swansea valley was smelting two thirds of the copper ores imported to Britain, and changes in the output and economy of the Swansea valley had a significant effect on global copper prices. == Post-War industry == The period following the Second World War saw a decline in several of the traditional industries, in particular the coal industry. The numbers employed in the South Wales coalfield, which at its peak around 1913 employed over 250,000 men, fell to around 75,000 in the mid-1960s and 30,000 in 1979. The coal mining industry in Britain was nationalised in 1947, meaning that Welsh collieries were controlled by the National Coal Board (NCB) and regulated by HM Inspectorate of Mines. This period also saw the Aberfan disaster in 1966, when a tip of coal slurry slid down to engulf a school with 144 dead, most of them children. By the early 1990s there was only one deep pit still working in Wales. Tower Colliery, Hirwaun remained open until it was last worked in 2008 after being a co-operative since 1994. 
There was a similar decline in the steel industry, and the Welsh economy, like that of other developed societies, became increasingly based on the expanding service sector. == Society == === Uprisings === The social effects of industrialisation led to bitter social conflict between the Welsh workers and predominantly English factory and mine owners. During the 1830s there were two armed uprisings, in Merthyr Tydfil in 1831, and the Chartist uprising in Newport in 1839, led by John Frost. The Rebecca riots, which took place between 1839 and 1844 in South Wales and Mid Wales were rural in origin. They were a protest not only against the high tolls which had to be paid on the local turnpike roads but against rural deprivation. === Treason of the Blue Books === Partly as a result of these disturbances, a government inquiry was carried out into the state of education in Wales. The inquiry was carried out by three English commissioners who spoke no Welsh and relied on information from witnesses, many of them Anglican clergymen. Their report, published in 1847 as Reports of the Commissioners of Inquiry into the State of Education in Wales concluded that the Welsh were ignorant, lazy and immoral, and that this was caused by the Welsh language and nonconformity. This resulted in a furious reaction in Wales, where the affair was commonly named the "Treason of the Blue Books". === Socialism === Socialism gained ground rapidly in the industrial areas of South Wales in the latter part of the century, accompanied by the increasing politicisation of religious Nonconformism. The first Labour MP, Keir Hardie, was elected as junior member for the Welsh constituency of Merthyr Tydfil and Aberdare in 1900. In common with many European nations, the first movements for national autonomy began in the 1880s and 1890s with the formation of Cymru Fydd, led by Liberal Party politicians such as T. E. Ellis and David Lloyd George. == See also == History of the Welsh economy == References ==
Wikipedia/Industrial_Revolution_in_Wales
In 2019, Wales generated renewable electricity equivalent to 27% of its electricity consumption, an increase from 19% in 2014. The Welsh Government set a target of 70% by 2030. In 2019, Wales was a net exporter of electricity. It produced 27.9 TWh of electricity while only consuming 14.7 TWh. The natural resource base for renewable energy is high by European standards, with the core sources being wind, wave, and tidal. Wales has a long history of renewable energy: in the 1880s, the first house in Wales with electric lighting powered from its own hydro-electric power station was at Plas Tan y Bwlch, Gwynedd. In 1963, the Ffestiniog Power Station was constructed, providing large-scale generation of hydroelectricity, and in November 1973, the Centre for Alternative Technology was opened in Machynlleth. == Government policy == In April 2019, a Climate Emergency was declared by the Welsh Government, and on 1 May the Senedd became the first parliament in the world to pass a climate emergency declaration. Current Welsh Government policy advocates increasing the share of Wales' energy sector accounted for by renewable energy, launching projects such as 'Prosperity for All: A Low Carbon Wales' to achieve this goal. 'The Climate Change Strategy for Wales' describes how the government will decrease greenhouse gas emissions. The reports suggest that energy generation from renewable sources is key to achieving a low carbon economy. The Welsh Government expects that all new energy projects should have an element of local ownership, and this was the case for 825 MW of installed renewable energy capacity in 2019. In 2016, the low carbon economy was estimated to consist of 9,000 businesses, employing 13,000 people and generating a £2.4 billion turnover. == By principal area == Percentage of electricity consumption which came from local renewables in 2019 (5 highest): Ceredigion: 110%; Denbighshire: 100%; Powys: 91%; Rhondda Cynon Taf: 66%; Neath Port Talbot: 65%. == Hydropower == === List of hydropower stations === In 2019, there were 363 hydroelectric projects in Wales, with a combined capacity of 182 MW, annually generating over 347 GWh. Since 2014 Natural Resources Wales (NRW) have enabled developers and small community groups to build 15 small-scale hydro schemes in Wales, with a combined capacity of around 1,300 kW. NRW also finished building the 17 kW Garwnant small-scale hydro scheme in 2017. Dinorwig Power Station, which lies on the boundary of the Snowdonia National Park, was fully commissioned in 1984. It has six generators placed inside Europe's largest man-made cavern, deep inside the Elidir mountain. Maximum electricity generation can be reached in less than 16 seconds, and it is the largest quick-response hydropower plant in Europe. The scheme supplies a maximum power of 1,728 megawatts (2,317,000 hp) and has a storage capacity of approximately 9.1 GWh (33 TJ). The Rheidol hydropower plant is the largest hydropower plant of its kind in Wales. It has generated renewable energy since 1962, using rainfall from the nearby mountains. The plant includes a combination of reservoirs, dams, pipelines, aqueducts and power stations spread over 162 square kilometres, producing around 85 GWh annually, which can power around 12,350 homes. The Ffestiniog power station opened in Gwynedd in 1963 and produces 360 MW. == Tidal power == Wales has a vast untapped potential for tidal power. 
Gerallt Llewelyn Jones of social enterprise Menter Môn said, “We have strong tidal resources around Wales and they have huge potential.” He added that tidal power is more predictable than wind and solar power. The tidal range round the west coast of Britain is one of the largest in the world. In January 2025, Huw Irranca-Davies announced the introduction of Strategic Resource Areas for tidal stream energy. These indicate areas in marine spatial planning where particular sectors may have priority. Four areas have been designated for tidal stream energy: north and west of Anglesey; around Ynys Enlli and the Llŷn peninsula; north-west of St Davids Head, Pembrokeshire; and off the coast of South Wales, south-west of Cardiff. === Swansea tidal lagoon === In 2015 the idea was raised that the UK government should fund a £1 billion development of a sea wall around Swansea Bay to harness the power of the tide. The power generated could provide energy for 120,000 homes for 120 years. Other sites mooted were Cardiff Bay, Newport and Colwyn Bay. In June 2018 the UK government abandoned its support for the scheme, saying other forms of energy generation were cheaper. The decision was condemned by the green energy industry, environmental groups, the Labour Party and Plaid Cymru. In January 2023, plans for a new Swansea tidal lagoon project called "Blue Eden" emerged, but this time the multi-billion pound project would be fully funded by the private sector. Phase SA1 of the project is said to include an electric battery manufacturing plant, a battery storage facility, a tidal lagoon in Swansea Bay with a floating solar farm, a data storage centre, a green hydrogen production facility, an oceanic and climate change research centre, and hundreds of waterfront homes. Claimed to be a worldwide first, the project could start within 18 months but would take more than a decade to complete. === North Wales Tidal Lagoon === The North Wales Tidal Lagoon project is a proposal to build lagoons with large sea walls and turbines powered by rising and falling tides; it could power 180,000 homes. The proposed North Wales Tidal Lagoon would involve a sea wall over 19 miles (31 km) long from Llandudno to Prestatyn. Supporters say the £7bn project could power more than a million homes and create more than 20,000 jobs. Denbighshire County Council unanimously voted to back the scheme in February 2023, claiming it would support 5,000 construction jobs. === Morlais tidal stream === The Morlais tidal stream project is set to cover 35 km² of the Irish Sea on the west coast of Anglesey (Ynys Môn). It is hoped that investors and developers will build early-scale tidal energy projects which could deliver a combined 120 MW of renewable clean energy. In 2022, £31 million was secured for the first phase of construction from the EU's European Regional Development Fund via the Welsh Government, which is likely to be the last large grant Wales receives from the EU. Jones Bros Civil Engineering has been given a £23.5m contract to build onshore infrastructure, and Magallanes Tidal Energy has secured a guaranteed price for the energy it produces. The Government has agreed a price of £178.54 per MWh via the Contracts for Difference (CfD) scheme. 
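The Contracts for Difference mechanism mentioned above can be made concrete with a brief sketch. Under a two-way CfD the generator is topped up to the strike price when the market reference price is lower, and pays back the difference when it is higher; only the £178.54 per MWh strike price below comes from the article, while the function name, reference price and output figures are hypothetical illustrations.

```python
def cfd_settlement(strike_gbp_per_mwh: float, reference_gbp_per_mwh: float, output_mwh: float) -> float:
    """Payment to the generator (negative means the generator pays back) under a two-way CfD."""
    return (strike_gbp_per_mwh - reference_gbp_per_mwh) * output_mwh

strike = 178.54          # GBP per MWh, the agreed strike price quoted above
reference_price = 95.00  # assumed wholesale reference price for the period, GBP per MWh
generation = 10_000      # assumed metered output for the period, MWh

print(f"CfD top-up to generator: £{cfd_settlement(strike, reference_price, generation):,.2f}")
# CfD top-up to generator: £835,400.00
```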
== Wind power == === Offshore wind === In 2021, three offshore wind farms off the north coast had a combined capacity of 726 MW: Rhyl Flats and North Hoyle have a combined capacity of 150 MW, and Gwynt y Môr, commissioned in 2015, had a capacity of 576 MW with 160 turbines in 2019, making it the fifth largest operating offshore windfarm in the world. Blue Gem Wind, a joint venture between TotalEnergies and Simply Blue Energy, is a Celtic Sea project developer that has secured rights to develop Wales' first floating offshore wind farm, located 45 km south of the Pembrokeshire Coast. It could start generating electricity by 2027. === Onshore wind === One of the main problems facing developers in Wales is peat bogs, a naturally occurring carbon sink. Ornithological issues, especially in ecologically rich sites, can also increase development costs. Some of the main terrestrial wind energy farms include: Brechfa Forest West (up to 28 turbines, 57.4 MW) - ‘Renewable Energy Project of the Year’ winner; Clocaenog Forest, Denbighshire (27 turbines, 96 MW); Llandinam Windfarm, Powys (103 turbines, 31 MW) - the oldest ScottishPower Renewables windfarm, operated as CeltPower Ltd and constructed in 1992; and Pen y Cymoedd (76 turbines, 228 MW) - Wales's largest onshore wind farm, where a battery storage scheme will also be built onsite. In 2019, onshore wind capacity was 1.25 GW, according to the Welsh Government, an increase of 12% from the previous year. Neath Port Talbot, with 230 MW of capacity in 2019, was the highest in Wales. Onshore wind is relatively strong in Wales, due to its mountainous and coastal nature. == Solar PV == Solar PV (for electricity) and thermal panels (for hot water) are used throughout the country, for both domestic and non-domestic use. From 2012, roof-mounted solar panels above 50 kW needed full planning permission; anything below that fell under 'permitted rights'. Nearly 20% of Wales' total solar power (989 MW) is generated in Pembrokeshire. In 2019, 26% of the total national solar PV capacity in Wales was locally owned. Welsh ministers have approved a 32 MW solar farm project near Abergavenny, and construction is expected to start in 2024. == Heat pumps == In 2019, the capacity from water, air and ground source heat pumps totalled 86 MW, from 7,817 projects. Most of these were domestic installations, and around 80% were air source heat pumps. == See also == National Global == References == == External links == The Welsh Government's commissioned surveys and assessments of renewable energy installations in Wales. Currently available: 2019 2018 2017 2016 2015 Blue Gem Wind
Wikipedia/Renewable_energy_in_Wales
The Hungarian Academy of Sciences (Hungarian: Magyar Tudományos Akadémia [ˈmɒɟɒr ˈtudomaːɲoʃ ˈɒkɒdeːmijɒ], MTA) is Hungary's foremost and most prestigious learned society. Its headquarters are located along the banks of the Danube in Budapest, between Széchenyi rakpart and Akadémia utca. The Academy's primary functions include the advancement of scientific knowledge, the dissemination of research findings, the support of research and development, and the representation of science in Hungary both domestically and around the world. == History == The origins of the Hungarian Academy of Sciences date back to 1825, when Count István Széchenyi offered one year's income from his estate to establish a Learned Society. He made this offer during a session of the Diet in Pressburg (Pozsony, now Bratislava), then the seat of the Hungarian Parliament. Inspired by his gesture, other delegates soon followed suit. The Society's mission was defined as the development of the Hungarian language and the promotion of sciences and the arts in the Hungarian language. It was officially named the Hungarian Academy of Sciences in 1845. The Academy's central building, designed in the Renaissance Revival style by the architect Friedrich August Stüler, was inaugurated in 1865. == Sections == Within the Academy, scientific sections are organized according to individual disciplines or closely related fields. Each section monitors, promotes, and evaluates scientific activities within its domain. It provides expert opinions on scientific matters, science policy, and research organization. Additionally, the sections assess the work of the Academy's research institutes, university departments, and other affiliated research units. They also play a key role in the process of awarding the Doctor of the Hungarian Academy of Sciences (D.Sc.) degree, Hungary's post-Ph.D. academic qualification. Today, the Academy is composed of eleven main scientific sections: Linguistics and Literary Scholarship Philosophy and Historical Sciences Mathematics Agricultural Sciences Medical Sciences Engineering Sciences Chemical Sciences Biological Sciences Economics and Law Earth Sciences Physical Sciences == Research institutes until 2019 == MTA Agricultural Research Centre MTA Chemical Research Center MTA Research Centre for Astronomy and Earth Sciences (involved with Konkoly Observatory) MTA Szeged Research Centre for Biology MTA Centre for Ecological Research MTA Research Centre for Economic and Regional Studies MTA Centre for Energy Research MTA Research Centre for the Humanities MTA Research Institute for Linguistics MTA Rényi Institute of Mathematics MTA Institute of Experimental Medicine MTA Research Centre for Natural Sciences MTA Institute of Nuclear Research MTA Wigner Research Centre for Physics MTA Centre for Social Sciences == Presidents of the Hungarian Academy of Sciences == === Széchenyi Academy of Literature and Arts === The Széchenyi Academy of Literature and Arts (Hungarian: Széchenyi Irodalmi és Művészeti Akadémia) was created in 1992 as an academy associated with, yet independent of, the MTA. Among its well-known members are the writers György Konrád, Magda Szabó and Péter Nádas, the pianist Zoltán Kocsis, and the film directors Miklós Jancsó and István Szabó. The last president was the film director Károly Makk, who succeeded László Dobszay (resigned on 20 April 2011). 
== See also == Open access in Hungary HUN-REN Wigner Research Centre for Physics == References == == External links == Official website Brief history of the Hungarian Academy of Sciences (in English) homepage of the Széchenyi Academy Picture of its central building – additional picture The palace of the Hungarian Academy of Sciences
Wikipedia/Hungarian_Academy_of_Sciences
Demography (from Ancient Greek δῆμος (dêmos) 'people, society' and -γραφία (-graphía) 'writing, drawing, description') is the statistical study of human populations: their size, composition (e.g., ethnic group, age), and how they change through the interplay of fertility (births), mortality (deaths), and migration. Demographic analysis examines and measures the dimensions and dynamics of populations; it can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. These methods have primarily been developed to study human populations, but are extended to a variety of areas where researchers want to know how populations of social actors can change across time through processes of birth, death, and migration. In the context of human biological populations, demographic analysis uses administrative records to develop an independent estimate of the population. Demographic analysis estimates are often considered a reliable standard for judging the accuracy of the census information gathered at any time. In the labor force, demographic analysis is used to estimate sizes and flows of populations of workers. In population ecology the focus is on the birth, death, migration and immigration of individuals in a population of living organisms; in the social sciences, the same approach can be applied to the movement of firms and institutional forms. Demographic analysis is used in a wide variety of contexts. For example, it is often used in business plans to describe the population connected to the geographic location of the business. Demographic analysis is usually abbreviated as DA. For the 2010 U.S. Census, the U.S. Census Bureau expanded its DA categories. Also as part of the 2010 U.S. Census, DA now includes comparative analysis between independent housing estimates and census address lists at different key time points. Patient demographics form the core of the data for any medical institution, such as patient and emergency contact information and patient medical record data. They allow for the identification of a patient and their classification into groups for the purpose of statistical analysis. Patient demographics include: date of birth, gender, date of death, postal code, ethnicity, blood type, emergency contact information, family doctor, insurance provider data, allergies, major diagnoses and major medical history. Formal demography limits its object of study to the measurement of population processes, while the broader field of social demography or population studies also analyses the relationships between economic, social, institutional, cultural, and biological processes influencing a population. == History == Demographic thought can be traced back to antiquity, and was present in many civilisations and cultures, like Ancient Greece, Ancient Rome, China and India. Made up of the prefix demo- and the suffix -graphy, the term demography refers to the overall study of population. In ancient Greece, this can be found in the writings of Herodotus, Thucydides, Hippocrates, Epicurus, Protagoras, Polus, Plato and Aristotle. In Rome, writers and philosophers like Cicero, Seneca, Pliny the Elder, Marcus Aurelius, Epictetus, Cato, and Columella also expressed important ideas on this subject. In the Middle Ages, Christian thinkers devoted much time to refuting the Classical ideas on demography. 
Important contributors to the field were William of Conches, Bartholomew of Lucca, William of Auvergne, William of Pagula, and Muslim sociologists like Ibn Khaldun. One of the earliest demographic studies in the modern period was Natural and Political Observations Made upon the Bills of Mortality (1662) by John Graunt, which contains a primitive form of life table. Among the study's findings was that one-third of the children in London died before their sixteenth birthday. Mathematicians, such as Edmond Halley, developed the life table as the basis for life insurance mathematics. Richard Price was credited with the first textbook on life contingencies, published in 1771, followed later by Augustus De Morgan's On the Application of Probabilities to Life Contingencies (1838). In 1755, Benjamin Franklin published his essay Observations Concerning the Increase of Mankind, Peopling of Countries, etc., projecting exponential growth in British colonies. His work influenced Thomas Robert Malthus, who, writing at the end of the 18th century, feared that, if unchecked, population growth would tend to outstrip growth in food production, leading to ever-increasing famine and poverty (see Malthusian catastrophe). Malthus is seen as the intellectual father of ideas of overpopulation and the limits to growth. Later, more sophisticated and realistic models were presented by Benjamin Gompertz and Verhulst. In 1855, the Belgian scholar Achille Guillard defined demography as the natural and social history of the human species, or the mathematical knowledge of populations, of their general changes, and of their physical, civil, intellectual, and moral condition. The period 1860–1910 can be characterized as a period of transition wherein demography emerged from statistics as a separate field of interest. This period included a panoply of international 'great demographers' like Adolphe Quetelet (1796–1874), William Farr (1807–1883), Louis-Adolphe Bertillon (1821–1883) and his son Jacques (1851–1922), Joseph Körösi (1844–1906), Anders Nicolas Kaier (1838–1919), Richard Böckh (1824–1907), Émile Durkheim (1858–1917), Wilhelm Lexis (1837–1914), and Luigi Bodio (1840–1920), who contributed to the development of demography and to the toolkit of methods and techniques of demographic analysis. == Methods == Demography is the statistical and mathematical study of the size, composition, and spatial distribution of human populations and how these features change over time. Data are obtained from a census of the population and from registries: records of events like births, deaths, migrations, marriages, divorces, diseases, and employment. To do this, one needs an understanding of how these measures are calculated and the questions they answer, which are covered by four concepts: population change, standardization of population numbers, the demographic bookkeeping equation, and population composition. There are two types of data collection—direct and indirect—with several methods of each type. === Direct methods === Direct data comes from vital statistics registries that track all births and deaths as well as certain changes in legal status such as marriage, divorce, and migration (registration of place of residence). In developed countries with good registration systems (such as the United States and much of Europe), registry statistics are the best method for estimating the number of births and deaths. A census is the other common direct method of collecting demographic data. 
A census is usually conducted by a national government and attempts to enumerate every person in a country. In contrast to vital statistics data, which are typically collected continuously and summarized on an annual basis, censuses typically occur only every 10 years or so, and thus are not usually the best source of data on births and deaths. Analyses are conducted after a census to estimate how much over or undercounting took place. These compare the sex ratios from the census data to those estimated from natural values and mortality data. Censuses do more than just count people. They typically collect information about families or households in addition to individual characteristics such as age, sex, marital status, literacy/education, employment status, and occupation, and geographical location. They may also collect data on migration (or place of birth or of previous residence), language, religion, nationality (or ethnicity or race), and citizenship. In countries in which the vital registration system may be incomplete, the censuses are also used as a direct source of information about fertility and mortality; for example, the censuses of the People's Republic of China gather information on births and deaths that occurred in the 18 months immediately preceding the census. === Indirect methods === Indirect methods of collecting data are required in countries and periods where full data are not available, such as is the case in much of the developing world, and most of historical demography. One of these techniques in contemporary demography is the sister method, where survey researchers ask women how many of their sisters have died or had children and at what age. With these surveys, researchers can then indirectly estimate birth or death rates for the entire population. Other indirect methods in contemporary demography include asking people about siblings, parents, and children. Other indirect methods are necessary in historical demography. There are a variety of demographic methods for modelling population processes. They include models of mortality (including the life table, Gompertz models, hazards models, Cox proportional hazards models, multiple decrement life tables, Brass relational logits), fertility (Hermes model, Coale-Trussell models, parity progression ratios), marriage (Singulate Mean at Marriage, Page model), disability (Sullivan's method, multistate life tables), population projections (Lee-Carter model, the Leslie Matrix), and population momentum (Keyfitz). The United Kingdom has a series of four national birth cohort studies, the first three spaced apart by 12 years: the 1946 National Survey of Health and Development, the 1958 National Child Development Study, the 1970 British Cohort Study, and the Millennium Cohort Study, begun much more recently in 2000. These have followed the lives of samples of people (typically beginning with around 17,000 in each study) for many years, and are still continuing. As the samples have been drawn in a nationally representative way, inferences can be drawn from these studies about the differences between four distinct generations of British people in terms of their health, education, attitudes, childbearing and employment patterns. Indirect standardization is used when a population is small enough that the number of events (births, deaths, etc.) are also small. In this case, methods must be used to produce a standardized mortality rate (SMR) or standardized incidence rate (SIR). 
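As a concrete illustration of the indirect standardization mentioned above, the short sketch below computes a standardized mortality ratio (SMR): expected deaths are obtained by applying a standard population's age-specific death rates to the study population's age structure, and the SMR is observed deaths divided by expected deaths (often quoted per 100). All figures and variable names are invented for illustration, not drawn from the article.

```python
# Age-specific death rates of a standard (reference) population, deaths per person-year (assumed).
standard_rates = {"0-14": 0.0005, "15-44": 0.0012, "45-64": 0.0080, "65+": 0.0500}

# Age structure of a small study population (person-years at risk) and its observed deaths (assumed).
study_population = {"0-14": 3_000, "15-44": 6_000, "45-64": 4_000, "65+": 2_000}
observed_deaths = 155

# Expected deaths: apply the standard rates to the study population's age structure.
expected_deaths = sum(standard_rates[age] * study_population[age] for age in study_population)

smr = observed_deaths / expected_deaths
print(f"Expected deaths: {expected_deaths:.1f}")    # Expected deaths: 140.7
print(f"SMR: {smr:.2f} ({100 * smr:.0f} per 100)")  # SMR: 1.10 (110 per 100)
```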
== Population change == Population change is analyzed by measuring the change from one population size to another. Global population continues to rise, which makes population change an essential component of demographics. This is calculated by subtracting the population size in an earlier census from the current population size. The best way of measuring population change is the intercensal percentage change. The intercensal percentage change is the absolute change in population between the censuses divided by the population size in the earlier census. This ratio is then multiplied by 100 to give a percentage. With this statistic, population growth in two or more nations that differ in size can be accurately measured and compared. == Standardization of population numbers == For a meaningful comparison, numbers must be adjusted for the size of the population under study. For example, the fertility rate is calculated as the ratio of the number of births to women of childbearing age to the total number of women in this age range. If these adjustments were not made, we would not know whether a nation with a higher rate of births or deaths simply has a population with more women of childbearing age, or genuinely has more births per eligible woman. Within the category of standardization, there are two major approaches: direct standardization and indirect standardization. == Common rates and ratios == The crude birth rate, the annual number of live births per 1,000 people. The general fertility rate, the annual number of live births per 1,000 women of childbearing age (often taken to be from 15 to 49 years old, but sometimes from 15 to 44). The age-specific fertility rates, the annual number of live births per 1,000 women in particular age groups (usually age 15–19, 20–24 etc.). The crude death rate, the annual number of deaths per 1,000 people. The infant mortality rate, the annual number of deaths of children less than 1 year old per 1,000 live births. The expectation of life (or life expectancy), the number of years that an individual at a given age could expect to live at present mortality levels. The total fertility rate, the number of live births per woman completing her reproductive life, if her childbearing at each age reflected current age-specific fertility rates. The replacement level fertility, the average number of children women must have in order to replace the population for the next generation. For example, the replacement level fertility in the US is 2.11. The gross reproduction rate, the number of daughters who would be born to a woman completing her reproductive life at current age-specific fertility rates. The net reproduction ratio, the expected number of daughters, per newborn prospective mother, who may or may not survive to and through the ages of childbearing. A stable population, one that has had constant crude birth and death rates for such a long period of time that the percentage of people in every age class remains constant, or equivalently, the population pyramid has an unchanging structure. A stationary population, one that is both stable and unchanging in size (the difference between crude birth rate and crude death rate is zero). Measures of centralisation are concerned with the extent to which an area's population is concentrated in its urban centres. A stable population does not necessarily remain fixed in size. It can be expanding or shrinking. 
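A short worked sketch of the measures defined above may help; the census counts and event totals are invented purely for illustration, and the calculations simply follow the definitions in this section (intercensal percentage change, crude birth and death rates per 1,000 people, and the general fertility rate per 1,000 women of childbearing age).

```python
# Invented figures for a small country at two successive censuses.
pop_earlier, pop_later = 4_800_000, 5_100_000
births, deaths = 61_200, 45_900   # events recorded in the later census year
women_15_49 = 1_230_000           # women of childbearing age in the later census year

# Intercensal percentage change: absolute change divided by the earlier count, times 100.
intercensal_change = (pop_later - pop_earlier) / pop_earlier * 100

# Crude rates are per 1,000 people; the general fertility rate is per 1,000 women aged 15-49.
crude_birth_rate = births / pop_later * 1_000
crude_death_rate = deaths / pop_later * 1_000
general_fertility_rate = births / women_15_49 * 1_000

print(f"Intercensal change:     {intercensal_change:.2f}%")         # 6.25%
print(f"Crude birth rate:       {crude_birth_rate:.1f} per 1,000")  # 12.0 per 1,000
print(f"Crude death rate:       {crude_death_rate:.1f} per 1,000")  # 9.0 per 1,000
print(f"General fertility rate: {general_fertility_rate:.1f} per 1,000 women aged 15-49")  # 49.8
```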
The crude death rate as defined above and applied to a whole population can give a misleading impression. For example, the number of deaths per 1,000 people can be higher in developed nations than in less-developed countries, despite standards of health being better in developed countries. This is because developed countries have proportionally more older people, who are more likely to die in a given year, so that the overall mortality rate can be higher even if the mortality rate at any given age is lower. A more complete picture of mortality is given by a life table, which summarizes mortality separately at each age. A life table is necessary to give a good estimate of life expectancy. == Basic equations for regional populations == Suppose that a country (or other entity) contains Population_t persons at time t. What is the size of the population at time t + 1? {\displaystyle {\text{Population}}_{t+1}={\text{Population}}_{t}+{\text{Natural Increase}}_{t}+{\text{Net Migration}}_{t}} Natural increase from time t to t + 1: {\displaystyle {\text{Natural Increase}}_{t}={\text{Births}}_{t}-{\text{Deaths}}_{t}} Net migration from time t to t + 1: {\displaystyle {\text{Net Migration}}_{t}={\text{Immigration}}_{t}-{\text{Emigration}}_{t}} These basic equations can also be applied to subpopulations. For example, the population size of ethnic groups or nationalities within a given society or country is subject to the same sources of change. When dealing with ethnic groups, however, "net migration" might have to be subdivided into physical migration and ethnic reidentification (assimilation). Individuals who change their ethnic self-labels or whose ethnic classification in government statistics changes over time may be thought of as migrating or moving from one population subcategory to another. More generally, while the basic demographic equation holds true by definition, in practice the recording and counting of events (births, deaths, immigration, emigration) and the enumeration of the total population size are subject to error. So allowance needs to be made for error in the underlying statistics when any accounting of population size or change is made. The figure in this section shows the latest (2004) UN (United Nations) WHO projections of world population out to the year 2150 (red = high, orange = medium, green = low). The UN "medium" projection shows world population reaching an approximate equilibrium at 9 billion by 2075. Working independently, demographers at the International Institute for Applied Systems Analysis in Austria expect world population to peak at 9 billion by 2070. Throughout the 21st century, the average age of the population is likely to continue to rise. == The doomsday equation for the Earth's population == A 1960 issue of Science magazine included an article by Heinz von Foerster and his colleagues, P. M. Mora and L. W. Amiot, proposing an equation representing the best fit to the historical data on the Earth's population available in 1958: Fifty years ago, Science published a study with the provocative title “Doomsday: Friday, 13 November, A.D. 2026”. It fitted world population during the previous two millennia with P = 179 × 10^9/(2026.9 − t)^0.99. This “quasi-hyperbolic” equation (hyperbolic having exponent 1.00 in the denominator) projected to infinite population in 2026—and to an imaginary one thereafter. 
—Taagepera, Rein. "A world population growth model: Interaction with Earth's carrying capacity and technology in limited space", Technological Forecasting and Social Change, vol. 82, February 2014, pp. 34–41. In 1975, von Hoerner suggested that von Foerster's doomsday equation can be written, without a significant loss of accuracy, in a simplified hyperbolic form (i.e. with the exponent in the denominator assumed to be 1.00): {\displaystyle {\text{Global population}}={\frac {179000000000}{2026.9-t}},} where 2026.9 corresponds to 13 November 2026 AD—the date of the so-called "demographic singularity" and the 115th anniversary of von Foerster's birth; t is the year in the Gregorian calendar. Despite its simplicity, von Foerster's equation is very accurate in the range from 4,000,000 BP to 1997 AD. For example, the doomsday equation (developed in 1958, when the Earth's population was 2,911,249,671) predicts a population of 5,986,622,074 for the beginning of the year 1997: {\displaystyle {\frac {179000000000}{2026.9-1997}}=5986622074.} The actual figure was 5,924,787,816. The doomsday equation is so called because it predicts that the number of people living on the planet Earth will grow without bound as 13 November 2026 approaches, and that immediately afterwards the formula turns negative. Said otherwise, the equation predicts that on 13 November 2026 all humans will instantaneously disappear. == Science of population == Populations can change through three processes: fertility, mortality, and migration. Fertility involves the number of children that women have and is to be contrasted with fecundity (a woman's childbearing potential). Mortality is the study of the causes, consequences, and measurement of processes affecting deaths among members of the population. Demographers most commonly study mortality using the life table, a statistical device that provides information about the mortality conditions (most notably the life expectancy) in the population. Migration refers to the movement of persons from a locality of origin to a destination place across some predefined political boundary. Migration researchers do not designate movements 'migrations' unless they are somewhat permanent. Thus, demographers do not consider tourists and travellers to be migrating. While demographers who study migration typically do so through census data on place of residence, indirect sources of data including tax forms and labour force surveys are also important. Demography is today widely taught in many universities across the world, attracting students with initial training in social sciences, statistics or health studies. Being at the crossroads of several disciplines such as sociology, economics, epidemiology, geography, anthropology and history, demography offers tools to approach a large range of population issues by combining a more technical quantitative approach that represents the core of the discipline with many other methods borrowed from social or other sciences. Demographic research is conducted in universities, in research institutes, as well as in statistical departments and in several international agencies. 
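The hyperbolic form of the doomsday equation discussed above is simple enough to check numerically. The sketch below reproduces the 1997 prediction quoted in the text and shows how the fitted value blows up as t approaches 2026.9; the function name is just an illustrative label, and everything else follows directly from the formula.

```python
def doomsday_population(year: float) -> float:
    """von Hoerner's simplified hyperbolic form of von Foerster's 1960 fit."""
    return 179e9 / (2026.9 - year)

predicted_1997 = doomsday_population(1997)
actual_1997 = 5_924_787_816  # actual figure quoted in the text

print(f"Predicted for 1997: {predicted_1997:,.0f}")  # 5,986,622,074
print(f"Relative error: {abs(predicted_1997 - actual_1997) / actual_1997:.2%}")  # about 1%

# The denominator shrinks towards zero as the year approaches 2026.9, so the fit diverges:
for year in (2000, 2020, 2026, 2026.8):
    print(year, f"{doomsday_population(year):,.0f}")
```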
Population institutions are part of the CICRED (International Committee for Coordination of Demographic Research) network, while most individual scientists engaged in demographic research are members of the International Union for the Scientific Study of Population, or a national association such as the Population Association of America in the United States, or affiliates of the Federation of Canadian Demographers in Canada. == Population composition == Population composition is the description of a population in terms of characteristics such as age, race, sex or marital status. These descriptions can be necessary for understanding social dynamics in historical and comparative research. Such data are often compared using a population pyramid. Population composition is also a very important part of historical research. Information reaching back hundreds of years is not always worthwhile, because the numbers of people for which data are available may not provide the information that is important (such as population size). Lack of information on the original data-collection procedures may prevent accurate evaluation of data quality. == Demographic analysis in institutions and organizations == === Labor market === The demographic analysis of labor markets can be used to show slow population growth, population ageing, and the increased importance of immigration. The U.S. Census Bureau projects that in the next 100 years, the United States will face some dramatic demographic changes. The population is expected to grow more slowly and age more rapidly than ever before, and the nation will become a nation of immigrants. This influx is projected to rise over the next century as new immigrants and their children come to account for over half the U.S. population. These demographic shifts could ignite major adjustments in the economy, more specifically in labor markets. === Turnover in internal labor markets === People decide to exit organizations for many reasons, such as better jobs, dissatisfaction, and concerns within the family. The causes of turnover can be split into two separate categories: one linked with the culture of the organization, and the other covering all other factors. People who do not fully accept a culture might leave voluntarily. Alternatively, some individuals might leave because they fail to fit in and fail to change within a particular organization. === Population ecology of organizations === A basic definition of population ecology is the study of the distribution and abundance of organisms. As applied to organizations and demography, organizations face various liabilities to their continued survival. Hospitals, like all other large and complex organizations, are affected by the environment in which they operate. For example, a study was done on the closure of acute care hospitals in Florida over a particular period. The study examined the effects of size, age, and niche density for these hospitals. A population ecology theory holds that organizational outcomes are mostly determined by environmental factors. Among several factors of the theory, there are four that apply to the hospital closure example: size, age, density of niches in which organizations operate, and density of niches in which organizations are established. 
==== Business organizations ==== Problems in which demographers may be called upon to assist business organizations are when determining the best prospective location in an area of a branch store or service outlet, predicting the demand for a new product, and to analyze certain dynamics of a company's workforce. Choosing a new location for a branch of a bank, choosing the area in which to start a new supermarket, consulting a bank loan officer that a particular location would be a beneficial site to start a car wash, and determining what shopping area would be best to buy and be redeveloped in metropolis area are types of problems in which demographers can be called upon. Standardization is a useful demographic technique used in the analysis of a business. It can be used as an interpretive and analytic tool for the comparison of different markets. ==== Nonprofit organizations ==== These organizations have interests about the number and characteristics of their clients so they can maximize the sale of their products, their outlook on their influence, or the ends of their power, services, and beneficial works. == See also == == References == == Further reading == Josef Ehmer, Jens Ehrhardt, Martin Kohli (Eds.): Fertility in the History of the 20th Century: Trends, Theories, Policies, Discourses. Historical Social Research 36 (2), 2011. Glad, John. 2008. Future Human Evolution: Eugenics in the Twenty-First Century. Hermitage Publishers, ISBN 1-55779-154-6 Gavrilova N.S., Gavrilov L.A. 2011. Ageing and Longevity: Mortality Laws and Mortality Forecasts for Ageing Populations [In Czech: Stárnutí a dlouhověkost: Zákony a prognózy úmrtnosti pro stárnoucí populace]. Demografie, 53(2): 109–128. Preston, Samuel, Patrick Heuveline, and Michel Guillot. 2000. Demography: Measuring and Modeling Population Processes. Blackwell Publishing. Gavrilov L.A., Gavrilova N.S. 2010. Demographic Consequences of Defeating Aging. Rejuvenation Research, 13(2-3): 329–334. Paul R. Ehrlich (1968), The Population Bomb Controversial Neo-Malthusianist pamphlet Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher, ISBN 3-7186-4983-7 Andrey Korotayev & Daria Khaltourina (2006). Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS ISBN 5-484-00414-4 [2] Uhlenberg P. (Editor), (2009) International Handbook of the Demography of Aging, New York: Springer-Verlag, pp. 113–131. Paul Demeny and Geoffrey McNicoll (Eds.). 2003. The Encyclopedia of Population. New York, Macmillan Reference USA, vol.1, 32-37 Phillip Longman (2004), The Empty Cradle: how falling birth rates threaten global prosperity and what to do about it Sven Kunisch, Stephan A. Boehm, Michael Boppel (eds) (2011). From Grey to Silver: Managing the Demographic Change Successfully, Springer-Verlag, Berlin Heidelberg, ISBN 978-3-642-15593-2 Joe McFalls (2007), Population: A Lively Introduction, Population Reference Bureau [3] Archived 1 June 2013 at the Wayback Machine Ben J. Wattenberg (2004), How the New Demography of Depopulation Will Shape Our Future. Chicago: R. Dee, ISBN 1-56663-606-X Perry, Marc J. & Mackun, Paul J. Population Change & Distribution: Census 2000 Brief. (2001) Preston, Samuel; Heuveline, Patrick; and Guillot Michel. 2000. Demography: Measuring and Modeling Population Processes. Blackwell Publishing. Schutt, Russell K. 2006. "Investigating the Social World: The Process and Practice of Research". SAGE Publications. 
Siegal, Jacob S. (2002), Applied Demography: Applications to Business, Government, Law, and Public Policy. San Diego: Academic Press. Wattenberg, Ben J. (2004), How the New Demography of Depopulation Will Shape Our Future. Chicago: R. Dee, ISBN 1-56663-606-X == External links == Quick demography data lookup (archived 4 March 2016) Historicalstatistics.org Links to historical demographic and economic statistics United Nations Population Division: Homepage World Population Prospects, the 2012 Revision, Population estimates and projections for 230 countries and areas (archived 6 May 2011) World Urbanization Prospects, the 2011 Revision, Estimates and projections of urban and rural populations and urban agglomerations Probabilistic Population Projections, the 2nd Revision, Probabilistic Population Projections, based on the 2010 Revision of the World Population Prospects (archived 13 December 2012) Java Simulation of Population Dynamics. Basic Guide to the World: Population changes and trends, 1960–2003 Brief review of world basic demographic trends Family and Fertility Surveys (FFS)
Wikipedia/Demographics
John Cockerill (3 August 1790 – 9 June 1840) was an English-born industrialist who became a prominent businessman in Belgium. Born at Haslingden, Lancashire, England, he was brought by his father (British entrepreneur William Cockerill) to the Liège region, where he continued the family tradition of building wool-processing machinery. He founded an ironworks named John Cockerill & Cie. (English: John Cockerill & Company). == Life and career == At the age of twelve, John Cockerill was brought to Verviers (subsequently part of Belgium) by his father William Cockerill, who was successful as a machine builder there. In 1807, aged 17, he and his brother Charles James Cockerill took over the management of a factory in Liege. Their father retired in 1813, leaving the management of his business to his sons. In September 1813, he married Jeanne Frédérique Pastor, the same day her sister Caroline married Charles James Cockerill. After the victory over Napoleon at the Battle of Waterloo in 1815, the Prussian Minister of Finance, Peter Beuth, invited the Cockerill brothers to set up a woollens factory in Berlin. In 1814, the brothers bought the former palace of the Prince Bishops of Liege at Seraing. The chateau became the plant headquarters and the ground behind it the factory site (founded 1817); it was to become a vertically integrated iron foundry and machine manufacturing factory. William I of the Netherlands was joint owner of the plant. A machine manufacturing plant was added in 1819, and in 1826 (begun 1823) a coke fired blast furnace. By 1840, the plant had sixteen steam engines producing total power 900 hp (670 kW) in continual work and employed 3000 persons. In 1823, his brother Charles James retired, having been bought out by John in 1822. After the Belgian Revolution of 1830, the new Kingdom of Belgium claimed the property of William I, and in 1835, John Cockerill made himself the sole owner of the works. He also was a founder of the Banque de Belgique, in 1835. During John Cockerill's lifetime, the factories produced not only spinning engines and steel, but steam engines (including air-blowers, traction engines, and engines for ships); in 1835, Belgium's first steam locomotive Le Belge was made. He also had interests in collieries and mines, as well as factories producing cloth, linen and paper. In 1838/9, military tensions between Belgium and the Netherlands caused a rush on the banks for hard currency; as a result of the crisis, John Cockerill's company became bankrupt. With debts of 26 million francs on assets of 15 million, he travelled to St. Petersburg to make arrangements with Nicholas I of Russia, with the hope of raising funds. On his return, he contracted typhoid and died in Warsaw on 19 June 1840, leaving no heirs. == Legacy == On his death, he had a reputation as a humanitarian employer and as the founder of the Belgian manufacturing industry. His body was returned to Seraing in 1867, and a memorial was unveiled there in 1871. His company became the Société pour l'Exploitation des Etablissements John Cockerill (1842) and later Societe Anonyme Cockerill-Ougree (1955). The steel-making activities of the firm continued through various mergers, eventually becoming part of Cockerill-Sambre in 1981; the Cockerill name was retained until a 1998 merger with Usinor. Some mechanical engineering activities continued as Cockerill Maintenance & Ingénierie, which was split off as a separate company in the late 20th century. 
A monument to him and the industrial workers of Belgium stands in the centre of the Place du Luxembourg/Luxemburgplein in Brussels. On 1 February 2024, this monument was vandalised during a farmers' protest that took place in front of the European Parliament. == Honours == Knight of the Order of Leopold. == References == === Sources === Robert Chambers; William Chambers (1840). "The Cockerills". Chambers's Edinburgh Journal. 8. W. Orr: 165–166. Similar biography also at either: Nursey, Perry Fairfax (1839). "The Cockerills of Liege". Iron: An Illustrated Weekly Journal for Iron and Steel Manufacturers, Metallurgists, Mine Proprietors, Engineers, Shipbuilders, Scientists, Capitalists... 31: 335–336. "The Cockerills of Liege". The Mechanics' Magazine, Museum, Register, Journal and Gazette. 31: 335–336. 6 April – 28 September 1839. Adriaan Linters (1986). Industria: architecture industrielle en Belgique (in French, Dutch, and English). Mauad Editora Ltda. ISBN 9782870092842. Albert Gieseler. "Société Anonyme John Cockerill". abert-gieseler.de (in German). John Ramsay M'Culloch (1866). A dictionary, geographical, statistical, and historical, of the various countries, places, and principal natural objects in the world. Liege, pp.158-159. John P. McKay (1970). "9. A Pioneering Inventor: The John Cockerill Company in Southern Russia 1185-1905". Pioneers for profit; foreign entrepreneurship and Russian industrialization, 1885-1913. University of Chicago Press. pp. 297–317. ISBN 9780226559926. George Ripley; Charles Anderson Dana (1869). The new American cyclopædia: a popular dictionary of general knowledge. Vol. 5. D. Appleton. COCKERILL John, p.420. == Further reading == Fremdling, Rainer (1981). "John Cockerill: Pionierunternehmer der Belgische-Niederländische Industrialisierung". Zeitschrift für Unternehmensgeschichte (in German). 26 (3): 179–193. doi:10.1515/zug-1981-0303. ISSN 2367-2293. S2CID 168721137. == External links == "Die Region Lüttich". industriemuseen-emr.de (in German). "John Cockerill (1790-1840)". erih.net. European Route of Industrial Heritage. (in English) Hidden Monuments: John Cockerill Monument in the European District. (in English) Hidden Monuments: John Cockerill Monument in Seraing.
Wikipedia/John_Cockerill_(industrialist)
Electrical telegraphy is point-to-point distance communicating via sending electric signals over wire, a system primarily used from the 1840s until the late 20th century. It was the first electrical telecommunications system and the most widely used of a number of early messaging systems called telegraphs, that were devised to send text messages more quickly than physically carrying them. Electrical telegraphy can be considered the first example of electrical engineering. Electrical telegraphy consisted of two or more geographically separated stations, called telegraph offices. The offices were connected by wires, usually supported overhead on utility poles. Many electrical telegraph systems were invented that operated in different ways, but the ones that became widespread fit into two broad categories. First are the needle telegraphs, in which electric current sent down the telegraph line produces electromagnetic force to move a needle-shaped pointer into position over a printed list. Early needle telegraph models used multiple needles, thus requiring multiple wires to be installed between stations. The first commercial needle telegraph system and the most widely used of its type was the Cooke and Wheatstone telegraph, invented in 1837. The second category are armature systems, in which the current activates a telegraph sounder that makes a click; communication on this type of system relies on sending clicks in coded rhythmic patterns. The archetype of this category was the Morse system and the code associated with it, both invented by Samuel Morse in 1838. In 1865, the Morse system became the standard for international communication, using a modified form of Morse's code that had been developed for German railways. Electrical telegraphs were used by the emerging railway companies to provide signals for train control systems, minimizing the chances of trains colliding with each other. This was built around the signalling block system in which signal boxes along the line communicate with neighbouring boxes by telegraphic sounding of single-stroke bells and three-position needle telegraph instruments. In the 1840s, the electrical telegraph superseded optical telegraph systems such as semaphores, becoming the standard way to send urgent messages. By the latter half of the century, most developed nations had commercial telegraph networks with local telegraph offices in most cities and towns, allowing the public to send messages (called telegrams) addressed to any person in the country, for a fee. Beginning in 1850, submarine telegraph cables allowed for the first rapid communication between people on different continents. The telegraph's nearly-instant transmission of messages across continents – and between continents – had widespread social and economic impacts. The electric telegraph led to Guglielmo Marconi's invention of wireless telegraphy, the first means of radiowave telecommunication, which he began in 1894. In the early 20th century, manual operation of telegraph machines was slowly replaced by teleprinter networks. Increasing use of the telephone pushed telegraphy into only a few specialist uses; its use by the general public dwindled to greetings for special occasions. The rise of the Internet and email in the 1990s largely made dedicated telegraphy networks obsolete. 
== History == === Precursors === Prior to the electric telegraph, visual systems were used, including beacons, smoke signals, flag semaphore, and optical telegraphs for visual signals to communicate over distances of land. An auditory predecessor was West African talking drums. In the 19th century, Yoruba drummers used talking drums to mimic human tonal language to communicate complex messages – usually regarding news of birth, ceremonies, and military conflict – over 4–5 mile distances. Possibly the earliest design and conceptualization for a telegraph system was by the British polymath Robert Hooke, who gave a vivid and comprehensive outline of visual telegraphy to the Royal Society in a 1684 submission in which he outlined many practical details. The system was largely motivated by military concerns, following the Battle of Vienna in 1683. The first official optical telegraph was invented in France in the 18th century by Claude Chappe and his brothers. The Chappe system would stretch nearly 5,000 km with 556 stations and was used until the 1850s. === Early work === From early studies of electricity, electrical phenomena were known to travel with great speed, and many experimenters worked on the application of electricity to communications at a distance. All the known effects of electricity – such as sparks, electrostatic attraction, chemical changes, electric shocks, and later electromagnetism – were applied to the problems of detecting controlled transmissions of electricity at various distances. In 1753, an anonymous writer in the Scots Magazine suggested an electrostatic telegraph. Using one wire for each letter of the alphabet, a message could be transmitted by connecting the wire terminals in turn to an electrostatic machine, and observing the deflection of pith balls at the far end. The writer has never been positively identified, but the letter was signed C.M. and posted from Renfrew leading to a Charles Marshall of Renfrew being suggested. Telegraphs employing electrostatic attraction were the basis of early experiments in electrical telegraphy in Europe, but were abandoned as being impractical and were never developed into a useful communication system. In 1774, Georges-Louis Le Sage realised an early electric telegraph. The telegraph had a separate wire for each of the 26 letters of the alphabet and its range was only between two rooms of his home. In 1800, Alessandro Volta invented the voltaic pile, providing a continuous current of electricity for experimentation. This became a source of a low-voltage current that could be used to produce more distinct effects, and which was far less limited than the momentary discharge of an electrostatic machine, which with Leyden jars were the only previously known human-made sources of electricity. Another very early experiment in electrical telegraphy was an "electrochemical telegraph" created by the German physician, anatomist and inventor Samuel Thomas von Sömmering in 1809, based on an earlier 1804 design by Spanish polymath and scientist Francisco Salva Campillo. Both their designs employed multiple wires (up to 35) to represent almost all Latin letters and numerals. Thus, messages could be conveyed electrically up to a few kilometers (in von Sömmering's design), with each of the telegraph receiver's wires immersed in a separate glass tube of acid. 
An electric current was sequentially applied by the sender through the various wires representing each letter of a message; at the recipient's end, the currents electrolysed the acid in the tubes in sequence, releasing streams of hydrogen bubbles next to each associated letter or numeral. The telegraph receiver's operator would watch the bubbles and could then record the transmitted message. This is in contrast to later telegraphs that used a single wire (with ground return). Hans Christian Ørsted discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle. In the same year Johann Schweigger invented the galvanometer, with a coil of wire around a compass, that could be used as a sensitive indicator for an electric current. Also that year, André-Marie Ampère suggested that telegraphy could be achieved by placing small magnets under the ends of a set of wires, one pair of wires for each letter of the alphabet. He was apparently unaware of Schweigger's invention at the time, which would have made his system much more sensitive. In 1825, Peter Barlow tried Ampère's idea but only got it to work over 200 feet (61 m) and declared it impractical. In 1830 William Ritchie improved on Ampère's design by placing the magnetic needles inside a coil of wire connected to each pair of conductors. He successfully demonstrated it, showing the feasibility of the electromagnetic telegraph, but only within a lecture hall. In 1825, William Sturgeon invented the electromagnet, with a single winding of uninsulated wire on a piece of varnished iron, which increased the magnetic force produced by electric current. Joseph Henry improved it in 1828 by placing several windings of insulated wire around the bar, creating a much more powerful electromagnet which could operate a telegraph through the high resistance of long telegraph wires. During his tenure at The Albany Academy from 1826 to 1832, Henry first demonstrated the theory of the 'magnetic telegraph' by ringing a bell through one-mile (1.6 km) of wire strung around the room in 1831. In 1835, Joseph Henry and Edward Davy independently invented the mercury dipping electrical relay, in which a magnetic needle is dipped into a pot of mercury when an electric current passes through the surrounding coil. In 1837, Davy invented the much more practical metallic make-and-break relay which became the relay of choice in telegraph systems and a key component for periodically renewing weak signals. Davy demonstrated his telegraph system in Regent's Park in 1837 and was granted a patent on 4 July 1838. Davy also invented a printing telegraph which used the electric current from the telegraph signal to mark a ribbon of calico infused with potassium iodide and calcium hypochlorite. === First working systems === The first working telegraph was built by the English inventor Francis Ronalds in 1816 and used static electricity. At the family home on Hammersmith Mall, he set up a complete subterranean system in a 175-yard (160 m) long trench as well as an eight-mile (13 km) long overhead telegraph. The lines were connected at both ends to revolving dials marked with the letters of the alphabet and electrical impulses sent along the wire were used to transmit messages. Offering his invention to the Admiralty in July 1816, it was rejected as "wholly unnecessary". 
His account of the scheme and the possibilities of rapid global communication in Descriptions of an Electrical Telegraph and of some other Electrical Apparatus was the first published work on electric telegraphy and even described the risk of signal retardation due to induction. Elements of Ronalds' design were utilised in the subsequent commercialisation of the telegraph over 20 years later. The Schilling telegraph, invented by Baron Schilling von Canstatt in 1832, was an early needle telegraph. It had a transmitting device that consisted of a keyboard with 16 black-and-white keys. These served for switching the electric current. The receiving instrument consisted of six galvanometers with magnetic needles, suspended from silk threads. The two stations of Schilling's telegraph were connected by eight wires; six were connected with the galvanometers, one served for the return current and one for a signal bell. When at the starting station the operator pressed a key, the corresponding pointer was deflected at the receiving station. Different positions of black and white flags on different disks gave combinations which corresponded to the letters or numbers. Pavel Schilling subsequently improved its apparatus by reducing the number of connecting wires from eight to two. On 21 October 1832, Schilling managed a short-distance transmission of signals between two telegraphs in different rooms of his apartment. In 1836, the British government attempted to buy the design but Schilling instead accepted overtures from Nicholas I of Russia. Schilling's telegraph was tested on a 5-kilometre-long (3.1 mi) experimental underground and underwater cable, laid around the building of the main Admiralty in Saint Petersburg and was approved for a telegraph between the imperial palace at Peterhof and the naval base at Kronstadt. However, the project was cancelled following Schilling's death in 1837. Schilling was also one of the first to put into practice the idea of the binary system of signal transmission. His work was taken over and developed by Moritz von Jacobi who invented telegraph equipment that was used by Tsar Alexander III to connect the Imperial palace at Tsarskoye Selo and Kronstadt Naval Base. In 1833, Carl Friedrich Gauss, together with the physics professor Wilhelm Weber in Göttingen, installed a 1,200-metre-long (3,900 ft) wire above the town's roofs. Gauss combined the Poggendorff-Schweigger multiplicator with his magnetometer to build a more sensitive device, the galvanometer. To change the direction of the electric current, he constructed a commutator of his own. As a result, he was able to make the distant needle move in the direction set by the commutator on the other end of the line. At first, Gauss and Weber used the telegraph to coordinate time, but soon they developed other signals and finally, their own alphabet. The alphabet was encoded in a binary code that was transmitted by positive or negative voltage pulses which were generated by means of moving an induction coil up and down over a permanent magnet and connecting the coil with the transmission wires by means of the commutator. The page of Gauss's laboratory notebook containing both his code and the first message transmitted, as well as a replica of the telegraph made in the 1850s under the instructions of Weber are kept in the faculty of physics at the University of Göttingen, in Germany. Gauss was convinced that this communication would be of help to his kingdom's towns. 
Later in the same year, instead of a voltaic pile, Gauss used an induction pulse, enabling him to transmit seven letters a minute instead of two. The inventors and university did not have the funds to develop the telegraph on their own, but they received funding from Alexander von Humboldt. Carl August Steinheil in Munich was able to build a telegraph network within the city in 1835–1836. In 1838, Steinheil installed a telegraph along the Nuremberg–Fürth railway line, built in 1835 as the first German railroad, which was the first earth-return telegraph put into service. By 1837, William Fothergill Cooke and Charles Wheatstone had co-developed a telegraph system which used a number of needles on a board that could be moved to point to letters of the alphabet. Any number of needles could be used, depending on the number of characters it was required to code. In May 1837 they patented their system. The patent recommended five needles, which coded twenty of the alphabet's 26 letters. Samuel Morse independently developed and patented a recording electric telegraph in 1837. Morse's assistant Alfred Vail developed an instrument that was called the register for recording the received messages. It embossed dots and dashes on a moving paper tape by a stylus which was operated by an electromagnet. Morse and Vail developed the Morse code signalling alphabet. On 24 May 1844, Morse sent to Vail the historic first message “WHAT HATH GOD WROUGHT" from the Capitol in Washington to the old Mt. Clare Depot in Baltimore. == Commercial telegraphy == === Cooke and Wheatstone system === The first commercial electrical telegraph was the Cooke and Wheatstone system. A demonstration four-needle system was installed on the Euston to Camden Town section of Robert Stephenson's London and Birmingham Railway in 1837 for signalling rope-hauling of locomotives. It was rejected in favour of pneumatic whistles. Cooke and Wheatstone had their first commercial success with a system installed on the Great Western Railway over the 13 miles (21 km) from Paddington station to West Drayton in 1838. This was a five-needle, six-wire system, and had the major advantage of displaying the letter being sent so operators did not need to learn a code. The insulation failed on the underground cables between Paddington and West Drayton, and when the line was extended to Slough in 1843, the system was converted to a one-needle, two-wire configuration with uninsulated wires on poles. The cost of installing wires was ultimately more economically significant than the cost of training operators. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were in use at the end of the nineteenth century; some remained in service in the 1930s. The Electric Telegraph Company, the world's first public telegraphy company, was formed in 1845 by financier John Lewis Ricardo and Cooke. === Wheatstone ABC telegraph === Wheatstone developed a practical alphabetical system in 1840 called the A.B.C. System, used mostly on private wires. This consisted of a "communicator" at the sending end and an "indicator" at the receiving end. The communicator consisted of a circular dial with a pointer and the 26 letters of the alphabet (and four punctuation marks) around its circumference. Against each letter was a key that could be pressed. A transmission would begin with the pointers on the dials at both ends set to the start position. The transmitting operator would then press down the key corresponding to the letter to be transmitted. 
In the base of the communicator was a magneto actuated by a handle on the front. This would be turned to apply an alternating voltage to the line. Each half cycle of the current would advance the pointers at both ends by one position. When the pointer reached the position of the depressed key, it would stop and the magneto would be disconnected from the line. The communicator's pointer was geared to the magneto mechanism. The indicator's pointer was moved by a polarised electromagnet whose armature was coupled to it through an escapement. Thus the alternating line voltage moved the indicator's pointer on to the position of the depressed key on the communicator. Pressing another key would then release the pointer and the previous key, and re-connect the magneto to the line. These machines were very robust and simple to operate, and they stayed in use in Britain until well into the 20th century. === Morse system === The Morse system uses a single wire between offices. At the sending station, an operator taps on a switch called a telegraph key, spelling out text messages in Morse code. Originally, the armature was intended to make marks on paper tape, but operators learned to interpret the clicks and it was more efficient to write down the message directly. In 1851, a conference in Vienna of countries in the German-Austrian Telegraph Union (which included many central European countries) adopted the Morse telegraph as the system for international communications. The international Morse code adopted was considerably modified from the original American Morse code, and was based on a code used on Hamburg railways (Gerke, 1848). A common code was a necessary step to allow direct telegraph connection between countries. With different codes, additional operators were required to translate and retransmit the message. In 1865, a conference in Paris adopted Gerke's code as the International Morse code and was henceforth the international standard. The US, however, continued to use American Morse code internally for some time, hence international messages required retransmission in both directions. In the United States, the Morse/Vail telegraph was quickly deployed in the two decades following the first demonstration in 1844. The overland telegraph connected the west coast of the continent to the east coast by 24 October 1861, bringing an end to the Pony Express. === Foy–Breguet system === France was slow to adopt the electrical telegraph, because of the extensive optical telegraph system built during the Napoleonic era. There was also serious concern that an electrical telegraph could be quickly put out of action by enemy saboteurs, something that was much more difficult to do with optical telegraphs which had no exposed hardware between stations. The Foy-Breguet telegraph was eventually adopted. This was a two-needle system using two signal wires but displayed in a uniquely different way to other needle telegraphs. The needles made symbols similar to the Chappe optical system symbols, making it more familiar to the telegraph operators. The optical system was decommissioned starting in 1846, but not completely until 1855. In that year the Foy-Breguet system was replaced with the Morse system. === Expansion === As well as the rapid expansion of the use of the telegraphs along the railways, they soon spread into the field of mass communication with the instruments being installed in post offices. The era of mass personal communication had begun. 
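As a concrete illustration of the Morse system described above, the following minimal sketch translates text into International Morse code using the standard letter table; word spacing and timing conventions are simplified, and note that the 1844 "WHAT HATH GOD WROUGHT" message itself was sent in the earlier American Morse code rather than the international standard shown here.

```python
# International Morse code for letters A-Z (digits and punctuation omitted for brevity).
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def to_morse(text: str) -> str:
    """Encode text: letters separated by spaces, words separated by ' / '."""
    words = text.upper().split()
    return " / ".join(" ".join(MORSE[c] for c in word if c in MORSE) for word in words)

if __name__ == "__main__":
    print(to_morse("WHAT HATH GOD WROUGHT"))
```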
Telegraph networks were expensive to build, but financing was readily available, especially from London bankers. By 1852, National systems were in operation in major countries: The New York and Mississippi Valley Printing Telegraph Company, for example, was created in 1852 in Rochester, New York and eventually became the Western Union Telegraph Company. Although many countries had telegraph networks, there was no worldwide interconnection. Message by post was still the primary means of communication to countries outside Europe. Telegraphy was introduced in Central Asia during the 1870s. === Telegraphic improvements === A continuing goal in telegraphy was to reduce the cost per message by reducing hand-work, or increasing the sending rate. There were many experiments with moving pointers, and various electrical encodings. However, most systems were too complicated and unreliable. A successful expedient to reduce the cost per message was the development of telegraphese. The first system that did not require skilled technicians to operate was Charles Wheatstone's ABC system in 1840 in which the letters of the alphabet were arranged around a clock-face, and the signal caused a needle to indicate the letter. This early system required the receiver to be present in real time to record the message and it reached speeds of up to 15 words a minute. In 1846, Alexander Bain patented a chemical telegraph in Edinburgh. The signal current moved an iron pen across a moving paper tape soaked in a mixture of ammonium nitrate and potassium ferrocyanide, decomposing the chemical and producing readable blue marks in Morse code. The speed of the printing telegraph was 16 and a half words per minute, but messages still required translation into English by live copyists. Chemical telegraphy came to an end in the US in 1851, when the Morse group defeated the Bain patent in the US District Court. For a brief period, starting with the New York–Boston line in 1848, some telegraph networks began to employ sound operators, who were trained to understand Morse code aurally. Gradually, the use of sound operators eliminated the need for telegraph receivers to include register and tape. Instead, the receiving instrument was developed into a "sounder", an electromagnet that was energized by a current and attracted a small iron lever. When the sounding key was opened or closed, the sounder lever struck an anvil. The Morse operator distinguished a dot and a dash by the short or long interval between the two clicks. The message was then written out in long-hand. Royal Earl House developed and patented a letter-printing telegraph system in 1846 which employed an alphabetic keyboard for the transmitter and automatically printed the letters on paper at the receiver, and followed this up with a steam-powered version in 1852. Advocates of printing telegraphy said it would eliminate Morse operators' errors. The House machine was used on four main American telegraph lines by 1852. The speed of the House machine was announced as 2600 words an hour. David Edward Hughes invented the printing telegraph in 1855; it used a keyboard of 26 keys for the alphabet and a spinning type wheel that determined the letter being transmitted by the length of time that had elapsed since the previous transmission. The system allowed for automatic recording on the receiving end. The system was very stable and accurate and became accepted around the world. The next improvement was the Baudot code of 1874. 
French engineer Émile Baudot patented a printing telegraph in which the signals were translated automatically into typographic characters. Each character was assigned a five-bit code, mechanically interpreted from the state of five on/off switches. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute. By this point, reception had been automated, but the speed and accuracy of the transmission were still limited to the skill of the human operator. The first practical automated system was patented by Charles Wheatstone. The message (in Morse code) was typed onto a piece of perforated tape using a keyboard-like device called the 'Stick Punch'. The transmitter automatically ran the tape through and transmitted the message at the then exceptionally high speed of 70 words per minute. ==== Teleprinters ==== An early successful teleprinter was invented by Frederick G. Creed. In Glasgow he created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals onto paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by the Daily Mail for daily transmission of the newspaper contents. With the invention of the teletypewriter, telegraphic encoding became fully automated. Early teletypewriters used the ITA-1 Baudot code, a five-bit code. This yielded only thirty-two codes, so it was over-defined into two "shifts", "letters" and "figures". An explicit, unshared shift code prefaced each set of letters and figures. In 1901, Baudot's code was modified by Donald Murray. In the 1930s, teleprinters were produced by Teletype in the US, Creed in Britain and Siemens in Germany. By 1935, message routing was the last great barrier to full automation. Large telegraphy providers began to develop systems that used telephone-like rotary dialling to connect teletypewriters. These resulting systems were called "Telex" (TELegraph EXchange). Telex machines first performed rotary-telephone-style pulse dialling for circuit switching, and then sent data by ITA2. This "type A" Telex routing functionally automated message routing. The first wide-coverage Telex network was implemented in Germany during the 1930s as a network used to communicate within the government. At the rate of 45.45 (±0.5%) baud – considered speedy at the time – up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication. Automatic teleprinter exchange service was introduced into Canada by CPR Telegraphs and CN Telegraph in July 1957 and in 1958, Western Union started to build a Telex network in the United States. ==== The harmonic telegraph ==== The most expensive aspect of a telegraph system was the installation – the laying of the wire, which was often very long. The costs would be better covered by finding a way to send more than one message at a time through the single wire, thus increasing revenue per wire. Early devices included the duplex and the quadruplex which allowed, respectively, one or two telegraph transmissions in each direction. However, an even greater number of channels was desired on the busiest lines. 
In the latter half of the 1800s, several inventors worked towards creating a method for doing just that, including Charles Bourseul, Thomas Edison, Elisha Gray, and Alexander Graham Bell. One approach was to have resonators of several different frequencies act as carriers of a modulated on-off signal. This was the harmonic telegraph, a form of frequency-division multiplexing. These various frequencies, referred to as harmonics, could then be combined into one complex signal and sent down the single wire. On the receiving end, the frequencies would be separated with a matching set of resonators. With a set of frequencies being carried down a single wire, it was realized that the human voice itself could be transmitted electrically through the wire. This effort led to the invention of the telephone. (While the work toward packing multiple telegraph signals onto one wire led to telephony, later advances would pack multiple voice signals onto one wire by increasing the bandwidth by modulating frequencies much higher than human hearing. Eventually, the bandwidth was widened much further by using laser light signals sent through fiber optic cables. Fiber optic transmission can carry 25,000 telephone signals simultaneously down a single fiber.) === Oceanic telegraph cables === Soon after the first successful telegraph systems were operational, the possibility of transmitting messages across the sea by way of submarine communications cables was first proposed. One of the primary technical challenges was to sufficiently insulate the submarine cable to prevent the electric current from leaking out into the water. In 1842, a Scottish surgeon William Montgomerie introduced gutta-percha, the adhesive juice of the Palaquium gutta tree, to Europe. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. Gutta-percha was used as insulation on a wire laid across the Rhine between Deutz and Cologne. In 1849, C. V. Walker, electrician to the South Eastern Railway, submerged a 2 miles (3.2 km) wire coated with gutta-percha off the coast from Folkestone, which was tested successfully. John Watkins Brett, an engineer from Bristol, sought and obtained permission from Louis-Philippe in 1847 to establish telegraphic communication between France and England. The first undersea cable was laid in 1850, connecting the two countries and was followed by connections to Ireland and the Low Countries. The Atlantic Telegraph Company was formed in London in 1856 to undertake to construct a commercial telegraph cable across the Atlantic Ocean. It was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way. John Pender, one of the men on the Great Eastern, later founded several telecommunications companies primarily laying cables between Britain and Southeast Asia. Earlier transatlantic submarine cables installations were attempted in 1857, 1858 and 1865. The 1857 cable only operated intermittently for a few days or weeks before it failed. The study of underwater telegraph cables accelerated interest in mathematical analysis of very long transmission lines. The telegraph lines from Britain to India were connected in 1870. (Those several companies combined to form the Eastern Telegraph Company in 1872.) 
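The "mathematical analysis of very long transmission lines" mentioned above is associated with William Thomson (Lord Kelvin) and, later, Oliver Heaviside, and led to what are now called the telegrapher's equations. The standard textbook form is sketched below for orientation; it is not drawn from this article's sources.

```latex
% Telegrapher's equations for a line with series resistance R and inductance L,
% and shunt conductance G and capacitance C, all per unit length:
\frac{\partial V}{\partial x} = -\left(R\,I + L\,\frac{\partial I}{\partial t}\right),
\qquad
\frac{\partial I}{\partial x} = -\left(G\,V + C\,\frac{\partial V}{\partial t}\right).

% For a long submarine cable, L and G are comparatively small, and the equations
% reduce approximately to a diffusion equation, so a sharp pulse smears out
% ("retardation") with a delay that grows roughly as RC times the square of the
% cable length:
\frac{\partial^{2} V}{\partial x^{2}} \approx R\,C\,\frac{\partial V}{\partial t}.
```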
The HMS Challenger expedition in 1873–1876 mapped the ocean floor for future underwater telegraph cables. Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. This brought news reports from the rest of the world. The telegraph across the Pacific was completed in 1902, finally encircling the world. From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable-laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. === Cable and Wireless Company === Cable & Wireless was a British telecommunications company that traced its origins back to the 1860s, with Sir John Pender as the founder, although the name was only adopted in 1934. It was formed from successive mergers including: The Falmouth, Malta, Gibraltar Telegraph Company; The British Indian Submarine Telegraph Company; The Marseilles, Algiers and Malta Telegraph Company; The Eastern Telegraph Company; The Eastern Extension Australasia and China Telegraph Company; and The Eastern and Associated Telegraph Companies. == Telegraphy and longitude == Main article: History of longitude § Land surveying and telegraphy. The telegraph was very important for sending time signals to determine longitude, providing greater accuracy than previously available. Longitude was measured by comparing local time (for example local noon occurs when the sun is at its highest above the horizon) with absolute time (a time that is the same for an observer anywhere on earth). If the local times of two places differ by one hour, the difference in longitude between them is 15° (360°/24h). Before telegraphy, absolute time could be obtained from astronomical events, such as eclipses, occultations or lunar distances, or by transporting an accurate clock (a chronometer) from one location to the other. The idea of using the telegraph to transmit a time signal for longitude determination was suggested by François Arago to Samuel Morse in 1837, and the first test of this idea was made by Capt. Wilkes of the U.S. Navy in 1844, over Morse's line between Washington and Baltimore. The method was soon in practical use for longitude determination, in particular by the U.S. Coast Survey, and over longer and longer distances as the telegraph network spread across North America and the world, and as technical developments improved accuracy and productivity.: 318–330 : 98–107  The "telegraphic longitude net" soon became worldwide. Transatlantic links between Europe and North America were established in 1866 and 1870. The US Navy extended observations into the West Indies and Central and South America with an additional transatlantic link from South America to Lisbon between 1874 and 1890. British, Russian and US observations created a chain from Europe through Suez, Aden, Madras, Singapore, China and Japan, to Vladivostok, thence to Saint Petersburg and back to Western Europe. Australia's telegraph network was linked to Singapore's via Java in 1871, and the net circled the globe in 1902 with the connection of the Australia and New Zealand networks to Canada's via the All Red Line.
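The arithmetic behind telegraphic longitude determination is simple: the Earth turns 360° in 24 hours, so each hour of local-time difference corresponds to 15° of longitude. A minimal sketch follows; the station times in the example are made up purely for illustration.

```python
def longitude_difference_deg(time_diff_hours: float) -> float:
    """The Earth rotates 360 degrees in 24 hours, i.e. 15 degrees per hour."""
    return time_diff_hours * 15.0

# Hypothetical example: if a telegraphic time signal shows that local noon at
# station B occurs 2 hours 30 minutes later than at station A, then B lies
# 37.5 degrees of longitude west of A.
if __name__ == "__main__":
    print(longitude_difference_deg(2.5))  # 37.5
```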
The two determinations of longitudes, one transmitted from east to west and the other from west to east, agreed within one second of arc (1⁄15 second of time – less than 30 metres). == Telegraphy in war == The ability to send telegrams brought obvious advantages to those conducting war. Secret messages were encoded, so interception alone would not be sufficient for the opposing side to gain an advantage. Geographical constraints on intercepting telegraph cables also improved security; however, once radio telegraphy was developed, interception became far more widespread. === Crimean War === The Crimean War was one of the first conflicts to use telegraphs and was one of the first to be documented extensively. In 1854, the government in London created a military Telegraph Detachment for the Army commanded by an officer of the Royal Engineers. It was to comprise twenty-five men from the Royal Corps of Sappers & Miners trained by the Electric Telegraph Company to construct and work the first field electric telegraph. Journalistic recording of the war was provided by William Howard Russell (writing for The Times newspaper) with photographs by Roger Fenton. News from war correspondents kept the public of the nations involved in the war informed of the day-to-day events in a way that had not been possible in any previous war. After the French extended their telegraph lines to the coast of the Black Sea in late 1854, war news began reaching London in two days. When the British laid an underwater cable to the Crimean peninsula in April 1855, news reached London in a few hours. These prompt daily news reports energised British public opinion on the war, which brought down the government and led to Lord Palmerston becoming prime minister. === American Civil War === During the American Civil War, the telegraph proved its value as a tactical, operational, and strategic communication medium and an important contributor to Union victory. By contrast, the Confederacy failed to make effective use of the South's much smaller telegraph network. Prior to the war, telegraph systems were primarily used in the commercial sector. Government buildings were not interconnected with telegraph lines, but relied on runners to carry messages back and forth. Before the war, the government saw no need to connect lines within city limits; however, it did see the value of connections between cities. As the hub of government, Washington, D.C. had the most connections, but there were only a few lines running north and south out of the city. It was not until the Civil War that the government saw the true potential of the telegraph system. Soon after the shelling of Fort Sumter, the South cut telegraph lines running into D.C., which put the city in a state of panic because residents feared an immediate Southern invasion. Within six months of the start of the war, the U.S. Military Telegraph Corps (USMT) had laid approximately 300 miles (480 km) of line. By war's end it had laid approximately 15,000 miles (24,000 km) of line, 8,000 for military and 5,000 for commercial use, and had handled approximately 6.5 million messages. The telegraph was not only important for communication within the armed forces, but also in the civilian sector, helping political leaders to maintain control over their districts. Even before the war, the American Telegraph Company censored suspect messages informally to block aid to the secession movement.
During the war, Secretary of War Simon Cameron, and later Edwin Stanton, wanted control over the telegraph lines to maintain the flow of information. Early in the war, one of Stanton's first acts as Secretary of War was to move telegraph lines from ending at McClellan's headquarters to terminating at the War Department. Stanton himself said "[telegraphy] is my right arm". Telegraphy assisted Northern victories, including the Battle of Antietam (1862), the Battle of Chickamauga (1863), and Sherman's March to the Sea (1864). The telegraph system still had its flaws. The USMT, while the main source of telegraphers and cable, was still a civilian agency. Most operators were first hired by the telegraph companies and then contracted out to the War Department. This created tension between generals and their operators. One source of irritation was that USMT operators did not have to follow military authority. Usually they performed without hesitation, but they were not required to, so Albert Myer created a U.S. Army Signal Corps in February 1863. As the new head of the Signal Corps, Myer tried to get all telegraph and flag signaling under his command, and therefore subject to military discipline. After creating the Signal Corps, Myer pushed to further develop new telegraph systems. While the USMT relied primarily on civilian lines and operators, the Signal Corp's new field telegraph could be deployed and dismantled faster than USMT's system. === First World War === During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide. The British government censored telegraph cable companies in an effort to root out espionage and restrict financial transactions with Central Powers nations. British access to transatlantic cables and its codebreaking expertise led to the Zimmermann Telegram incident that contributed to the US joining the war. Despite British acquisition of German colonies and expansion into the Middle East, debt from the war led to Britain's control over telegraph cables to weaken while US control grew. === Second World War === World War II revived the 'cable war' of 1914–1918. In 1939, German-owned cables across the Atlantic were cut once again, and, in 1940, Italian cables to South America and Spain were cut in retaliation for Italian action against two of the five British cables linking Gibraltar and Malta. Electra House, Cable & Wireless's head office and central cable station, was damaged by German bombing in 1941. Resistance movements in occupied Europe sabotaged communications facilities such as telegraph lines, forcing the Germans to use wireless telegraphy, which could then be intercepted by Britain. The Germans developed a highly complex teleprinter attachment (German: Schlüssel-Zusatz, "cipher attachment") that was used for enciphering telegrams, using the Lorenz cipher, between German High Command (OKW) and the army groups in the field. These contained situation reports, battle plans, and discussions of strategy and tactics. Britain intercepted these signals, diagnosed how the encrypting machine worked, and decrypted a large amount of teleprinter traffic. == End of the telegraph era == In America, the end of the telegraph era can be associated with the fall of the Western Union Telegraph Company. Western Union was the leading telegraph provider for America and was seen as the best competition for the National Bell Telephone Company. 
Western Union and Bell were both invested in telegraphy and telephone technology. Western Union's decision to allow Bell to gain the advantage in telephone technology was the result of Western Union's upper management's failure to foresee the surpassing of the telephone over the, at the time, dominant telegraph system. Western Union soon lost the legal battle for the rights to their telephone copyrights. This led to Western Union agreeing to a lesser position in the telephone competition, which in turn led to the lessening of the telegraph. While the telegraph was not the focus of the legal battles that occurred around 1878, the companies that were affected by the effects of the battle were the main powers of telegraphy at the time. Western Union thought that the agreement of 1878 would solidify telegraphy as the long-range communication of choice. However, due to the underestimates of telegraph's future and poor contracts, Western Union found itself declining. AT&T acquired working control of Western Union in 1909 but relinquished it in 1914 under threat of antitrust action. AT&T bought Western Union's electronic mail and Telex businesses in 1990. Although commercial "telegraph" services are still available in many countries, transmission is usually done via a computer network rather than a dedicated wired connection. == See also == == References == == Bibliography == Beauchamp, Ken (2001). History of Telegraphy. London: The Institution of Electrical Engineers. ISBN 978-0-85296-792-8. Bowers, Brian, Sir Charles Wheatstone: 1802–1875, IET, 2001 ISBN 0852961030 Calvert, J. B. (2008). "The Electromagnetic Telegraph". Copeland, B. Jack, ed. (2006). Colossus: The Secrets of Bletchley Park's Codebreaking Computers. Oxford: Oxford University Press. ISBN 978-0-19-284055-4. Fahie, John Joseph (1884). A History of Electric Telegraphy, to the Year 1837. London: E. & F.N. Spon. OCLC 559318239. Figes, Orlando (2010). Crimea: The Last Crusade. London: Allen Lane. ISBN 978-0-7139-9704-0. Gibberd, William (1966). Australian Dictionary of Biography: Edward Davy. Hochfelder, David (2012). The Telegraph in America, 1832–1920. Johns Hopkins University Press. pp. 6–17, 138–141. ISBN 9781421407470. Holzmann, Gerard J.; Pehrson, Björn, The Early History of Data Networks, Wiley, 1995 ISBN 0818667826 Huurdeman, Anton A. (2003). The Worldwide History of Telecommunications. Wiley-Blackwell. ISBN 978-0471205050. Jones, R. Victor (1999). Samuel Thomas von Sömmering's "Space Multiplexed" Electrochemical Telegraph (1808–1810). Archived from the original on 11 October 2012. Retrieved 1 May 2009. Attributed to Michaelis, Anthony R. (1965), From semaphore to satellite, Geneva: International Telecommunication Union Kennedy, P. M. (October 1971). "Imperial Cable Communications and Strategy, 1870–1914". The English Historical Review. 86 (341): 728–752. doi:10.1093/ehr/lxxxvi.cccxli.728. JSTOR 563928. Kieve, Jeffrey L. (1973). The Electric Telegraph: A Social and Economic History. David and Charles. ISBN 0-7153-5883-9. OCLC 655205099. Mercer, David, The Telephone: The Life Story of a Technology, Greenwood Publishing Group, 2006 ISBN 031333207X Schwoch, James (2018). Wired into Nature: The Telegraph and the North American Frontier. University of Illinois Press. ISBN 978-0252041778. == Further reading == Botjer, George F. (2015). Samuel F.B. Morse and the Dawn of the Age of Electricity. Lanham, MD: Lexington Books. ISBN 978-1-4985-0140-8 – via Internet Archive. Cooke, W.F., The Electric Telegraph, Was it invented by Prof. 
Wheatstone?, London 1856. Gray, Thomas (1892). "The Inventors of the Telegraph And Telephone". Annual Report of the Board of Regents of the Smithsonian Institution. 71: 639–659. Retrieved 7 August 2009. Gauß, C. F., Works, Göttingen 1863–1933. Howe, Daniel Walker, What Hath God Wrought: The Transformation of America, 1815–1848, Oxford University Press, 2007 ISBN 0199743797. Peterson, M.J. Roots of Interconnection: Communications, Transportation and Phases of the Industrial Revolution, International Dimensions of Ethics Education in Science and Engineering Background Reading, Version 1; February 2008. Steinheil, C.A., Ueber Telegraphie, München 1838. Yates, JoAnne. The Telegraph's Effect on Nineteenth Century Markets and Firms, Massachusetts Institute of Technology, pp. 149–163. == External links == Morse Telegraph Club, Inc. (The Morse Telegraph Club is an international non-profit organization dedicated to the perpetuation of the knowledge and traditions of telegraphy.) "Transatlantic Cable Communications". Canada's Digital Collections. Archived from the original on 29 August 2005. Shilling's telegraph, an exhibit of the A.S. Popov Central Museum of Communications History of electromagnetic telegraph The first electric telegraphs The Dawn of Telegraphy (in Russian) Pavel Shilling and his telegraph- article in PCWeek, Russian edition. Distant Writing – The History of the Telegraph Companies in Britain between 1838 and 1868 NASA – Carrington Super Flare Archived 29 March 2010 at the Wayback Machine NASA 6 May 2008 How Cables Unite The World – a 1902 article about telegraph networks and technology from the magazine The World's Work "Telegraph" . New International Encyclopedia. 1905. Indiana telegraph and telephone collection, Rare Books and Manuscripts, Indiana State Library Wonders of electricity and the elements, being a popular account of modern electrical and magnetic discoveries, magnetism and electric machines, the electric telegraph and the electric light, and the metal bases, salt, and acids from Science History Institute Digital Collections The electro magnetic telegraph: with an historical account of its rise, progress, and present condition from Science History Institute Digital Collections
Wikipedia/Electrical_telegraph
The AD–AS or aggregate demand–aggregate supply model (also known as the aggregate supply–aggregate demand or AS–AD model) is a widely used macroeconomic model that explains short-run and long-run economic changes through the relationship of aggregate demand (AD) and aggregate supply (AS) in a diagram. It exists in an older, static version depicting the two variables output and price level, and in a newer, dynamic version showing output and inflation (i.e. the change in the price level over time, which is usually of more direct interest). The AD–AS model was invented around 1950 and became one of the primary simplified representations of macroeconomic issues toward the end of the 1970s when inflation became an important political issue. From around 2000, a modified, dynamic version of the AD–AS model was developed, incorporating contemporary monetary policy strategies that focus on inflation targeting and use the interest rate as the primary policy instrument; it has gradually superseded the traditional static version in university-level economics textbooks. The dynamic AD–AS model can be viewed as a simplified version of the more advanced and complex dynamic stochastic general equilibrium (DSGE) models which are state-of-the-art models used by central banks and other organizations to analyze economic fluctuations. Unlike DSGE models, the dynamic AD–AS model does not provide a microeconomic foundation in the form of optimizing firms and households, but the macroeconomic relationships ultimately posited by the optimizing models are similar to those emerging from the modern-version AD–AS model. At the same time, the latter is much simpler and consequently more easily accessible for students, making it a widespread tool for teaching purposes. == History == === Origins === According to economic historian A. K. Dutt, the AD–AS diagram first made its appearance in 1948 in a contribution by O.H. Brownlee to a textbook on applied economics. A textbook written by Kenneth E. Boulding in the same year also presented a diagram in output-price space, but, unlike Brownlee's version, without trying to solve the model; Boulding rather used the diagram to warn about the dangers of aggregative thinking. Brownlee, by contrast, continued working on the diagram and in 1950 published an article in the Journal of Political Economy, which is allegedly the first published version of a full AD–AS model in Y-P space. In 1951, Jacob Marschak published lecture notes providing the first full textbook treatment of the AD–AS model, presenting the same model as Brownlee's 1948 version, though citing neither Brownlee nor anyone else. === Growing popularity in 1970s === In the course of time, the model spread to several textbooks, becoming a standard modelling tool in principles and intermediate economics textbooks. In particular, after inflation became important in the late 1960s and 1970s, there was a need to complement the IS–LM model, which had been a dominant model for teaching purposes until that time but assumed a constant price level, with a model that incorporated aggregate supply and consequently could provide an explanation of changes in the price level. Thus, the "IS–LM–AS model", graphically depicted as an aggregate supply curve together with a curve combining the IS and LM curves and called an aggregate demand curve, became a standard teaching model only after the inflationary supply shocks of the 1970s.
In particular, two intermediate textbooks that appeared in 1978 and later became widely used, one by Rudi Dornbusch and Stanley Fischer and one by Robert J. Gordon, together with William Hoban Branson's textbook from its second edition in 1979, all presented an AD–AS model. === Rise of the dynamic AD–AS version === From around the turn of the century, the traditional AD–AS diagram, as well as the traditional version of the IS–LM diagram, upon which the derivation of the AD curve rests, has been criticized for being obsolete. One reason is that the traditional IS–LM diagram and, consequently, the AD curve rested upon the assumption of the central bank targeting the money supply as its central policy variable. In contrast, central banks since around 1990 have largely abandoned controlling the money supply, instead attempting to target inflation, using the policy interest rate as their main policy instrument, possibly via a Taylor rule-like strategy. Another reason is that for real-world policy purposes, it is generally not interesting to analyze the interaction between output and the price level per se, which is what the traditional AD–AS diagram illustrates, but rather between output and the change in the price level, i.e. inflation. Because of that, the original AD–AS model has increasingly been supplanted in textbooks by a dynamic version which directly analyzes equilibria in output and inflation levels, showing these variables along the axes of the diagram. In some textbooks, the dynamic AD–AS version is referred to as the "three-equation New Keynesian model", the three equations being an IS relation, often augmented with a term that allows for expectations influencing demand, a monetary policy (interest) rule and a short-run Phillips curve. Olivier Blanchard in his widely used intermediate-level textbook uses the term IS–LM–PC model (PC standing for Phillips curve) for the same basic construction.: 195–201  === A stepping stone towards DSGE models === The dynamic AD–AS model can be viewed as a simplified version of the more advanced and complex dynamic stochastic general equilibrium (DSGE) models which are state-of-the-art models used by central banks and other organizations to analyze economic fluctuations. Unlike DSGE models, the dynamic AD–AS model does not provide a microeconomic foundation in the form of optimizing firms and households, but the macroeconomic relationships ultimately posited by the optimizing models are similar to those emerging from the modern-version AD–AS model.: 427–428  == Static AD–AS model == The traditional or static AD/AS model illustrates the relationship between output and the price level of the economy under the assumptions of the model, containing both a short-run and a long-run aggregate supply curve (abbreviated SRAS and LRAS, respectively). In the short run, wages and other resource prices are sticky and slow to adjust to new price levels. This gives rise to an upward-sloping or, in the extreme case of completely fixed prices, horizontal SRAS. In the long run, resource prices adjust to the price level, bringing the economy back to its structural output level along a vertical LRAS. Movements of the two curves can be used to predict the effects that various exogenous events will have on two variables: real GDP and the price level. === Aggregate demand curve === The AD (aggregate demand) curve in the static AD–AS model is downward sloping, reflecting a negative correlation between output and the price level on the demand side.
It shows the combinations of the price level and level of the output at which the goods and assets markets are simultaneously in equilibrium. The equation for the AD curve in general terms can be written as $Y = Y^{d}\left(\tfrac{M}{P}, G, T, Z_{1}\right)$, where Y is real GDP, M is the nominal money supply, P is the price level, G is real government spending, T is real taxes levied, and Z1 any other variables that affect aggregate demand. === Aggregate supply curve === The aggregate supply curve in the static AD–AS model illustrates the relationship between the supply of goods and services on the one hand and the price level on the other hand.: 266  Under the premise that the price level is flexible in the long run, but sticky or even completely fixed under shorter time horizons, it is usual to distinguish between a long-run and a short-run aggregate supply curve. Whereas the long-run aggregate supply curve (LRAS) is vertical, the short-run aggregate supply curve will have a positive slope: 377  or, in the extreme case of a completely constant price level, be horizontal.: 268  The equation for the aggregate supply curve in general terms may be written as $Y = Y^{s}\left(W/P,\ P/P^{e},\ Z_{2}\right)$, where W is the nominal wage rate (exogenous due to stickiness in the short run), Pe is the anticipated (expected) price level, and Z2 is a vector of exogenous variables that can affect the position of the labor demand curve. A horizontal aggregate supply curve (sometimes called a "Keynesian" aggregate supply curve) implies that the firm will supply whatever amount of goods is demanded at a particular price level. One possible justification for this is that when there is unemployment, firms can readily obtain as much labour as they want at that current wage, and production can increase without any additional costs (e.g. machines are idle which can simply be turned on). Firms' average costs of production therefore are assumed not to change as their output level changes. The long-run aggregate supply curve refers not to a time frame in which the capital stock is free to be set optimally (as would be the terminology in the micro-economic theory of the firm), but rather to a time frame in which wages are free to adjust in order to equilibrate the labor market and in which price anticipations are accurate. A vertical long-run aggregate supply curve (sometimes called a "classical" aggregate supply curve) illustrates a situation where the level of output does not depend on the price level, but is exclusively determined by the supply of production factors like the capital and the labour force, employment being at its structural ("natural") level.: 267  == Dynamic AD–AS model == The modern or dynamic AD/AS model illustrates the connection between output and inflation, combining an IS relation (i.e., a relation describing aggregate demand as a function of various demand components, some of which are negatively related to the interest rate), a monetary policy rule determining the policy interest rate (which together form the AD curve) and a Phillips curve relationship from which the aggregate supply curve is derived.: 263 : 593–600  === Aggregate demand curve === The AD curve slopes downward, illustrating a negative correlation between output and inflation.
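To make the dynamic version concrete, the following is a minimal numerical sketch of a three-equation model of the kind described in this section, with assumed linear functional forms and purely illustrative parameter values; the article itself does not specify any particular parameterisation. It combines an IS-type demand relation, a simple interest-rate rule, and a short-run Phillips curve with adaptive inflation expectations, and traces how output and inflation return toward their long-run levels after an inflation shock.

```python
# Minimal sketch of a "three-equation" dynamic AD-AS model with assumed linear
# forms and illustrative parameters (not taken from any particular textbook).

A_IS = 0.5       # interest sensitivity of demand (IS relation)
THETA = 1.0      # strength of the central bank's response to inflation
PHI = 0.5        # slope of the short-run Phillips curve
PI_TARGET = 2.0  # inflation target, percent
R_NATURAL = 2.0  # natural real interest rate, percent

def simulate(initial_inflation: float, periods: int = 12):
    """Trace inflation, the output gap and the real rate after an inflation shock."""
    pi = initial_inflation
    path = []
    for _ in range(periods):
        # Monetary policy rule: the real rate rises when inflation exceeds target.
        r = R_NATURAL + THETA * (pi - PI_TARGET)
        # IS relation / dynamic AD: the output gap falls when the real rate is high.
        y = -A_IS * (r - R_NATURAL)
        # Short-run Phillips curve with adaptive expectations: dynamic AS.
        pi = pi + PHI * y
        path.append((round(pi, 2), round(y, 2), round(r, 2)))
    return path

if __name__ == "__main__":
    for t, (pi, y, r) in enumerate(simulate(initial_inflation=5.0), start=1):
        print(f"t={t:2d}  inflation={pi:5.2f}%  output gap={y:5.2f}  real rate={r:4.2f}%")
```

In this sketch the output gap stays negative while inflation is above target and closes as inflation returns to the 2 percent target, mirroring the downward-sloping dynamic AD curve and the upward-sloping dynamic AS curve discussed here.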
When the central bank observes increased inflation, it will raise its policy interest rate sufficiently to increase the real interest rate of the economy, dampening aggregate demand and consequently the overall activity level of the economy.: 263 : 411  === Aggregate supply curve === The dynamic AS curve slopes upward, reflecting the mechanisms of the Phillips curve: Other things equal, higher levels of activity are accompanied by larger increases in wages and other marginal costs of production, causing higher inflation through the firms' price-setting mechanisms: 263 : 409  as they induce firms to raise their prices at a higher rate.: 594  There will be a vertical long-run aggregate supply curve at the level of structural (natural) output.: 595  == Shifts of aggregate demand and aggregate supply == The following summarizes the exogenous events that could shift the aggregate supply or aggregate demand curve to the right. Exogenous events happening in the opposite direction would shift the relevant curve in the opposite direction. === Shifts of aggregate demand === The dynamic aggregate demand curve shifts when either fiscal policy or monetary policy is changed or any other kind of shock to aggregate demand occurs.: 411  Changes in the level of potential Y also shift the AD curve, so that this type of shock has an effect on both the supply and the demand side of the model.: 412  Rightward aggregate demand shifts can be caused by any shock to one of the autonomous components of aggregate demand, e.g.: an exogenous increase in consumer spending; an exogenous increase in investment spending on physical capital; an exogenous increase in intended inventory investment; an exogenous increase in government spending on goods and services; an exogenous increase in transfer payments from the government to the people; an exogenous decrease in taxes levied; an exogenous increase in purchases of the country's exports by people in other countries; or an exogenous decrease in imports from other countries. === Shifts of aggregate supply === The dynamic aggregate supply curve is drawn for a given value of inflation expectations and level of potential output. Changes in either of these variables as well as a number of possible supply shocks will shift the dynamic aggregate supply curve.: 409  The long-run aggregate supply curve is affected by events that affect the potential output of the economy. These include the following shocks which would shift the long-run aggregate supply curve to the right: an increase in population; an increase in the physical capital stock; technological progress. == Applications == === Functional finance theory === AD–AS analysis is applied to functional finance theory and/or MMT to study the relationship between the inflation rate and the economic growth rate. When a country's economy grows, the country needs deficit spending to maintain full employment without inflation. Inflation starts to occur when the interest rate on its government bonds becomes larger than the growth rate, provided that the country maintains full employment. Also, when the country recovers from recession, it needs to increase government expenditure to achieve full employment. == See also == Keynesian cross AD–IA model Classical dichotomy DAD–SAS model == References == == Further reading == Blanchard, Olivier (2021). Macroeconomics (Eighth, global ed.). Harlow, England: Pearson. ISBN 978-0-134-89789-9. Dutt, Amitava K.; Skott, Peter (1996). "Keynesian Theory and the Aggregate-Supply/Aggregate-Demand Framework: A Defense".
Eastern Economic Journal. 22 (3): 313–331. Dutt, Amitava K.; Skott, Peter (2006). "Keynesian Theory and the AD-AS Framework: A Reconsideration". In Chiarella, Carl; Franke, Reiner; Flaschel, Peter; Semmler, Willi (eds.). Quantitative and Empirical Analysis of Nonlinear Dynamic Macromodels. Contributions to Economic Analysis. Vol. 277. Emerald Group. pp. 149–172. doi:10.1016/S0573-8555(05)77006-1. ISBN 978-0-444-52122-4. S2CID 16009286. Mankiw, Nicholas Gregory (2022). Macroeconomics (Eleventh, international ed.). New York, NY: Worth Publishers, Macmillan Learning. ISBN 978-1-319-26390-4. Palley, Thomas I. (1997). "Keynesian Theory and AS/AD Analysis". Eastern Economic Journal. 23 (4): 459–468. JSTOR 40325806. Romer, David (2019). Advanced macroeconomics (Fifth ed.). New York, NY: McGraw-Hill. ISBN 978-1-260-18521-8. Sørensen, Peter Birch; Whitta-Jacobsen, Hans Jørgen (2022). Introducing advanced macroeconomics: growth and business cycles (Third ed.). Oxford, United Kingdom New York, NY: Oxford University Press. ISBN 978-0-19-885049-6. == External links == Sparknotes: Aggregate Supply and Aggregate Demand brief explanation of the AD–AS model "Aggregate Demand and Aggregate Supply" in CyberEconomics by Robert Schenk explains the AD–AS model and explains its relation to the IS/LM model "ThinkEconomics: Macroeconomic Phenomena in the AD/AS Model" includes an interactive graph demonstrating inflationary changes in a graph based on the AD–AS model "ThinkEconomics: The Aggregate Demand and Aggregate Supply Model" includes an interactive AD-AS graph that tests one's knowledge of how the AD and AS curves shift under different conditions
Wikipedia/AD–AS_model
An energy crisis or energy shortage is any significant bottleneck in the supply of energy resources to an economy. In literature, it often refers to one of the energy sources used at a certain time and place, in particular, those that supply national electricity grids or those used as fuel in industrial development. Population growth has led to a surge in the global demand for energy in recent years. In the 2000s, this new demand – together with Middle East tension, the falling value of the US dollar, dwindling oil reserves, concerns over peak oil, and oil price speculation – triggered the 2000s energy crisis, which saw the price of oil reach an all-time high of $147.30 per barrel ($926/m3) in 2008. Most energy crises have been caused by localized shortages, wars and market manipulation. However, the recent historical energy crises listed below were not caused by such factors. == Causes == Most energy crises have been caused by localized shortages, wars and market manipulation. Some have argued that government actions like tax hikes, nationalisation of energy companies, and regulation of the energy sector shift supply and demand of energy away from its economic equilibrium. However, the recent historical energy crises listed below were not caused by such factors. Market failure is possible when monopoly manipulation of markets occurs. A crisis can develop due to industrial actions like union-organized strikes or government embargoes. The cause may be over-consumption, aging infrastructure, choke point disruption, or bottlenecks at oil refineries or port facilities that restrict fuel supply. An emergency may arise during very cold winters due to increased consumption of energy. Large fluctuations and manipulation in futures derivatives can impact prices. Investment banks traded 80% of oil derivatives as of May 2012, compared to 30% a decade earlier. This consolidation of trade contributed to an improvement of global energy output from 117,687 TWh in 2000 to 143,851 TWh in 2008. Limitations on free trade for derivatives could reverse this trend of growth in energy production. Kuwaiti Oil Minister Hani Hussein stated that "Under the supply and demand theory, oil prices today are not justified," in an interview with Upstream. Pipeline failures and other accidents may cause minor interruptions to energy supplies. A crisis could also emerge after infrastructure damage from severe weather. Attacks by terrorists or militia on important infrastructure are a possible problem for energy consumers, with a successful strike on a Middle East facility potentially causing global shortages. Political events – for example regime change, monarchy collapse, military occupation, or a coup – may disrupt oil and gas production and create shortages. Fuel shortages can also result from excessive and wasteful use of fuel. == Historical crises == North Korea has had energy shortages for many years. Zimbabwe has experienced a shortage of energy supplies for many years due to financial mismanagement. === 20th century === 1970s energy crisis – caused by the peaking of oil production in major industrial nations (Germany, United States, Canada, etc.)
and embargoes from other producers 1973 oil crisis – caused by an OAPEC oil export embargo by many of the major Arab oil-producing states, in response to Western support of Israel during the Yom Kippur War 1979 oil crisis – caused by the Iranian Revolution 1990 oil price shock – caused by the Gulf War === 2000s === 2000 fuel protests in the United Kingdom in 2000 were caused by a rise in the price of crude oil combined with already relatively high taxation on road fuel in the UK. 2000s energy crisis – Since 2003, a rise in prices caused by continued global increases in petroleum demand coupled with production stagnation, the falling value of the US dollar, and a myriad of other secondary causes. 2000–2001 California electricity crisis – Caused by market manipulation by Enron and failed deregulation; resulted in multiple large-scale power outages 2000–2008 North American natural gas crisis 2004 energy crisis in Argentina 2005, 2008 China experienced severe energy shortages towards the end of 2005 and again in early 2008. During the latter crisis they suffered severe damage to power networks along with diesel and coal shortages. Supplies of electricity in Guangdong province, the manufacturing hub of China, are predicted to fall short by an estimated 10 GW. In 2011 China was forecast to have a second quarter electrical power deficit of 44.85 – 49.85 GW. 2007 Political riots occurring during the 2007 Burmese anti-government protests were sparked by rising energy prices. 2008 energy crisis in Central Asia, caused by abnormally cold temperatures and low water levels in an area dependent on hydroelectric power. At the same time the South African President was appeasing fears of a prolonged electricity crisis in South Africa. 2008. In February, the President of Pakistan announced plans to tackle energy shortages that were reaching crisis stage, despite having significant hydrocarbon reserves. In April 2010, the Pakistani government announced the Pakistan national energy policy, which extended the official weekend and banned neon lights in response to a growing electricity shortage. 2008 South African energy crisis. The South African crisis led to large price rises for platinum in February 2008 and reduced gold production. and continues as of 2023. === 2010s === 2012 United Kingdom fuel crisis 2015 – Nepal experienced a major energy crisis in 2015 when India imposed an economic blockade on Nepal. Nepal faced shortages of various kinds of petroleum products and food materials which severely affected Nepal's economy. 2017 – The Gaza electricity crisis is a result of the tensions between Hamas, which rules the Gaza Strip, and the Palestinian Authority/Fatah, which rules the West Bank over custom tax revenue, funding of the Gaza Strip, and political authority. Residents receive electricity for a few hours a day on a rolling blackout schedule. 2019 California energy crisis === 2020s === 2021 Texas power crisis 2021 United Kingdom natural gas supplier crisis and 2021 United Kingdom fuel supply crisis 2021 global energy crisis. The record-high energy prices were driven by a global surge in demand as the world quit the economic recession caused by COVID-19 pandemic, particularly due to strong energy demand in Asia. The Lebanese liquidity crisis lead to shortages of fuel for electricity plants, resulting in the 2021 Lebanese blackout and public utilities being able to offer power for only a few hours a day. 
Ukrainian energy crisis Iranian energy crisis == Emerging oil shortage == "Peak oil" is the period when the maximum rate of global petroleum extraction is reached, after which the rate of production enters terminal decline. It relates to a long-term decline in the available supply of petroleum. This, combined with increasing demand, significantly increases the worldwide prices of petroleum-derived products. Most significant is the availability and price of liquid fuel for transportation. The US Department of Energy in the Hirsch report indicates that "The problems associated with world oil production peaking will not be temporary, and past 'energy crisis' experience will provide relatively little guidance." === Mitigation efforts === To avoid the serious social and economic implications a global decline in oil production could entail, the 2005 Hirsch report emphasized the need to find alternatives, at least ten to twenty years before the peak, and to phase out the use of petroleum over that time. Such mitigation could include energy conservation, fuel substitution, and the use of unconventional oil. Because mitigation can reduce the use of traditional petroleum sources, it can also affect the timing of peak oil and the shape of the Hubbert curve. Energy policy may be reformed leading to greater energy intensity, for example in Iran with the 2007 Gas Rationing Plan in Iran, Canada and the National Energy Program and in the US with the Energy Independence and Security Act of 2007 also called the Clean Energy Act of 2007. Another mitigation measure is the setup of a cache of secure fuel reserves like the United States Strategic Petroleum Reserve, in case of national emergency. Chinese energy policy includes specific targets within their 5-year plans. Andrew McKillop has been a proponent of a contract and converge model or capping scheme, to mitigate both emissions of greenhouse gases and a peak oil crisis. The imposition of a carbon tax would have mitigating effects on an oil crisis. The Oil Depletion Protocol has been developed by Richard Heinberg to implement a powerdown during a peak oil crisis. While many sustainable development and energy policy organisations have advocated reforms to energy development from the 1970s, some cater to a specific crisis in energy supply including Energy-Quest and the International Association for Energy Economics. The Oil Depletion Analysis Centre and the Association for the Study of Peak Oil and Gas examine the timing and likely effects of peak oil. Ecologist William Rees believes that To avoid a serious energy crisis in coming decades, citizens in the industrial countries should actually be urging their governments to come to an international agreement on a persistent, orderly, predictable, and steepening series of oil and natural gas price hikes over the next two decades. Due to a lack of political viability on the issue, government-mandated fuel prices hikes are unlikely and the unresolved dilemma of fossil fuel dependence is becoming a wicked problem. A global soft energy path seems improbable, due to the rebound effect. Conclusions that the world is heading towards an unprecedented large and potentially devastating global energy crisis due to a decline in the availability of cheap oil lead to calls for a decreasing dependency on fossil fuel. Other ideas concentrate on design and development of improved, energy-efficient urban infrastructure in developing nations. 
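For reference, the Hubbert curve mentioned above is usually written as the derivative of a logistic function: cumulative production approaches the ultimately recoverable resource, while the production rate traces a bell-shaped path that peaks when half of that resource has been extracted. A standard textbook form, not tied to any particular field or dataset, is sketched below.

```latex
% Cumulative production Q(t) follows a logistic curve toward the ultimately
% recoverable resource Q_max; the production rate P(t) is its derivative:
Q(t) = \frac{Q_{\max}}{1 + e^{-k\,(t - t_{\text{peak}})}},
\qquad
P(t) = \frac{dQ}{dt} = k\,Q(t)\left(1 - \frac{Q(t)}{Q_{\max}}\right),
% which is bell-shaped and reaches its maximum at t = t_peak, where Q = Q_max / 2.
```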
Government funding for alternative energy is more likely to increase during an energy crisis, as are incentives for oil exploration. For example, funding for research into inertial confinement fusion technology increased during the 1970s. Kirk Sorensen and others have suggested that additional nuclear power plants, particularly liquid fluoride thorium reactors, have the energy density to mitigate global warming and replace the energy from peak oil, peak coal and peak gas. The reactors produce electricity and heat, so much of the transportation infrastructure should move over to electric vehicles. However, the high process heat of the molten salt reactors could be used to make liquid fuels from any carbon source. == Social and economic effects == The macroeconomic implications of a supply shock-induced energy crisis are large, because energy is the resource used to exploit all other resources. Oil price shocks can affect the rest of the economy through delayed business investment, sectoral shifts in the labor market, or monetary policy responses. When energy markets fail, an energy shortage develops. Electricity consumers may experience intentionally engineered rolling blackouts during periods of insufficient supply or unexpected power outages, regardless of the cause. Industrialized nations are dependent on oil, and efforts to restrict the supply of oil would have an adverse effect on the economies of oil producers. For the consumer, the price of natural gas, gasoline (petrol) and diesel for cars and other vehicles rises. An early response from stakeholders is the call for reports, investigations and commissions into the price of fuels. There are also movements towards the development of more sustainable urban infrastructure. In the market, new technology and energy efficiency measures become desirable for consumers seeking to decrease transport costs. Examples include: the first gasoline hybrid electric automobile, developed by Briggs & Stratton in 1980, along with later plug-in hybrids; the growth of advanced biofuels; innovations like the Dahon, a folding bicycle; modernized and electrified passenger transport, including railway electrification systems and new engines such as the Ganz-Mavag locomotive; and variable compression ratio engines for vehicles. Other responses include the development of unconventional oil sources such as synthetic fuel from places like the Athabasca Oil Sands, more renewable energy commercialization and use of alternative propulsion. There may be a relocation trend towards local foods and possibly microgeneration, solar thermal collectors and other green energy sources. Tourism trends and gas-guzzler ownership vary with fuel costs. Energy shortages can influence public opinion on subjects from nuclear power plants to electric blankets. Building construction techniques (improved insulation, reflective roofs, thermally efficient windows, etc.) change to reduce heating costs. Recent surveys found that the percentage of businesses indicating that energy prices represent a barrier to investment increased in 2022 (to 82%), particularly among those who see energy prices as a significant obstacle (59%). Because energy prices and energy intensity vary across nations and industries, the share of businesses that view energy costs as a key obstacle also varies by country, ranging, for example, from 24% in Finland to 81% in Greece. === Crisis management === An electricity shortage is felt most acutely in heating, cooking, and water supply.
Therefore, a sustained energy crisis may become a humanitarian crisis. If an energy shortage is prolonged, a crisis management phase is enforced by authorities. Energy audits may be conducted to monitor usage. Various curfews intended to increase energy conservation may be initiated to reduce consumption. For example, to conserve power during the Central Asia energy crisis, authorities in Tajikistan ordered bars and cafes to operate by candlelight. In the worst kind of energy crisis, energy rationing and fuel rationing may be imposed. Panic buying may beset outlets as awareness of shortages spreads. Facilities close down to save on heating oil, and factories cut production and lay off workers. The risk of stagflation increases. == See also == Power outage Energy conservation Energy market Embodied energy Energy industry Gasoline usage and pricing Peak coal Petroleum politics Resource-based view Social metabolism == References == == Further reading == Ammann, Daniel (2009). The King of Oil: The Secret Lives of Marc Rich. New York: St. Martin's Press. ISBN 978-0-312-57074-3. The Power of Community: How Cuba Survived Peak Oil – examines the effect of cold war oil shortages during the Special Period. Resource Wars: The New Landscape of Global Conflict by Michael Klare Half Gone: Oil, Gas, Hot Air and the Global Energy Crisis by Jeremy Leggett The Long Emergency by James Howard Kunstler, explores a psychology of previous investment Eating Fossil Fuels by Dale Allen Pfeiffer The Coming Oil Crisis by Colin Campbell Energy and American Society: Thirteen Myths – disputes that an energy crisis existed in 2007 The Final Energy Crisis (2nd edition), edited by Sheila Newman (Pluto Press, London, 2008); a study of energy trends, prospects, assets and liabilities in different political systems and regions The End of Oil by Paul Roberts Sustainable Energy – Without the Hot Air, David J.C. MacKay, 384 pages, UIT Cambridge (2009) ISBN 978-0954452933 2081: A Hopeful View of the Human Future, Gerard K. O'Neill, 284 pages, Simon & Schuster (1981) ISBN 978-0671242572 The Nuclear Imperative: A Critical Look at the Approaching Energy Crisis (More Physics for Presidents), Jeff Eerkens, 212 pages, Springer (2010) ISBN 978-9048186662 Rocks, Lawrence; Runyon, Richard P (1972). The Energy Crisis. Crown Publishers. ISBN 978-0-517-501641. == External links == Worldwide energy shortages
Wikipedia/Energy_crisis
Life in Great Britain during the Industrial Revolution shifted from an agrarian-based society to an urban, industrialised society. New social and technological ideas were developed, such as the factory system and the steam engine. Work became more regimented and disciplined, and moved outside the home, with large segments of the rural population migrating to the cities. The industrial belts of Great Britain included the Scottish Lowlands, South Wales, northern England, and the English Midlands. The establishment of major factory centers assisted in the development of canals, roads, and railroads, particularly in Derbyshire, Lancashire, Cheshire, Staffordshire, Nottinghamshire, and Yorkshire. These regions saw the formation of a new workforce, described in Marxist theory as the proletariat. == Living standards == The nature of the Industrial Revolution's impact on living standards in Britain is debated among historians, with Charles Feinstein identifying detrimental impacts on British workers, whilst other historians, including Peter Lindert and Jeffrey Williamson, claim the Industrial Revolution improved the living standards of British people. Increasing employment of workers in factories led to a marked decline in the working conditions of the average worker: in the absence of labour laws, factories had few safety measures, and accidents resulting in injuries were commonplace. Poor ventilation in workplaces such as cotton mills, coal mines, iron-works and brick factories is thought to have led to the development of respiratory diseases among workers. Housing for working-class people who migrated to the cities was often overcrowded and unsanitary, creating a favourable environment for the spread of diseases such as typhoid, cholera and smallpox, further exacerbated by a lack of sick leave. According to one observer, "I am convinced that at no period of English history for which authentic records exist, was the condition of manual labor worse than it was in the forty years from 1782 to 1821, the period in which manufacturers and merchants accumulated fortune rapidly, and in which the rent of agricultural land was doubled." There was, however, a rise in real income and an increase in the availability of various consumer goods to the lower classes during this period. Prior to the industrial revolution, increases in real wages would be offset by subsequent decreases, a phenomenon which ceased to occur following the revolution. The real wage of the average worker doubled in just 32 years, from 1819 to 1851, which brought many people out of poverty. == Child labour == In the industrial districts, children tended to enter the workforce at younger ages than in rural districts. Children were employed in preference to adults as they were deemed more compliant and therefore easier to deal with. Although most families channeled their children's earnings into providing a better diet for them, working in the factories tended to have an overall negative effect on the health of the children. Child labourers tended to be orphans, children of widows, or from the poorest families. Children were preferred workers in textile mills as they worked for lower wages and had nimble fingers. Children's work mainly consisted of working under machines as well as cleaning and oiling tight areas. Children were physically punished by their superiors if they did not abide by their superiors' expectations of work ethic. The punishments occurred as a result of the drive of master-manufacturers to maintain high output in the factories.
The punishments and poor work conditions had a negative effect on the physical health of the children, causing physical deformities and illnesses. Furthermore, childhood diseases from this era have been linked to larger deformities in the future. Gender was not a discriminator for how children were treated when working during the industrial revolution. Both boys and girls would start working at the age of four or five. A sizeable proportion of children working in the mines were under 13, and a larger proportion were aged 13–18. Mines of the era were not built for stability; rather, they were small and low. Children, therefore, were needed to crawl through them. The conditions in the mines were unsafe; children would often have limbs crippled or their bodies distorted, or be killed. Children could get lost within the mines for days at a time. The air in the mines was harmful to breathe and could cause painful and fatal diseases. === Reforms for change === The Health and Morals of Apprentices Act 1802 tried to improve conditions for workers by making factory owners more responsible for the housing and clothing of the workers, but with little success. This act was never put into practice because magistrates failed to enforce it. The Cotton Mills and Factories Act 1819 forbade the employment of children under the age of nine in cotton mills, and limited the hours of work for children aged 9–16 to twelve hours a day. This act was a major step towards a better life for children since they were less likely to fall asleep during work, resulting in fewer injuries and beatings in the workplace. Michael Sadler was one of the pioneers in addressing the living and working conditions of industrial workers. In 1832, he led a parliamentary investigation of the conditions of textile workers. The Ashley Commission was another investigation committee, this time studying the situation of mine workers. One finding of the investigation was the observation that, alongside increased productivity, the number of working hours of the wage workers had also doubled in many cases. The efforts of Michael Sadler and the Ashley Commission resulted in the passage of the 1833 act which limited the number of working hours for women and children. This bill limited children aged 9–18 to working no more than 48 hours a week, and stipulated that they spend two hours at school during work hours. The Act also created the office of factory inspector and provided for routine inspections of factories to ensure factories implemented the reforms. According to one cotton manufacturer: We have never worked more than seventy-one hours a week before Sir John Hobhouse's Act was passed. We then came down to sixty-nine; and since Lord Althorp's Act was passed, in 1833, we have reduced the time of adults to sixty-seven and a half hours a week, and that of children under thirteen years of age to forty-eight hours in the week, though to do this latter has, I must admit, subjected us to much inconvenience, but the elder hands to more, in as much as the relief given to the child is in some measure imposed on the adult. The first report, on women and children in mines, led to the Mines and Collieries Act 1842, which stated that children under the age of ten could not work in mines and that no women or girls could work in the mines. The second report in 1843 reinforced this act. The Factories Act 1844 limited women and young adults to working 12-hour days, and children aged 9 to 13 could work only nine-hour days.
The Act also made mill masters and owners more accountable for injuries to workers. The Factories Act 1847, also known as the ten-hour bill, made it law that women and young people worked not more than ten hours a day and a maximum of 63 hours a week. The last two major factory acts of the Industrial Revolution were introduced in 1850 and 1856. After these acts, factories could no longer dictate working hours for women and children. They were to work from 6 am to 6 pm in the summer, and 7 am to 7 pm in the winter. These acts took a lot of power and authority away from the manufacturers and allowed women and children to have more personal time for the family and for themselves. The Prevention of Cruelty to, and Protection of, Children Act 1889 aimed to stop the abuse of children in both the work and family sphere of life. The Elementary Education Act 1870 allowed all children within the United Kingdom to have access to education. Education was not made compulsory until 1880 since many factory owners feared the removal of children as a source of cheap labour. With the basic mathematics and English skills that children were acquiring, however, factory owners had a growing pool of workers who could read. == See also == Child labour in the British Industrial Revolution Economy, industry, and trade of the Victorian era == Notes == == References == Clark, Gregory (2007) A Farewell to Alms: A Brief Economic History of the World Princeton University Press ISBN 978-0-691-12135-2. Mokyr, Joel. (1990). The Lever of Riches - Technological Creativity and Economic Progress. Oxford University Press. ISBN 0-19-506113-6. Stearns, Peter N. (1993). The Industrial Revolution in World History. Westview Press. ISBN 0-8133-8596-2. == External links == The Industrial Revolution: An Introduction. The Growth of Victorian Railways.
Wikipedia/Life_in_Great_Britain_during_the_Industrial_Revolution
Modern monetary theory or modern money theory (MMT) is a heterodox macroeconomic theory that describes currency as a public monopoly and unemployment as evidence that a currency monopolist is overly restricting the supply of the financial assets needed to pay taxes and satisfy savings desires. According to MMT, governments do not need to worry about accumulating debt since they can pay interest by printing money. MMT argues that the primary risk once the economy reaches full employment is inflation, which acts as the only constraint on spending. MMT also argues that inflation can be controlled by increasing taxes on everyone, to reduce the spending capacity of the private sector. MMT is opposed to the mainstream understanding of macroeconomic theory and has been criticized heavily by many mainstream economists. MMT is also strongly opposed by members of the Austrian school of economics. == Principles == MMT's main tenets are that a government that issues its own fiat money: Can pay for goods, services, and financial assets without a need to first collect money in the form of taxes or debt issuance in advance of such purchases Cannot be forced to default on debt denominated in its own currency Is limited in its money creation and purchases only by inflation, which accelerates once the real resources (labour, capital and natural resources) of the economy are utilized at full employment Should strengthen automatic stabilisers to control demand-pull inflation, rather than relying upon discretionary tax changes Issues bonds as a monetary policy device, rather than as a funding device Uses taxation to provide the fiscal space to spend without causing inflation and also to give a value to the currency. Taxation is often said in MMT not to fund the spending of a currency-issuing government, but without it no real spending is possible. The first four MMT tenets do not conflict with mainstream economics understanding of how money creation and inflation works. However, MMT economists disagree with mainstream economics about the fifth tenet: the impact of government deficits on interest rates. == History == MMT synthesizes ideas from the state theory of money of Georg Friedrich Knapp (also known as chartalism) and the credit theory of money of Alfred Mitchell-Innes, the functional finance proposals of Abba Lerner, Hyman Minsky's views on the banking system and Wynne Godley's sectoral balances approach. Knapp wrote in 1905 that "money is a creature of law", rather than a commodity. Knapp contrasted his state theory of money with the Gold Standard view of "metallism", where the value of a unit of currency depends on the quantity of precious metal it contains or for which it may be exchanged. He said that the state can create pure paper money and make it exchangeable by recognizing it as legal tender, with the criterion for the money of a state being "that which is accepted at the public pay offices". The prevailing view of money was that it had evolved from systems of barter to become a medium of exchange because it represented a durable commodity which had some use value, but proponents of MMT such as Randall Wray and Mathew Forstater said that more general statements appearing to support a chartalist view of tax-driven paper money appear in the earlier writings of many classical economists, including Adam Smith, Jean-Baptiste Say, J. S. Mill, Karl Marx, and William Stanley Jevons. 
Alfred Mitchell-Innes wrote in 1914 that money exists not as a medium of exchange but as a standard of deferred payment, with government money being debt the government may reclaim through taxation. Innes said: Whenever a tax is imposed, each taxpayer becomes responsible for the redemption of a small part of the debt which the government has contracted by its issues of money, whether coins, certificates, notes, drafts on the treasury, or by whatever name this money is called. He has to acquire his portion of the debt from some holder of a coin or certificate or other form of government money, and present it to the Treasury in liquidation of his legal debt. He has to redeem or cancel that portion of the debt ... The redemption of government debt by taxation is the basic law of coinage and of any issue of government 'money' in whatever form. Knapp and "chartalism" are referenced by John Maynard Keynes in the opening pages of his 1930 Treatise on Money and appear to have influenced Keynesian ideas on the role of the state in the economy. By 1947, when Abba Lerner wrote his article "Money as a Creature of the State", economists had largely abandoned the idea that the value of money was closely linked to gold. Lerner said that responsibility for avoiding inflation and depressions lay with the state because of its ability to create or tax away money. Hyman Minsky seemed to favor a chartalist approach to understanding money creation in his Stabilizing an Unstable Economy, while Basil Moore, in his book Horizontalists and Verticalists, lists the differences between bank money and state money. In 1996, Wynne Godley wrote an article on his sectoral balances approach, which MMT draws from. Economists Warren Mosler, L. Randall Wray, Stephanie Kelton, Bill Mitchell and Pavlina R. Tcherneva are largely responsible for reviving the idea of chartalism as an explanation of money creation; Wray refers to this revived formulation as neo-chartalism. Rodger Malcolm Mitchell's book Free Money (1996) describes in layman's terms the essence of chartalism. Pavlina R. Tcherneva has developed the first mathematical framework for MMT and has largely focused on developing the idea of the job guarantee. Bill Mitchell, professor of economics and Director of the Centre of Full Employment and Equity (CoFEE) at the University of Newcastle in Australia, coined the term 'modern monetary theory'. In their 2008 book Full Employment Abandoned, Mitchell and Joan Muysken use the term to explain monetary systems in which national governments have a monopoly on issuing fiat currency and where a floating exchange rate frees monetary policy from the need to protect foreign exchange reserves. Some contemporary proponents, such as Wray, place MMT within post-Keynesian economics, while MMT has been proposed as an alternative or complementary theory to monetary circuit theory, both being forms of endogenous money, i.e., money created within the economy, as by government deficit spending or bank lending, rather than from outside, perhaps with gold. In the complementary view, MMT explains the "vertical" (government-to-private and vice versa) interactions, while circuit theory is a model of the "horizontal" (private-to-private) interactions. By 2013, MMT had attracted a popular following through academic blogs and other websites. In 2019, MMT became a major topic of debate after U.S. Representative Alexandria Ocasio-Cortez said in January that the theory should be a larger part of the conversation. 
In February 2019, Macroeconomics became the first academic textbook based on the theory, written by Bill Mitchell, Randall Wray, and Martin Watts. MMT became increasingly used by chief economists and Wall Street executives for economic forecasts and investment strategies. The theory was also intensely debated by lawmakers in Japan, which was planning to raise taxes after years of deficit spending. In June 2020, Stephanie Kelton's MMT book The Deficit Myth became a New York Times bestseller. In 2020 the Sri Lankan Central Bank, under the governor W. D. Lakshman, cited MMT as a justification for adopting unconventional monetary policy, which was continued by Ajith Nivard Cabraal. This has been heavily criticized and widely cited as causing accelerating inflation and exacerbating the Sri Lankan economic crisis. MMT scholars Stephanie Kelton and Fadhel Kaboub maintain that the Sri Lankan government's fiscal and monetary policy bore little resemblance to the recommendations of MMT economists. == Theoretical approach == In sovereign financial systems, banks can create money, but these "horizontal" transactions do not increase net financial assets because assets are offset by liabilities. According to MMT advocates, "The balance sheet of the government does not include any domestic monetary instrument on its asset side; it owns no money. All monetary instruments issued by the government are on its liability side and are created and destroyed with spending and taxing or bond offerings." In MMT, "vertical money" enters circulation through government spending. Taxation, together with the legal tender power of the currency to discharge debt, establishes fiat money as currency, giving it value by creating demand for it in the form of a private tax obligation. In addition, fines, fees, and licenses create demand for the currency. This currency can be issued by the domestic government or by using a foreign, accepted currency. An ongoing tax obligation, in concert with private confidence and acceptance of the currency, underpins the value of the currency. Because the government can issue its own currency at will, MMT maintains that the level of taxation relative to government spending (the government's deficit spending or budget surplus) is in reality a policy tool that regulates inflation and unemployment, and not a means of funding the government's activities by itself. The approach of MMT typically reverses theories of governmental austerity. The policy implications of the two are likewise typically opposed. === Vertical transactions === MMT labels a transaction between a government entity (public sector) and a non-government entity (private sector) as a "vertical transaction". The government sector includes the treasury and central bank. The non-government sector includes domestic and foreign private individuals and firms (including the private banking system) and foreign buyers and sellers of the currency. == Interaction between government and the banking sector == MMT is based on an account of the "operational realities" of interactions between the government and its central bank, and the commercial banking sector, with proponents like Scott Fullwiler arguing that understanding reserve accounting is critical to understanding monetary policy options. A sovereign government typically has an operating account with the country's central bank. From this account, the government can spend and also receive taxes and other inflows.
Each commercial bank also has an account with the central bank, by means of which it manages its reserves (that is, money for clearing and settling interbank transactions). When a government spends money, its central bank debits its Treasury's operating account and credits the reserve accounts of the commercial banks. The commercial bank of the final recipient will then credit up this recipient's deposit account by issuing bank money. This spending increases the total reserve deposits in the commercial bank sector. Taxation works in reverse: taxpayers have their bank deposit accounts debited, along with their bank's reserve account being debited to pay the government; thus, deposits in the commercial banking sector fall. === Government bonds and interest rate maintenance === Virtually all central banks set an interest rate target, and most now establish administered rates to anchor the short-term overnight interest rate at their target. These administered rates include interest paid directly on reserve balances held by commercial banks, a discount rate charged to banks for borrowing reserves directly from the central bank, and an Overnight Reverse Repurchase (ON RRP) facility rate paid to banks for temporarily forgoing reserves in exchange for Treasury securities. The latter facility is a type of open market operation to help ensure interest rates remain at a target level. According to MMT, the issuing of government bonds is best understood as an operation to offset government spending rather than a requirement to finance it. In most countries, commercial banks' reserve accounts with the central bank must have a positive balance at the end of every day; in some countries, the amount is specifically set as a proportion of the liabilities a bank has, i.e., its customer deposits. This is known as a reserve requirement. At the end of every day, a commercial bank will have to examine the status of their reserve accounts. Those that are in deficit have the option of borrowing the required funds from the Central Bank, where they may be charged a lending rate (sometimes known as a discount window or discount rate) on the amount they borrow. On the other hand, the banks that have excess reserves can simply leave them with the central bank and earn a support rate from the central bank. Some countries, such as Japan, have a support rate of zero. Banks with more reserves than they need will be willing to lend to banks with a reserve shortage on the interbank lending market. The surplus banks will want to earn a higher rate than the support rate that the central bank pays on reserves; whereas the deficit banks will want to pay a lower interest rate than the discount rate the central bank charges for borrowing. Thus, they will lend to each other until each bank has reached their reserve requirement. In a balanced system, where there are just enough total reserves for all the banks to meet requirements, the short-term interbank lending rate will be in between the support rate and the discount rate. Under an MMT framework where government spending injects new reserves into the commercial banking system, and taxes withdraw them from the banking system, government activity would have an instant effect on interbank lending. If on a particular day, the government spends more than it taxes, reserves have been added to the banking system (see vertical transactions). 
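A minimal sketch of the double-entry flows just described may help. The toy functions below track only three aggregate balances (the Treasury's operating account, commercial bank reserves, and a recipient's bank deposit); the function names and all figures are invented for illustration and are not drawn from any actual accounting procedure.

```python
def government_spend(amount, treasury, bank_reserves, customer_deposit):
    """Government spending: the central bank debits the Treasury's operating
    account and credits a commercial bank's reserve account; the bank then
    credits the recipient's deposit account with newly issued bank money."""
    treasury -= amount
    bank_reserves += amount
    customer_deposit += amount
    return treasury, bank_reserves, customer_deposit

def tax_payment(amount, treasury, bank_reserves, customer_deposit):
    """Taxation works in reverse: the taxpayer's deposit account and the
    bank's reserve account are debited, and the Treasury's account is credited."""
    treasury += amount
    bank_reserves -= amount
    customer_deposit -= amount
    return treasury, bank_reserves, customer_deposit

# A day with 30 of spending and 20 of taxes leaves reserves (and deposits)
# 10 higher than they started, i.e. a net injection of reserves.
state = (1000.0, 100.0, 50.0)           # treasury, reserves, deposits (illustrative)
state = government_spend(30.0, *state)
state = tax_payment(20.0, *state)
print(state)                            # -> (990.0, 110.0, 60.0)
```

In this sketch, a day on which spending exceeds taxes ends with bank reserves higher than they began; the consequences of such a net injection for the interbank rate are described next.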
This action typically leads to a system-wide surplus of reserves, with competition between banks seeking to lend their excess reserves, forcing the short-term interest rate down to the support rate (or to zero if a support rate is not in place). At this point, banks will simply keep their reserve surplus with their central bank and earn the support rate. The alternate case is where the government receives more taxes on a particular day than it spends. Then there may be a system-wide deficit of reserves. Consequently, surplus funds will be in demand on the interbank market, and thus the short-term interest rate will rise towards the discount rate. Thus, if the central bank wants to maintain a target interest rate somewhere between the support rate and the discount rate, it must manage the liquidity in the system to ensure that the correct amount of reserves is on-hand in the banking system. Central banks manage liquidity by buying and selling government bonds on the open market. When excess reserves are in the banking system, the central bank sells bonds, removing reserves from the banking system, because private individuals pay for the bonds. When insufficient reserves are in the system, the central bank buys government bonds from the private sector, adding reserves to the banking system. The central bank buys bonds by simply creating money – it is not financed in any way. It is a net injection of reserves into the banking system. If a central bank is to maintain a target interest rate, then it must buy and sell government bonds on the open market in order to maintain the correct amount of reserves in the system. == Horizontal transactions == MMT economists describe any transactions within the private sector as "horizontal" transactions, including the expansion of the broad money supply through the extension of credit by banks. MMT economists regard the concept of the money multiplier, where a bank is completely constrained in lending through the deposits it holds and its capital requirement, as misleading. Rather than being a practical limitation on lending, the cost of borrowing funds from the interbank market (or the central bank) represents a profitability consideration when the private bank lends in excess of its reserve or capital requirements (see interaction between government and the banking sector). Effects on employment are used as evidence that a currency monopolist is overly restricting the supply of the financial assets needed to pay taxes and satisfy savings desires. According to MMT, bank credit should be regarded as a "leverage" of the monetary base and should not be regarded as increasing the net financial assets held by an economy: only the government or central bank is able to issue high-powered money with no corresponding liability. Stephanie Kelton said that bank money is generally accepted in settlement of debt and taxes because of state guarantees, but that state-issued high-powered money sits atop a "hierarchy of money". == Foreign sector == === Imports and exports === MMT proponents such as Warren Mosler say that trade deficits are sustainable and beneficial to the standard of living in the short term. Imports are an economic benefit to the importing nation because they provide the nation with real goods. Exports, however, are an economic cost to the exporting nation because it is losing real goods that it could have consumed. Currency transferred to foreign ownership, however, represents a future claim over goods of that nation. 
Cheap imports may also cause the failure of local firms providing similar goods at higher prices, and hence unemployment, but MMT proponents label that consideration as a subjective value-based one, rather than an economic-based one: It is up to a nation to decide whether it values the benefit of cheaper imports more than it values employment in a particular industry. Similarly a nation overly dependent on imports may face a supply shock if the exchange rate drops significantly, though central banks can and do trade on foreign exchange markets to avoid shocks to the exchange rate. === Foreign sector and government === MMT says that as long as demand exists for the issuer's currency, whether the bond holder is foreign or not, governments can never be insolvent when the debt obligations are in their own currency; this is because the government is not constrained in creating its own fiat currency (although the bond holder may affect the exchange rate by converting to local currency). MMT does agree with mainstream economics that debt in a foreign currency is a fiscal risk to governments, because the indebted government cannot create foreign currency. In this case, the only way the government can repay its foreign debt is to ensure that its currency is continually in high demand by foreigners over the period that it wishes to repay its debt; an exchange rate collapse would potentially multiply the debt many times over asymptotically, making it impossible to repay. In that case, the government can default, or attempt to shift to an export-led strategy or raise interest rates to attract foreign investment in the currency. Either one negatively affects the economy. == Policy implications == Economist Stephanie Kelton explained several points made by MMT in March 2019: Under MMT, fiscal policy (i.e., government taxing and spending decisions) is the primary means of achieving full employment, establishing the budget deficit at the level necessary to reach that goal. In mainstream economics, monetary policy (i.e., Central Bank adjustment of interest rates and its balance sheet) is the primary mechanism, assuming there is some interest rate low enough to achieve full employment. Kelton said that "cutting interest rates is ineffective in a slump" because businesses, expecting weak profits and few customers, will not invest even at very low interest rates. Government interest expenses are proportional to interest rates, so raising rates is a form of stimulus (it increases the budget deficit and injects money into the private sector, other things being equal); cutting rates is a form of austerity. Achieving full employment can be administered via a centrally-funded job guarantee, which acts as an automatic stabilizer. When private sector jobs are plentiful, the government spending on guaranteed jobs is lower, and vice versa. Under MMT, expansionary fiscal policy, i.e., money creation to fund purchases, can increase bank reserves, which can lower interest rates. In mainstream economics, expansionary fiscal policy, i.e., debt issuance and spending, can result in higher interest rates, crowding out economic activity. Economist John T. Harvey explained several of the premises of MMT and their policy implications in March 2019: The private sector treats labor as a cost to be minimized, so it cannot be expected to achieve full employment without government creating jobs, too, such as through a job guarantee. 
The public sector's deficit is the private sector's surplus and vice versa, by accounting identity, which increased private sector debt during the Clinton-era budget surpluses. Creating money activates idle resources, mainly labor. Not doing so is immoral. Demand can be insensitive to interest rate changes, so a key mainstream assumption, that lower interest rates lead to higher demand, is questionable. There is a "free lunch" in creating money to fund government expenditure to achieve full employment. Unemployment is a burden; full employment is not. Creating money alone does not cause inflation; spending it when the economy is at full employment can. MMT says that "borrowing" is a misnomer when applied to a sovereign government's fiscal operations, because the government is merely accepting its own IOUs, and nobody can borrow back their own debt instruments. Sovereign government goes into debt by issuing its own liabilities that are financial wealth to the private sector. "Private debt is debt, but government debt is financial wealth to the private sector." In this theory, sovereign government is not financially constrained in its ability to spend; the government can afford to buy anything that is for sale in currency that it issues; there may, however, be political constraints, like a debt ceiling law. The only constraint is that excessive spending by any sector of the economy, whether households, firms, or public, could cause inflationary pressures. MMT economists advocate a government-funded job guarantee scheme to eliminate involuntary unemployment. Proponents say that this activity can be consistent with price stability because it targets unemployment directly rather than attempting to increase private sector job creation indirectly through a much larger economic stimulus, and maintains a "buffer stock" of labor that can readily switch to the private sector when jobs become available. A job guarantee program could also be considered an automatic stabilizer to the economy, expanding when private sector activity cools down and shrinking in size when private sector activity heats up. MMT economists also say quantitative easing (QE) is unlikely to have the effects that its advocates hope for. Under MMT, QE – the purchasing of government debt by central banks – is simply an asset swap, exchanging interest-bearing dollars for non-interest-bearing dollars. The net result of this procedure is not to inject new investment into the real economy, but instead to drive up asset prices, shifting money from government bonds into other assets such as equities, which enhances economic inequality. The Bank of England's analysis of QE confirms that it has disproportionately benefited the wealthiest. MMT economists say that inflation can be better controlled (than by setting interest rates) with new or increased taxes to remove extra money from the economy. These tax increases would be on everyone, not just billionaires, since the majority of spending is by average Americans. 
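The accounting identity behind the statement above that the public sector's deficit is the private sector's surplus is the standard sectoral-balances decomposition of national income, which is not specific to MMT. With S private saving, I private investment, T taxes, G government spending, X exports and M imports, it can be written as

$$(S - I) + (T - G) + (M - X) = 0,$$

so that, holding the foreign balance fixed, a government deficit (G greater than T) corresponds one-for-one to a domestic private-sector surplus (S greater than I), and vice versa; this is also the sense in which the Clinton-era budget surpluses had rising private-sector debt as their counterpart.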
== Comparison of MMT with mainstream Keynesian economics == MMT can be compared and contrasted with mainstream Keynesian economics in a variety of ways. == Criticism == A 2019 survey of leading economists by the University of Chicago Booth's Initiative on Global Markets showed a unanimous rejection of assertions attributed by the survey to MMT: "Countries that borrow in their own currency should not worry about government deficits because they can always create money to finance their debt" and "Countries that borrow in their own currency can finance as much real government spending as they want by creating money". Directly responding to the survey, MMT economist William K. Black said "MMT scholars do not make or support either claim." Multiple MMT academics regard the attribution of these claims as a smear. Freiwirtschaft economist Felix Fuders argues that the growth imperative created by modern monetary theory has harmful environmental, mental, and social consequences. Fuders concluded that it is impossible to meaningfully address the problem of unsustainable growth or fulfill the sustainable development goals proposed by the United Nations without completely overhauling the monetary system in favor of demurrage currency. The post-Keynesian economist Thomas Palley has stated that MMT is largely a restatement of elementary Keynesian economics, but prone to "over-simplistic analysis" and understating the risks of its policy implications. Palley has disagreed with proponents of MMT who have asserted that standard Keynesian analysis does not fully capture the accounting identities and financial restraints on a government that can issue its own money. He said that these insights are well captured by standard Keynesian stock-flow consistent IS-LM models, and have been well understood by Keynesian economists for decades. He claimed MMT "assumes away the problem of fiscal–monetary conflict" – that is, that the governmental body that creates the spending budget (e.g. the legislature) may refuse to cooperate with the governmental body that controls the money supply (e.g., the central bank). He stated the policies proposed by MMT proponents would cause serious financial instability in an open economy with flexible exchange rates, while using fixed exchange rates would restore hard financial constraints on the government and "undermines MMT's main claim about sovereign money freeing governments from standard market disciplines and financial constraints". Furthermore, Palley has criticized MMT for lacking a plausible theory of inflation, particularly in the context of full employment under the employer of last resort policy first proposed by Hyman Minsky and advocated by Bill Mitchell and other MMT theorists; for a lack of appreciation of the financial instability that could be caused by permanently zero interest rates; and for overstating the importance of government-created money. Palley concludes that MMT provides no new insights about monetary theory, while making unsubstantiated claims about macroeconomic policy, and that MMT has only received attention recently due to its being a "policy polemic for depressed times". Marc Lavoie has said that whilst the neochartalist argument is "essentially correct", many of its counter-intuitive claims depend on a "confusing" and "fictitious" consolidation of government and central banking operations, which is what Palley calls "the problem of fiscal–monetary conflict".
The New Keynesian economist Paul Krugman, a recipient of the Nobel Prize in Economics, asserted that MMT goes too far in its support for government budget deficits and ignores the inflationary implications of maintaining budget deficits when the economy is growing. Krugman accused MMT devotees of engaging in "Calvinball" – a game from the comic strip Calvin and Hobbes in which the players change the rules at whim. Austrian School economist Robert P. Murphy stated that MMT is "dead wrong" and that "the MMT worldview doesn't live up to its promises". He said that MMT's claim that cutting government deficits erodes private saving is true "only for the portion of private saving that is not invested", and that the national accounting identities used to explain this aspect of MMT could equally be used to support arguments that government deficits "crowd out" private sector investment. The chartalist view of money itself, and the MMT emphasis on the importance of taxes in driving money, are also a source of criticism. In 2015, three MMT economists, Scott Fullwiler, Stephanie Kelton, and L. Randall Wray, addressed what they saw as the main criticisms being made. == See also == Everything bubble Friedman's k-percent rule - money supply should be increased at a fixed percentage Debt based monetary system - monetary system where commercial banks create the new money as debt == References == This article incorporates text by Yasuhito Tanaka available under the CC BY 4.0 license. == Further reading == == External links == January 2012: Modern Monetary Theory: A Debate (Brett Fiebiger critiques and Scott Fullwiler, Stephanie Kelton, L. Randall Wray respond; Political Economy Research Institute, Amherst, MA) June 2012: Knut Wicksell and origins of modern monetary theory (Lars Pålsson Syll) September 2020: Degrowth and MMT: A thought Experiment (Jason Hickel) The Modern Money Network is currently headquartered at Columbia University in the city of New York. October 2023: Finding The Money at IMDb is a documentary film about an underdog group of MMT economists on a mission to instigate a paradigm shift by flipping our understanding of the national debt, and the nature of money, upside down.
Wikipedia/Modern_Monetary_Theory
The Second Industrial Revolution, also known as the Technological Revolution, was a phase of rapid scientific discovery, standardisation, mass production and industrialisation from the late 19th century into the early 20th century. The First Industrial Revolution, which ended in the middle of the 19th century, was punctuated by a slowdown in important inventions before the Second Industrial Revolution began in 1870. Though a number of its events can be traced to earlier innovations in manufacturing, such as the establishment of a machine tool industry, the development of methods for manufacturing interchangeable parts, as well as the invention of the Bessemer process and open hearth furnace to produce steel, later developments heralded the Second Industrial Revolution, which is generally dated between 1870 and 1914, when World War I commenced. Advancements in manufacturing and production technology enabled the widespread adoption of technological systems such as telegraph and railroad networks, gas and water supply, and sewage systems, which had earlier been limited to a few select cities. The enormous expansion of rail and telegraph lines after 1870 allowed unprecedented movement of people and ideas, which culminated in a new wave of colonialism and globalization. In the same time period, new technological systems were introduced, most significantly electrical power and telephones. The Second Industrial Revolution continued into the 20th century with early factory electrification and the production line; it ended at the beginning of World War I. Starting in 1947, the Information Age is sometimes also called the Third Industrial Revolution. == Overview == The Second Industrial Revolution was a period of rapid industrial development, primarily in the United Kingdom, Germany, and the United States, but also in France, the Low Countries, Italy and Japan. It followed on from the First Industrial Revolution, which began in Britain in the late 18th century and then spread throughout Western Europe. It came to an end with the start of World War I. While the First Revolution was driven by limited use of steam engines, interchangeable parts and mass production, and was largely water-powered, especially in the United States, the Second was characterized by the build-out of railroads, large-scale iron and steel production, widespread use of machinery in manufacturing, greatly increased use of steam power, widespread use of the telegraph, use of petroleum and the beginning of electrification. It also was the period during which modern organizational methods for operating large-scale businesses over vast areas came into use. The concept was introduced by Patrick Geddes in Cities in Evolution (1910), and was being used by economists such as Erich Zimmermann (1951), but David Landes' use of the term in a 1966 essay and in The Unbound Prometheus (1972) standardized scholarly definitions of the term, which was most intensely promoted by Alfred Chandler (1918–2007). However, some continue to express reservations about its use. In 2003, Landes stressed the importance of new technologies, especially the internal combustion engine, petroleum, new materials and substances, including alloys and chemicals, electricity and communication technologies, such as the telegraph, telephone, and radio. One author has called the period from 1867 to 1914, during which most of the great innovations were developed, "The Age of Synergy", since the inventions and innovations were engineering- and science-based.
== Industry and technology == A synergy between iron and steel, railroads and coal developed at the beginning of the Second Industrial Revolution. Railroads allowed cheap transportation of materials and products, which in turn led to cheap rails to build more railroads. Railroads also benefited from cheap coal for their steam locomotives. This synergy led to the laying of 75,000 miles of track in the U.S. in the 1880s, the largest amount anywhere in world history. === Iron === The hot blast technique, in which the hot flue gas from a blast furnace is used to preheat combustion air blown into a blast furnace, was invented and patented by James Beaumont Neilson in 1828 at Wilsontown Ironworks in Scotland. Hot blast was the single most important advance in fuel efficiency of the blast furnace, as it greatly reduced the fuel consumption for making pig iron, and was one of the most important technologies developed during the Industrial Revolution. Falling costs for producing wrought iron coincided with the emergence of the railway in the 1830s. The early technique of hot blast used iron for the regenerative heating medium. Iron caused problems with expansion and contraction, which stressed the iron and caused failure. Edward Alfred Cowper developed the Cowper stove in 1857. This stove used firebrick as a storage medium, solving the expansion and cracking problem. The Cowper stove was also capable of producing high heat, which resulted in very high throughput of blast furnaces. The Cowper stove is still used in today's blast furnaces. With the greatly reduced cost of producing pig iron with coke using hot blast, demand grew dramatically and so did the size of blast furnaces. === Steel === The Bessemer process, invented by Sir Henry Bessemer, allowed the mass production of steel, increasing the scale and speed of production of this vital material, and decreasing the labor requirements. The key principle was the removal of excess carbon and other impurities from pig iron by oxidation with air blown through the molten iron. The oxidation also raises the temperature of the iron mass and keeps it molten. The "acid" Bessemer process had a serious limitation in that it required relatively scarce hematite ore, which is low in phosphorus. Sidney Gilchrist Thomas developed a more sophisticated process to eliminate the phosphorus from iron. Collaborating with his cousin Percy Gilchrist, a chemist at the Blaenavon Ironworks in Wales, he patented his process in 1878; Bolckow Vaughan & Co. in Yorkshire was the first company to use his patented process. His process was especially valuable on the continent of Europe, where the proportion of phosphoric iron was much greater than in England, and both in Belgium and in Germany the name of the inventor became more widely known than in his own country. In America, although non-phosphoric iron largely predominated, an immense interest was taken in the invention. The next great advance in steel making was the Siemens–Martin process. Sir Charles William Siemens developed his regenerative furnace in the 1850s, for which he claimed in 1857 to be able to recover enough heat to save 70–80% of the fuel. The furnace operated at a high temperature by using regenerative preheating of fuel and air for combustion. Through this method, an open-hearth furnace can reach temperatures high enough to melt steel, but Siemens did not initially use it in that manner.
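In outline, the steelmaking chemistry described above can be summarized by a few exothermic oxidation reactions; this is a schematic summary of standard converter chemistry rather than detail taken from this article, and the real slag chemistry is considerably more involved:

$$2\,\mathrm{C} + \mathrm{O_2} \rightarrow 2\,\mathrm{CO}, \qquad \mathrm{Si} + \mathrm{O_2} \rightarrow \mathrm{SiO_2}, \qquad 2\,\mathrm{Mn} + \mathrm{O_2} \rightarrow 2\,\mathrm{MnO}, \qquad 4\,\mathrm{P} + 5\,\mathrm{O_2} \rightarrow 2\,\mathrm{P_2O_5}$$

The heat these reactions release is what raises the temperature of the charge and keeps it molten in the Bessemer converter, and in the basic Gilchrist–Thomas variant the phosphorus pentoxide is taken up by the lime-rich basic lining as a phosphate slag, which is how the phosphorus problem of the "acid" process was overcome.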
French engineer Pierre-Émile Martin was the first to take out a license for the Siemens furnace and apply it to the production of steel in 1865. The Siemens–Martin process complemented rather than replaced the Bessemer process. Its main advantages were that it did not expose the steel to excessive nitrogen (which would cause the steel to become brittle), it was easier to control, and that it permitted the melting and refining of large amounts of scrap steel, lowering steel production costs and recycling an otherwise troublesome waste material. It became the leading steel making process by the early 20th century. The availability of cheap steel allowed building larger bridges, railroads, skyscrapers, and ships. Other important steel products—also made using the open hearth process—were steel cable, steel rod and sheet steel which enabled large, high-pressure boilers and high-tensile strength steel for machinery which enabled much more powerful engines, gears and axles than were previously possible. With large amounts of steel it became possible to build much more powerful guns and carriages, tanks, armored fighting vehicles and naval ships. === Rail === The increase in steel production from the 1860s meant that railways could finally be made from steel at a competitive cost. Being a much more durable material, steel steadily replaced iron as the standard for railway rail, and due to its greater strength, longer lengths of rails could now be rolled. Wrought iron was soft and contained flaws caused by included dross. Iron rails could also not support heavy locomotives and were damaged by hammer blow. The first to make durable rails of steel rather than wrought iron was Robert Forester Mushet at the Darkhill Ironworks, Gloucestershire in 1857. The first of Mushet's steel rails was sent to Derby Midland railway station. The rails were laid at part of the station approach where the iron rails had to be renewed at least every six months, and occasionally every three. Six years later, in 1863, the rail seemed as perfect as ever, although some 700 trains had passed over it daily. This provided the basis for the accelerated construction of railways throughout the world in the late nineteenth century. The first commercially available steel rails in the US were manufactured in 1867 at the Cambria Iron Works in Johnstown, Pennsylvania. Steel rails lasted over ten times longer than did iron, and with the falling cost of steel, heavier weight rails were used. This allowed the use of more powerful locomotives, which could pull longer trains, and longer rail cars, all of which greatly increased the productivity of railroads. Rail became the dominant form of transport infrastructure throughout the industrialized world, producing a steady decrease in the cost of shipping seen for the rest of the century. === Electrification === The theoretical and practical basis for the harnessing of electric power was laid by the scientist and experimentalist Michael Faraday. Through his research on the magnetic field around a conductor carrying a direct current, Faraday established the basis for the concept of the electromagnetic field in physics. His inventions of electromagnetic rotary devices were the foundation of the practical use of electricity in technology. 
In 1881, Sir Joseph Swan, inventor of the first feasible incandescent light bulb, supplied about 1,200 Swan incandescent lamps to the Savoy Theatre in the City of Westminster, London, which was the first theatre, and the first public building in the world, to be lit entirely by electricity. Swan's lightbulb had already been used in 1879 to light Mosley Street, in Newcastle upon Tyne, the first electrical street lighting installation in the world. This set the stage for the electrification of industry and the home. The first large scale central distribution supply plant was opened at Holborn Viaduct in London in 1882 and later at Pearl Street Station in New York City. The first modern power station in the world was built by the English electrical engineer Sebastian de Ferranti at Deptford. Built on an unprecedented scale and pioneering the use of high voltage (10,000V) alternating current, it generated 800 kilowatts and supplied central London. On its completion in 1891 it supplied high-voltage AC power that was then "stepped down" with transformers for consumer use on each street. Electrification allowed the final major developments in manufacturing methods of the Second Industrial Revolution, namely the assembly line and mass production. Electrification was called "the most important engineering achievement of the 20th century" by the National Academy of Engineering. Electric lighting in factories greatly improved working conditions, eliminating the heat and pollution caused by gas lighting, and reducing the fire hazard to the extent that the cost of electricity for lighting was often offset by the reduction in fire insurance premiums. Frank J. Sprague developed the first successful DC motor in 1886. By 1889 110 electric street railways were either using his equipment or in planning. The electric street railway became a major infrastructure before 1920. The AC motor (Induction motor) was developed in the 1890s and soon began to be used in the electrification of industry. Household electrification did not become common until the 1920s, and then only in cities. Fluorescent lighting was commercially introduced at the 1939 World's Fair. Electrification also allowed the inexpensive production of electro-chemicals, such as aluminium, chlorine, sodium hydroxide, and magnesium. === Machine tools === The use of machine tools began with the onset of the First Industrial Revolution. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron—and hand working lacked precision and was a slow and expensive process. One of the first machine tools was John Wilkinson's boring machine, that bored a precise hole in James Watt's first steam engine in 1774. Advances in the accuracy of machine tools can be traced to Henry Maudslay and refined by Joseph Whitworth. Standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity. In 1841, Joseph Whitworth created a design that, through its adoption by many British railway companies, became the world's first national machine tool standard called British Standard Whitworth. During the 1840s through 1860s, this standard was often used in the United States and Canada as well, in addition to myriad intra- and inter-company standards. The importance of machine tools to mass production is shown by the fact that production of the Ford Model T used 32,000 machine tools, most of which were powered by electricity. 
Henry Ford is quoted as saying that mass production would not have been possible without electricity because it allowed placement of machine tools and other equipment in the order of the work flow. === Paper making === The first paper-making machine was the Fourdrinier machine, built by Sealy and Henry Fourdrinier, stationers in London. In 1800, Matthias Koops, working in London, investigated the idea of using wood to make paper, and began his printing business a year later. However, his enterprise was unsuccessful due to the prohibitive cost at the time. It was in the 1840s that Charles Fenerty in Nova Scotia and Friedrich Gottlob Keller in Saxony both invented successful machines that extracted the fibres from wood (as with rags) and made paper from them. This started a new era for paper making, and, together with the invention of the fountain pen and the mass-produced pencil of the same period, and in conjunction with the advent of the steam-driven rotary printing press, wood-based paper caused a major transformation of the 19th century economy and society in industrialized countries. With the introduction of cheaper paper, schoolbooks, fiction, non-fiction, and newspapers became gradually available by 1900. Cheap wood-based paper also allowed keeping personal diaries or writing letters and so, by 1850, the position of clerk, or writer, ceased to be a high-status job. By the 1880s chemical processes for paper manufacture were in use, becoming dominant by 1900. === Petroleum === The petroleum industry, both production and refining, began in 1848 with the first oil works in Scotland. The chemist James Young set up a tiny business refining the crude oil in 1848. Young found that by slow distillation he could obtain a number of useful liquids from it, one of which he named "paraffine oil" because at low temperatures it congealed into a substance resembling paraffin wax. In 1850 Young built the first truly commercial oil-works and oil refinery in the world at Bathgate, using oil extracted from locally mined torbanite, shale, and bituminous coal to manufacture naphtha and lubricating oils; paraffin for fuel use and solid paraffin were not sold until 1856. Cable tool drilling was developed in ancient China and was used for drilling brine wells. The salt domes also held natural gas, which some wells produced and which was used for evaporation of the brine. Chinese well drilling technology was introduced to Europe in 1828. Although there were many efforts in the mid-19th century to drill for oil, Edwin Drake's 1859 well near Titusville, Pennsylvania, is considered the first "modern oil well". Drake's well touched off a major boom in oil production in the United States. Drake learned of cable tool drilling from Chinese laborers in the U.S. The first primary product was kerosene for lamps and heaters. Similar developments around Baku fed the European market. Kerosene lighting was much more efficient and less expensive than vegetable oils, tallow and whale oil. Although town gas lighting was available in some cities, kerosene produced a brighter light until the invention of the gas mantle. Both were replaced by electricity, for street lighting beginning in the 1890s and for households during the 1920s. Gasoline was an unwanted byproduct of oil refining until automobiles were mass-produced after 1914, and gasoline shortages appeared during World War I. The invention of the Burton process for thermal cracking doubled the yield of gasoline, which helped alleviate the shortages.
=== Chemical === Synthetic dye was discovered by English chemist William Henry Perkin in 1856. At the time, chemistry was still in a quite primitive state; it was a difficult proposition to determine the arrangement of the elements in compounds, and the chemical industry was in its infancy. Perkin's accidental discovery was that aniline could be partly transformed into a crude mixture which, when extracted with alcohol, produced a substance with an intense purple colour. He scaled up production of the new "mauveine", and commercialized it as the world's first synthetic dye. After the discovery of mauveine, many new aniline dyes appeared (some discovered by Perkin himself), and factories producing them were constructed across Europe. Towards the end of the century, Perkin and other British companies found their research and development efforts increasingly eclipsed by the German chemical industry, which had become dominant worldwide by 1914. === Maritime technology === This era saw the birth of the modern ship as disparate technological advances came together. The screw propeller was introduced in 1835 by Francis Pettit Smith, who discovered a new way of building propellers by accident. Up to that time, propellers were literally screws, of considerable length. But during the testing of a boat propelled by one, the screw snapped off, leaving a fragment shaped much like a modern boat propeller. The boat moved faster with the broken propeller. The superiority of the screw over paddles was taken up by navies. Trials with Smith's SS Archimedes, the first screw-propelled steamship, led to the famous tug-of-war competition in 1845 between the screw-driven HMS Rattler and the paddle steamer HMS Alecto, in which the former pulled the latter backward at 2.5 knots (4.6 km/h). The first seagoing iron steamboat was built by Horseley Ironworks and named the Aaron Manby. It also used an innovative oscillating engine for power. The boat was built at Tipton using temporary bolts, disassembled for transportation to London, and reassembled on the Thames in 1822, this time using permanent rivets. Other technological developments followed, including the invention of the surface condenser, which allowed boilers to run on purified water rather than salt water, eliminating the need to stop to clean them on long sea journeys. The Great Western, built by engineer Isambard Kingdom Brunel, was the longest ship in the world at 236 ft (72 m) with a 250-foot (76 m) keel and was the first to prove that transatlantic steamship services were viable. The ship was constructed mainly from wood, but Brunel added bolts and iron diagonal reinforcements to maintain the keel's strength. In addition to its steam-powered paddle wheels, the ship carried four masts for sails. Brunel followed this up with the Great Britain, launched in 1843 and considered the first modern ship built of metal rather than wood, powered by an engine rather than wind or oars, and driven by propeller rather than paddle wheel. Brunel's vision and engineering innovations made the building of large-scale, propeller-driven, all-metal steamships a practical reality, but the prevailing economic and industrial conditions meant that it would be several decades before transoceanic steamship travel emerged as a viable industry. Highly efficient multiple-expansion steam engines began being used on ships, allowing them to carry less coal and more freight.
The oscillating engine was first built by Aaron Manby and Joseph Maudslay in the 1820s as a type of direct-acting engine that was designed to achieve further reductions in engine size and weight. Oscillating engines had the piston rods connected directly to the crankshaft, dispensing with the need for connecting rods. To achieve this aim, the engine cylinders were not immobile as in most engines, but secured in the middle by trunnions, which allowed the cylinders themselves to pivot back and forth as the crankshaft rotated, hence the term oscillating. It was John Penn, engineer for the Royal Navy, who perfected the oscillating engine. One of his earliest engines was the grasshopper beam engine. In 1844 he replaced the engines of the Admiralty yacht HMS Black Eagle with oscillating engines of double the power, without increasing either the weight or space occupied, an achievement which broke the naval supply dominance of Boulton & Watt and Maudslay, Son & Field. Penn also introduced the trunk engine for driving screw propellers in vessels of war. HMS Encounter (1846) and HMS Arrogant (1848) were the first ships to be fitted with such engines, and such was their efficacy that by the time of Penn's death in 1878, the engines had been fitted in 230 ships and were the first mass-produced, high-pressure and high-revolution marine engines. The revolution in naval design led to the first modern battleships in the 1870s, which evolved from the ironclad design of the 1860s. The Devastation-class turret ships were built for the British Royal Navy as the first class of ocean-going capital ship that did not carry sails, and the first whose entire main armament was mounted on top of the hull rather than inside it. === Rubber === The vulcanization of rubber by American Charles Goodyear and Englishman Thomas Hancock in the 1840s paved the way for a growing rubber industry, especially the manufacture of rubber tyres. John Boyd Dunlop developed the first practical pneumatic tyre in 1887 in South Belfast. Willie Hume demonstrated the supremacy of Dunlop's newly invented pneumatic tyres in 1889, winning the tyre's first-ever races in Ireland and then England. Dunlop's development of the pneumatic tyre arrived at a crucial time in the development of road transport, and commercial production began in late 1890. === Bicycles === The modern bicycle was designed by the English engineer Harry John Lawson in 1876, although it was John Kemp Starley who produced the first commercially successful safety bicycle a few years later. Its popularity soon grew, causing the bike boom of the 1890s. Road networks improved greatly in the period, using the macadam method pioneered by Scottish engineer John Loudon McAdam, and hard-surfaced roads were built around the time of the bicycle craze of the 1890s. Modern tarmac was patented by British civil engineer Edgar Purnell Hooley in 1901. === Automobile === German inventor Karl Benz patented the world's first automobile in 1886. It featured wire wheels (unlike carriages' wooden ones) with a four-stroke engine of his own design between the rear wheels, with a very advanced coil ignition and evaporative cooling rather than a radiator. Power was transmitted by means of two roller chains to the rear axle. It was the first automobile entirely designed as such to generate its own power, not simply a motorized stage coach or horse carriage.
Benz began to sell the vehicle, advertising it as the Benz Patent Motorwagen, in the late summer of 1888, making it the first commercially available automobile in history. Henry Ford built his first car in 1896 and worked as a pioneer in the industry, with others who would eventually form their own companies, until the founding of Ford Motor Company in 1903. Ford and others at the company struggled with ways to scale up production in keeping with Henry Ford's vision of a car designed and manufactured on a scale that would make it affordable to the average worker. The solution that Ford Motor developed was a completely redesigned factory with machine tools and special-purpose machines that were systematically positioned in the work sequence. All unnecessary human motions were eliminated by placing all work and tools within easy reach, and, where practical, on conveyors, forming the assembly line, the complete process being called mass production. This was the first time in history when a large, complex product consisting of 5000 parts had been produced on a scale of hundreds of thousands per year. The savings from mass production methods allowed the price of the Model T to decline from $780 in 1910 to $360 in 1916. In 1924, two million Model T Fords were produced and retailed at $290 each ($5,321 in 2024 dollars). === Applied science === Applied science opened many opportunities. By the middle of the 19th century there was a scientific understanding of chemistry and a fundamental understanding of thermodynamics, and by the last quarter of the century both of these sciences were near their present-day basic form. Thermodynamic principles were used in the development of physical chemistry. Understanding chemistry greatly aided the development of basic inorganic chemical manufacturing and the aniline dye industries. The science of metallurgy was advanced through the work of Henry Clifton Sorby and others. Sorby pioneered metallography, the study of metals under the microscope, which paved the way for a scientific understanding of metal and the mass-production of steel. In 1863 he used etching with acid to study the microscopic structure of metals and was the first to understand that a small but precise quantity of carbon gave steel its strength. This paved the way for Henry Bessemer and Robert Forester Mushet to develop the method for mass-producing steel. Other processes were developed for purifying various elements such as chromium, molybdenum, titanium, vanadium and nickel, which could be used for making alloys with special properties, especially with steel. Vanadium steel, for example, is strong and fatigue resistant, and was used in about half of automotive steel. Alloy steels were used for ball bearings, which were used in large-scale bicycle production in the 1880s. Ball and roller bearings also began being used in machinery. Other important alloys are used at high temperatures, for example in steam turbine blades, along with stainless steels for corrosion resistance. The work of Justus von Liebig and August Wilhelm von Hofmann laid the groundwork for modern industrial chemistry. Liebig is considered the "father of the fertilizer industry" for his discovery of nitrogen as an essential plant nutrient and went on to establish Liebig's Extract of Meat Company, which produced the Oxo meat extract. Hofmann headed a school of practical chemistry in London, under the style of the Royal College of Chemistry, introduced modern conventions for molecular modeling and taught Perkin, who discovered the first synthetic dye.
The science of thermodynamics was developed into its modern form by Sadi Carnot, William Rankine, Rudolf Clausius, William Thomson, James Clerk Maxwell, Ludwig Boltzmann and J. Willard Gibbs. These scientific principles were applied to a variety of industrial concerns, including improving the efficiency of boilers and steam turbines. The work of Michael Faraday and others was pivotal in laying the foundations of the modern scientific understanding of electricity. Scottish scientist James Clerk Maxwell was particularly influential; his discoveries ushered in the era of modern physics. His most prominent achievement was to formulate a set of equations that described electricity, magnetism, and optics as manifestations of the same phenomenon, namely the electromagnetic field. The unification of light and electrical phenomena led to the prediction of the existence of radio waves and was the basis for the future development of radio technology by Hughes, Marconi and others. Maxwell himself developed the first durable colour photograph in 1861 and published the first scientific treatment of control theory. Control theory is the basis for process control, which is widely used in automation, particularly for process industries, and for controlling ships and airplanes. Control theory was developed to analyze the functioning of centrifugal governors on steam engines. These governors came into use in the late 18th century on wind and water mills to correctly position the gap between mill stones, and were adapted to steam engines by James Watt. Improved versions were used to stabilize automatic tracking mechanisms of telescopes and to control the speed of ship propellers and rudders. However, those governors were sluggish and oscillated about the set point. James Clerk Maxwell wrote a paper mathematically analyzing the actions of governors, which marked the beginning of the formal development of control theory. The science was continually improved and evolved into an engineering discipline. === Fertilizer === Justus von Liebig was the first to understand the importance of ammonia as fertilizer, and promoted the importance of inorganic minerals to plant nutrition. In England, he attempted to implement his theories commercially through a fertilizer created by treating phosphate of lime in bone meal with sulfuric acid. Another pioneer was John Bennet Lawes, who began to experiment on the effects of various manures on plants growing in pots in 1837, leading to a manure formed by treating phosphates with sulphuric acid; this was to be the first product of the nascent artificial manure industry. The discovery of coprolites in commercial quantities in East Anglia led Fisons and Edward Packard to develop some of the first large-scale commercial fertilizer plants, at Bramford and Snape, in the 1850s. By the 1870s superphosphates produced in those factories were being shipped around the world from the port at Ipswich. The Birkeland–Eyde process was developed by Norwegian industrialist and scientist Kristian Birkeland along with his business partner Sam Eyde in 1903, but was soon replaced by the much more efficient Haber process, developed by the Nobel Prize-winning chemists Fritz Haber and Carl Bosch of BASF in Germany. The process combined molecular nitrogen (N2) with hydrogen obtained from methane (CH4) in an economically sustainable synthesis of ammonia (NH3). The ammonia produced in the Haber process is the main raw material for production of nitric acid.
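The chemistry behind that description can be made explicit. The equations below are a standard textbook formulation rather than anything given in the text above: methane is steam-reformed to supply hydrogen, the hydrogen is combined with nitrogen over a catalyst to give ammonia, and the ammonia is subsequently oxidized on the way to nitric acid.

```latex
% Steam reforming of methane: the hydrogen source referred to above
\mathrm{CH_4 + H_2O \;\longrightarrow\; CO + 3\,H_2}

% Haber synthesis: nitrogen and hydrogen combine over an iron catalyst at high pressure
\mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3}

% Ostwald oxidation: the first step in converting ammonia to nitric acid
\mathrm{4\,NH_3 + 5\,O_2 \;\longrightarrow\; 4\,NO + 6\,H_2O}
```

The high-pressure, catalytic operating conditions noted in the comments are standard features of the industrial process and are included only for orientation; they are not drawn from the article itself.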
=== Engines and turbines === The steam turbine was developed by Sir Charles Parsons in 1884. His first model was connected to a dynamo that generated 7.5 kW (10 hp) of electricity. The invention of Parsons's steam turbine made cheap and plentiful electricity possible and revolutionized marine transport and naval warfare. By the time of Parsons's death, his turbine had been adopted for all major world power stations. Unlike earlier steam engines, the turbine produced rotary power rather than reciprocating power, which required a crank and heavy flywheel. The turbine's large number of stages allowed for high efficiency and a 90% reduction in size. The turbine's first application was in shipping, followed by electric generation in 1903. The first widely used internal combustion engine was the Otto type of 1876. From the 1880s until electrification it was successful in small shops because small steam engines were inefficient and required too much operator attention. The Otto engine soon began being used to power automobiles, and remains the basis of today's common gasoline engine. The diesel engine was independently designed by Rudolf Diesel and Herbert Akroyd Stuart in the 1890s using thermodynamic principles with the specific intention of being highly efficient. It took several years to perfect and become popular, but found application in shipping before powering locomotives. It remains the world's most efficient prime mover. === Telecommunications === The first commercial telegraph system was installed by Sir William Fothergill Cooke and Charles Wheatstone in May 1837 between Euston railway station and Camden Town in London. The rapid expansion of telegraph networks took place throughout the century, with the first undersea telegraph cable laid by John Watkins Brett between France and England. The Atlantic Telegraph Company was formed in London in 1856 to undertake construction of a commercial telegraph cable across the Atlantic Ocean. This was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way. From the 1850s until 1911, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. The telephone was patented in 1876 by Alexander Graham Bell, and like the early telegraph, it was used mainly to speed business transactions. As mentioned above, one of the most important scientific advancements in all of history was the unification of light, electricity and magnetism through Maxwell's electromagnetic theory. A scientific understanding of electricity was necessary for the development of efficient electric generators, motors and transformers. David Edward Hughes and Heinrich Hertz both demonstrated and confirmed the phenomenon of electromagnetic waves that had been predicted by Maxwell. It was Italian inventor Guglielmo Marconi who successfully commercialized radio at the turn of the century. He founded The Wireless Telegraph & Signal Company in Britain in 1897, transmitted Morse code across Salisbury Plain and sent the first-ever wireless communication over open sea that same year, and made the first transatlantic transmission in 1901 from Poldhu, Cornwall, to Signal Hill, Newfoundland. Marconi built high-powered stations on both sides of the Atlantic and began a commercial service to transmit nightly news summaries to subscribing ships in 1904.
The invention of the vacuum tube by Sir John Ambrose Fleming in 1904 underpinned the development of modern electronics and radio broadcasting. Lee De Forest's subsequent invention of the triode allowed the amplification of electronic signals, which paved the way for radio broadcasting in the 1920s. === Modern business management === Railroads are credited with creating the modern business enterprise by scholars such as Alfred Chandler. Previously, the management of most businesses had consisted of individual owners or groups of partners, some of whom often had little daily hands-on involvement in operations. Centralized expertise in the home office was not enough. A railroad required expertise available across the whole length of its trackage, to deal with daily crises, breakdowns and bad weather. A collision in Massachusetts in 1841 led to a call for safety reform. This led to the reorganization of railroads into different departments with clear lines of management authority. When the telegraph became available, companies built telegraph lines along the railroads to keep track of trains. Railroads involved complex operations, employed extremely large amounts of capital, and ran a more complicated business than anything that had come before. Consequently, they needed better ways to track costs. For example, to calculate rates they needed to know the cost of a ton-mile of freight. They also needed to keep track of cars, which could go missing for months at a time. This led to what was called "railroad accounting", which was later adopted by steel and other industries, and eventually became modern accounting. Later in the Second Industrial Revolution, Frederick Winslow Taylor and others in America developed the concept of scientific management, or Taylorism. Scientific management initially concentrated on reducing the steps taken in performing work (such as bricklaying or shoveling) by using analysis such as time-and-motion studies, but the concepts evolved into fields such as industrial engineering, manufacturing engineering, and business management that helped to completely restructure the operations of factories, and later entire segments of the economy. Taylor's core principles included replacing rule-of-thumb work methods with methods based on a scientific study of the tasks; scientifically selecting, training, and developing each employee rather than passively leaving them to train themselves; providing "detailed instruction and supervision of each worker in the performance of that worker's discrete task"; and dividing work nearly equally between managers and workers, such that the managers apply scientific-management principles to planning the work while the workers actually perform the tasks. == Socio-economic impacts == The period from 1870 to 1890 saw the greatest increase in economic growth over so short a period as had ever occurred before. Living standards improved significantly in the newly industrialized countries as the prices of goods fell dramatically due to the increases in productivity. This caused unemployment and great upheavals in commerce and industry, with many laborers being displaced by machines and many factories, ships and other forms of fixed capital becoming obsolete in a very short time span. "The economic changes that have occurred during the last quarter of a century – or during the present generation of living men – have unquestionably been more important and more varied than during any period of the world's history".
Crop failures no longer resulted in starvation in areas connected to large markets through transport infrastructure. Massive improvements in public health and sanitation resulted from public health initiatives, such as the construction of the London sewerage system in the 1860s and the passage of laws that regulated filtered water supplies (the Metropolis Water Act of 1852 introduced regulation of the water supply companies in London, including minimum standards of water quality for the first time). This greatly reduced the infection and death rates from many diseases. By 1870 the work done by steam engines exceeded that done by animal and human power. Horses and mules remained important in agriculture until the development of the internal combustion tractor near the end of the Second Industrial Revolution. Improvements in steam efficiency, like triple-expansion steam engines, allowed ships to devote far more of their capacity to freight than to coal, resulting in greatly increased volumes of international trade. Higher steam engine efficiency caused the number of steam engines to increase severalfold, leading to an increase in coal usage, a phenomenon called the Jevons paradox. By 1890 there was an international telegraph network allowing merchants in England or the US to place orders with suppliers in India and China for goods to be transported in efficient new steamships. This, plus the opening of the Suez Canal, led to the decline of the great warehousing districts in London and elsewhere, and the elimination of many middlemen. The tremendous growth in productivity, transportation networks, industrial production and agricultural output lowered the prices of almost all goods. This led to many business failures and to periods that were called depressions, even as the world economy actually grew. See also: Long Depression The factory system centralized production in separate buildings funded and directed by specialists (as opposed to work at home). The division of labor made both unskilled and skilled labor more productive, and led to a rapid growth of population in industrial centers. The shift away from agriculture toward industry had occurred in Britain by the 1730s, when the percentage of the working population engaged in agriculture fell below 50%, a development that would only happen elsewhere (the Low Countries) in the 1830s and '40s. By 1890, the figure had fallen to under 10% and the vast majority of the British population was urbanized. This milestone was reached by the Low Countries and the US in the 1950s. Like the first industrial revolution, the second supported population growth and saw most governments protect their national economies with tariffs. Britain retained its belief in free trade throughout this period. The wide-ranging social impact of both revolutions included the remaking of the working class as new technologies appeared. The changes resulted in the creation of a larger, increasingly professional, middle class, the decline of child labor and the dramatic growth of a consumer-based, material culture. By 1900, the leader in industrial production was Britain with 24% of the world total, followed by the US (19%), Germany (13%), Russia (9%) and France (7%). Europe together accounted for 62%. The great inventions and innovations of the Second Industrial Revolution are part of our modern life. They continued to be drivers of the economy until after WWII.
Major innovations occurred in the post-war era, some of which were computers, semiconductors, fiber-optic networks and the Internet, cellular telephones, combustion turbines (jet engines), and the Green Revolution. Although commercial aviation existed before WWII, it became a major industry after the war. == United Kingdom == New products and services were introduced which greatly increased international trade. Improvements in steam engine design and the wide availability of cheap steel meant that slow sailing ships were replaced with faster steamships, which could handle more trade with smaller crews. The chemical industries also moved to the forefront. Britain invested less in technological research than the U.S. and Germany, which caught up. The development of more intricate and efficient machines along with mass production techniques after 1910 greatly expanded output and lowered production costs. As a result, production often exceeded domestic demand. Among the new conditions, more markedly evident in Britain, the forerunner of Europe's industrial states, were the long-term effects of the severe Long Depression of 1873–1896, which had followed fifteen years of great economic instability. Businesses in practically every industry suffered from lengthy periods of low – and falling – profit rates and price deflation after 1873. == United States == The U.S. had its highest economic growth rate in the last two decades of the Second Industrial Revolution; however, population growth slowed while productivity growth peaked around the mid-20th century. The Gilded Age in America was based on heavy industry such as factories, railroads and coal mining. The iconic event was the opening of the First Transcontinental Railroad in 1869, providing six-day service between the East Coast and San Francisco. During the Gilded Age, American railroad mileage tripled between 1860 and 1880, and tripled again by 1920, opening new areas to commercial farming, creating a truly national marketplace and inspiring a boom in coal mining and steel production. The voracious appetite for capital of the great trunk railroads facilitated the consolidation of the nation's financial market in Wall Street. By 1900, the process of economic concentration had extended into most branches of industry: a few large corporations, some organized as "trusts" (e.g. Standard Oil), dominated in steel, oil, sugar, meatpacking, and the manufacture of agricultural machinery. Other major components of this infrastructure were the new methods for manufacturing steel, especially the Bessemer process. The first billion-dollar corporation was United States Steel, formed in 1901 by financier J. P. Morgan, who purchased and consolidated steel firms built by Andrew Carnegie and others. Increased mechanization of industry and improvements to worker efficiency increased the productivity of factories while undercutting the need for skilled labor. Mechanical innovations such as batch and continuous processing began to become much more prominent in factories. This mechanization made some factories an assemblage of unskilled laborers performing simple and repetitive tasks under the direction of skilled foremen and engineers. In some cases, the advancement of such mechanization substituted for low-skilled workers altogether. The numbers of both unskilled and skilled workers increased, as their wage rates grew. Engineering colleges were established to feed the enormous demand for expertise.
Together with the rapid growth of small business, a new middle class was rapidly growing, especially in northern cities. == Germany == The German Empire came to rival Britain as Europe's primary industrial nation during this period. Since Germany industrialized later, it was able to model its factories after those of Britain, thus making more efficient use of its capital and avoiding legacy methods in its leap to the technological frontier. Germany invested more heavily than the British in research, especially in chemistry, motors and electricity. The German concern system (known as Konzerne), being significantly concentrated, was able to make more efficient use of capital. Germany was not weighed down by an expensive worldwide empire that needed defense. Following Germany's annexation of Alsace-Lorraine in 1871, it absorbed parts of what had been France's industrial base. By 1900 the German chemical industry dominated the world market for synthetic dyes. The three major firms BASF, Bayer and Hoechst produced several hundred different dyes, along with five smaller firms. In 1913 these eight firms produced almost 90 percent of the world supply of dyestuffs, and sold about 80 percent of their production abroad. The three major firms had also integrated upstream into the production of essential raw materials, and they began to expand into other areas of chemistry such as pharmaceuticals, photographic film, agricultural chemicals and electrochemicals. Top-level decision-making was in the hands of professional salaried managers, leading Chandler to call the German dye companies "the world's first truly managerial industrial enterprises". There were many spin-offs from research, such as the pharmaceutical industry, which emerged from chemical research. == Belgium == Belgium during the Belle Époque showed the value of the railways for speeding the Second Industrial Revolution. After 1830, when it broke away from the Netherlands and became a new nation, it decided to stimulate industry. It planned and funded a simple cruciform system that connected major cities, ports and mining areas, and linked to neighboring countries. Belgium thus became the railway center of the region. The system was soundly built along British lines, so that profits were low but the infrastructure necessary for rapid industrial growth was put in place. == Alternative uses == Other periods have also been called a "second industrial revolution". Industrial revolutions may be renumbered by taking earlier developments, such as the rise of medieval technology in the 12th century, or of ancient Chinese technology during the Tang dynasty, or of ancient Roman technology, as first. "Second industrial revolution" has been used in the popular press and by technologists or industrialists to refer to the changes following the spread of new technology after World War I. Excitement and debate over the dangers and benefits of the Atomic Age were more intense and lasting than those over the Space Age, but both were predicted to lead to another industrial revolution. At the start of the 21st century the term "second industrial revolution" has been used to describe the anticipated effects of hypothetical molecular nanotechnology systems upon society. In this more recent scenario, such systems would render the majority of today's modern manufacturing processes obsolete, transforming all facets of the modern economy. Subsequent industrial revolutions include the Digital Revolution and the Environmental Revolution.
Wikipedia/Second_Industrial_Revolution
John "Iron-Mad" Wilkinson (1728 – 14 July 1808) was an English industrialist who pioneered the manufacture of cast iron and the use of cast-iron goods during the Industrial Revolution. He was the inventor of a precision boring machine that could bore cast iron cylinders, such as cannon barrels and piston cylinders used in the steam engines of James Watt. His boring machine has been called the first machine tool. He also developed a blowing device for blast furnaces that allowed higher temperatures, increasing their efficiency, and helped sponsor the first iron bridge in Coalbrookdale. He is notable for his method of cannon boring, his techniques at casting iron and his work with the government of France to establish a cannon foundry. == Biography == === Early life === John Wilkinson was born in Little Clifton, Bridgefoot, Cumberland (now part of Cumbria), the eldest son of Isaac Wilkinson and Mary Johnson. Isaac was then the potfounder at the blast furnace there, one of the first to use coke instead of charcoal, which was pioneered by Abraham Darby. John and his half-brother William, who was 17 years younger, were raised in a non-conformist Presbyterian family and he was educated at a dissenting academy at Kendal, Westmorland (also now part of Cumbria), run by Dr Caleb Rotherham. His sister Mary married another non-conformist, Joseph Priestley in 1762. Priestley also played a role in educating John's younger brother, William. In 1745, when John was 17, he was apprenticed to a Liverpool merchant for five years and then entered into partnership with his father. When his father moved to Bersham furnace near Wrexham, north Wales, in 1753 John remained at Kirkby Lonsdale in Westmorland where he married Ann Maudesley on 12 June 1755. === Iron master === After working with his father in his foundry, from 1755 John Wilkinson became a partner in the Bersham concern and in 1757 with partners, he erected a blast furnace at Willey, near Broseley in Shropshire. Later he built another furnace and works at New Willey. He made his home in Broseley in a house called 'The Lawns' which became his headquarters for many years. He had houses either side of 'The Lawns' which served for administration, one being named 'The Mint' used for distribution of the thousands of tokens, each valued equivalent to a halfpenny. In east Shropshire he also developed iron works at Snedshill, Hollinswood, Hadley and Hampton Loade. He and Edward Blakeway also leased land to build another works at Bradley in Bilston parish, near Wolverhampton. He became known as the 'Father' of the extensive South Staffordshire iron industry with Bilston as the start of the Black Country. In 1761, he took over Bersham Ironworks as well. Bradley became his largest and most successful enterprise, and was the site of extensive experiments in getting raw coal to substitute for coke in the production of cast iron. At its peak, it included a number of blast furnaces, a brick works, potteries, glass works, and rolling mills. The Birmingham Canal was subsequently built near the Bradley works. == Inventions == Wilkinson was a prolific inventor of new products and processes, and especially anything connected with novel uses of cast iron and wrought iron. His development of a machine tool for boring cast iron cannons presaged the accurate boring of cylinders for the first Watt steam engines. He also improved the air supply for the blast furnace using a new design of bellows, and was the first to use wrought iron in canal barges. 
He supported the construction of the first important cast iron bridge at Coalbrookdale. === Cannon boring machine === Bersham became well known for high-quality casting and as a producer of guns and cannon. Historically, cannons had been cast with a core and then bored to remove imperfections, but in 1774 Wilkinson patented a technique for boring iron guns from a solid piece, rotating the gun barrel rather than the boring-bar. This technique made the guns more accurate, since the bore was uniform in diameter, and less likely to explode. While bronze cannons were already being bored from the solid, the boring of large iron naval cannon was novel. The patent was quashed in 1779 (the Royal Navy saw it as a monopoly and sought to overthrow it), but Wilkinson still remained a major manufacturer. In 1792, Wilkinson bought the Brymbo Hall estate in Denbighshire, not far from Bersham, where furnaces and other plant were installed. After his death and the decline of his industrial empire, the ironworks lay idle for some years until 1842, when it once again became an important works; it eventually became Brymbo Steelworks, which continued to operate until 1990. === Boring machine for steam engines === James Watt had tried unsuccessfully for several years to obtain accurately bored cylinders for his steam engines, and was forced to use hammered iron, which was out of round and caused leakage past the piston. In 1774 John Wilkinson invented a boring machine in which the shaft that held the cutting tool extended through the cylinder and was supported on both ends, unlike the cantilevered borers then in use. With this machine he was able to bore the cylinder for Boulton & Watt's first commercial engine, and was given an exclusive contract for the provision of cylinders owing to the tighter fit between the piston and cylinder and the resulting improvement in efficiency from reduced steam losses through the gap. Until this era, advances in drilling and boring practice had been confined to gun barrels for firearms and cannon; Wilkinson's achievement was a milestone in the gradual development of boring technology, as its fields of application broadened into engines, pumps, and other industrial uses. While the main market for steam engines had been for pumping water out of mines, he saw much more use for them in driving machinery in ironworks, such as blowing engines, forge hammers and rolling mills; the first rotary-action steam engine was installed at Bradley in 1783. Among his many inventions was a reversing rolling mill with two steam cylinders that made the process much more economical. John Wilkinson took a key interest in obtaining orders for these more efficient steam engines and other uses for cast iron from the owners of Cornish copper mines. As part of this interest, he bought shares in eight of the mines to help provide capital. === Hydraulic blowing engine === In 1757, Wilkinson patented a hydraulically powered blowing engine to increase the air blast through the tuyeres for blast furnaces, so improving the rate of production of cast iron. The historian Joseph Needham likened Wilkinson's design to the one described in 1313 by the Chinese Imperial Government metallurgist Wang Zhen in his Treatise on Agriculture. == Iron Bridge == In 1775 John Wilkinson was the prime mover in initiating the building of the Iron Bridge connecting the then-important industrial town of Broseley with the other side of the River Severn.
His friend Thomas Farnolls Pritchard had written to him with plans for the bridge. A committee of subscribers, mostly Broseley businessmen, was formed to agree to the use of iron rather than wood or stone, obtain price quotations, and secure an authorising act of Parliament. Wilkinson's persuasion and drive held together the group's support through several problems during the parliamentary process. Had Wilkinson not succeeded in this and also drawn support from influential parliamentarians, the bridge might not have been built or might have been made of other materials. Consequently, the name 'Ironbridge' would not have been coined for the district in Madeley, and the area would not have attained the status of a World Heritage Site. Abraham Darby III was chosen as the preferred builder after quoting to build the bridge for £3,150/-/-. When construction started, Wilkinson sold his shares to Abraham Darby III in 1777, leaving the latter to steer the project to its completion in 1779; the bridge opened in 1781. In 1787 he launched the first barge made of wrought iron, constructed in Broseley, a development that would become common in the years ahead and, for large ships, in the following century. He patented several other inventions. == Copper interests == John Wilkinson made his fortune selling high-quality goods made of iron and, having reached the limit of expansion in that trade, his expertise proved useful when he invested in many copper interests. In 1761, the Royal Navy clad the hull of the frigate HMS Alarm with copper sheet to reduce the growth of marine biofouling and prevent attack by the Teredo shipworm. The drag from the hull growth cut the speed and the shipworm caused severe hull damage, especially in tropical waters. After the success of this work, the Navy decreed that all ships should be clad, and this created a large demand for copper that Wilkinson noted during his visits to shipyards. He bought shares in eight Cornish copper mines and met Thomas Williams, the 'Copper King' of the Parys Mountain mines in Anglesey. Besides supplying Williams with large quantities of plate and equipment, Wilkinson also supplied iron scrap for the process of recovery of copper from solution by cementation. Wilkinson bought a 1/16th share in the Mona Mine at Parys Mountain and shares in Williams's industries at Holywell, Flintshire; St Helens, near Liverpool; and Swansea, South Wales. Wilkinson and Williams worked together on several projects. They were amongst the first to issue trade tokens ('Willys' and 'Druids') to alleviate the shortage of small coins. Jointly they set up the Cornish Metal Company in 1785 as a marketing company for copper. Its aim was to ensure both a good return for the Cornish miners and a stable price for the users of copper. Warehouses were set up in Birmingham, London, Bristol and Liverpool. To help his business interests and to service his trade tokens, Wilkinson bought into partnerships with banks in Birmingham, Bilston, Bradley, Brymbo and Shrewsbury. == Lead mines and works == Wilkinson bought lead mines at Minera in Wrexham, five miles from Bersham, and at Llyn Pandy at Soughton (now Sychdyn) and Mold, also in Flintshire. He installed steam pumping engines to make them viable again. His lead was exported through the port of Chester. To use some of the lead produced, Wilkinson had a lead pipe works at Rotherhithe, London. This factory lasted for many years, eventually making the solder filler alloys used in the car factory at Dagenham.
== Philanthropy == Wilkinson had a good reputation as an employer. Wherever new works were established, cottages were built to accommodate employees and their families. He gave significant financial support to his brother-in-law, the famous chemist Dr Joseph Priestley. He became a church warden in Broseley and was later elected High Sheriff of Denbighshire. In schools that had no slates he was able to provide iron troughs to hold sand for the practice of writing and arithmetic. He provided a cast-iron pulpit for the church at Bilston. == Family life, and death == John married Ann Maudsley in 1759. Her family was wealthy and her dowry helped to pay for a share in the New Willey Company. After the death of Ann, his second marriage, when he was 35, was to Mary Lee, whose money helped him to buy out his partners. When he was in his seventies, his mistress Mary Ann Lewis, a maid at his estate in Brymbo Hall, gave birth to his only children, a boy and two girls. By 1796, when he was 68, he was producing about one-eighth of Britain's cast iron. He became "a titan" – very wealthy, and somewhat eccentric. His "iron madness" reached a peak in the 1790s, when he had almost everything around him made of iron, even several coffins and a massive obelisk to mark his grave, which still stands in the village of Lindale-in-Cartmel, now in Cumbria. He was appointed Sheriff of Denbighshire for 1799. He died on 14 July 1808 at his works in Bradley, probably from diabetes. He was originally buried at his Castlehead estate at Grange-over-Sands, raised above the adjoining moss lands which he drained and improved from 1778 onwards. He left a very large estate in his will (more than £130,000 - equivalent to £12,810,000 in 2023), to which he intended to make his three children the principal heirs, with executors to manage the estate for them. However his nephew Thomas Jones contested the will in the Court of Chancery. By 1828, the estate had largely been dissipated by lawsuits and poor management. His corpse, in its distinctive iron coffin, was moved several times over the next decades, but is now lost.
Wikipedia/John_Wilkinson_(industrialist)
From the mid-1980s to September 2003, the inflation-adjusted price of a barrel of crude oil on NYMEX was generally under US$25/barrel in 2008 dollars. During 2003, the price rose above $30, reached $60 by 11 August 2005, and peaked at $147.30 in July 2008. Commentators attributed these price increases to multiple factors, including Middle East tension, soaring demand from China, the falling value of the U.S. dollar, reports showing a decline in petroleum reserves, worries over peak oil, and financial speculation. For a time, geopolitical events and natural disasters had strong short-term effects on oil prices, such as North Korean missile tests, the 2006 conflict between Israel and Lebanon, worries over Iranian nuclear plans in 2006, Hurricane Katrina, and various other factors. By 2008, such pressures appeared to have an insignificant impact on oil prices given the onset of the global recession. The recession caused demand for energy to shrink in late 2008, with oil prices collapsing from the July 2008 high of $147 to a December 2008 low of $32. However, it has been disputed that the laws of supply and demand of oil could have been responsible for an almost 80% drop in the oil price within a six-month period. Oil prices stabilized by August 2009 and generally remained in a broad trading range between $70 and $120 through November 2014, before returning to 2003 pre-crisis levels by early 2016, as US production increased dramatically. The United States went on to become the largest oil producer by 2018. == New inflation-adjusted peaks == The price of crude oil in 2003 traded in a range between $20–$30/bbl. Between 2003 and July 2008, prices steadily rose, reaching $100/bbl in late 2007, coming close to the previous inflation-adjusted peak set in 1980. A steep rise in the price of oil in 2008 – also mirrored by other commodities – culminated in an all-time high of $147.27 during trading on 11 July 2008, more than a third above the previous inflation-adjusted high. High oil prices and economic weakness contributed to a demand contraction in 2007–2008. In the United States, gasoline consumption declined by 0.4% in 2007, then fell by 0.5% in the first two months of 2008 alone. Record-setting oil prices in the first half of 2008 and economic weakness in the second half of the year prompted a 1.2 Mbbl (190,000 m3)/day contraction in US consumption of petroleum products, representing 5.8% of total US consumption, the largest annual decline since 1980 at the climax of the 1979 energy crisis. == Possible causes == === Demand === World crude oil demand grew an average of 1.76% per year from 1994 to 2006, with a high of 3.4% in 2003–2004. World demand for oil is projected to increase 37% over 2006 levels by 2030, according to the 2007 U.S. Energy Information Administration's (EIA) annual report. In 2007, the EIA expected demand to reach an ultimate high of 118 million barrels per day (18.8×10^6 m3/d), from 2006's 86 million barrels (13.7×10^6 m3), driven in large part by the transportation sector. A 2008 report from the International Energy Agency (IEA) predicted that although drops in petroleum demand due to high prices have been observed in developed countries and are expected to continue, a 3.7 percent rise in demand by 2013 is predicted in developing countries. This is projected to cause a net rise in global petroleum demand during that period. Transportation consumes the largest proportion of energy, and has seen the largest growth in demand in recent decades. 
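As a rough consistency check on the demand figures quoted above, a short calculation (illustrative only; the 2006 baseline, the 37% rise, and the 1994–2006 growth rate are taken from the text, everything else is simple arithmetic) shows that a 37% increase over 86 million barrels per day does land near the 118 million figure, and that it implies a slower compound growth rate than the 1.76% per year observed from 1994 to 2006.

```python
# Illustrative arithmetic on the demand figures quoted above (not EIA code).
base_2006 = 86.0       # million barrels per day, 2006 level cited in the text
projected_rise = 0.37  # 37% increase over 2006 levels projected for 2030

projected_2030 = base_2006 * (1 + projected_rise)
print(f"Projected 2030 demand: {projected_2030:.0f} Mb/d")  # ~118 Mb/d

# Implied compound annual growth rate over the 24 years from 2006 to 2030
years = 2030 - 2006
implied_cagr = (1 + projected_rise) ** (1 / years) - 1
print(f"Implied growth: {implied_cagr:.2%} per year")  # ~1.3%/yr, below the 1.76%/yr of 1994-2006
```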
This growth has largely come from new demand for cars and other personal-use vehicles powered by internal combustion engines. This sector also has the highest consumption rates, accounting for approximately 55% of oil use worldwide as documented in the Hirsch report and 68.9% of the oil used in the United States in 2006. Cars and trucks are predicted to cause almost 75% of the increase in oil consumption by India and China between 2001 and 2025. In 2008, auto sales in China were expected to grow by as much as 15–20 percent, resulting in part from economic growth rates of over 10 percent for five years in a row. Demand growth is highest in the developing world, but the United States is the world's largest consumer of petroleum. Between 1995 and 2005, US consumption grew from 17.7 million barrels (2,810,000 m3) a day to 20.7 million barrels (3,290,000 m3) a day, an increase of 3 million barrels (480,000 m3) a day. China, by comparison, increased consumption from 3.4 million barrels (540,000 m3) a day to 7 million barrels (1,100,000 m3) a day, an increase of 3.6 million barrels (570,000 m3) a day, in the same time frame. Per capita, annual consumption is 24.85 barrels (3.951 m3) by people in the US, 1.79 barrels (0.285 m3) in China, and 0.79 barrels (0.126 m3) in India. As countries develop, industry, rapid urbanization and higher living standards drive up energy use, most often of oil. Thriving economies such as China and India are quickly becoming large oil consumers. China has seen oil consumption grow by 8% yearly since 2002, doubling from 1996–2006. Although swift continued growth in China is often predicted, others predict that China's export-dominated economy will not continue such growth trends due to wage and price inflation and reduced demand from the US. India's oil imports are expected to more than triple from 2005 levels by 2020, rising to 5 million barrels per day (790×10^3 m3/d). Another large factor on petroleum demand has been human population growth. Because world population grew faster than oil production, production per capita peaked in 1979 (preceded by a plateau during the period of 1973–1979). The world’s population in 2030 is expected to be double that of 1980. ==== Role of fuel subsidies ==== State fuel subsidies shielded consumers in multiple nations from higher market prices, but a number of these subsidies were reduced or removed as the governmental cost rose. In June 2008, AFP reported that: China became the latest Asian nation to curb energy subsidies last week after hiking retail petrol and diesel prices as much as 18 percent... Elsewhere in Asia, Malaysia has hiked fuel prices by 41 percent and Indonesia by around 29 percent, while Taiwan and India have also raised their energy costs. In the same month, Reuters reported that: Countries like China and India, along with Gulf nations whose retail oil prices are kept below global prices, contributed 61 percent of the increase in global consumption of crude oil from 2000 to 2006, according to JPMorgan. Other than Japan, Hong Kong, Singapore and South Korea, most Asian nations subsidize domestic fuel prices. The more countries subsidize them, the less likely high oil prices will have any affect [sic] in reducing overall demand, forcing governments in weaker financial situations to surrender first and stop their subsidies. That is what happened over the past two weeks. Indonesia, Taiwan, Sri Lanka, Bangladesh, India, and Malaysia have either raised regulated fuel prices or pledged that they will. 
The Economist reported: "Half of the world's population enjoys fuel subsidies. This estimate, from Morgan Stanley, implies that almost a quarter of the world's petrol is sold at less than the market price." U.S. Secretary of Energy Samuel Bodman stated that around 30 million barrels per day (4,800,000 m3/d) of oil consumption (over a third of the global total) was subsidized. === Supply === An important contributor to the price increase was the slowdown in oil supply growth, which has been a general trend since oil production surpassed new discoveries in 1980. The likelihood that global oil production will decline at some point, leading to lower supply, is a long-term fundamental cause of rising prices. Although there is contention about the exact time at which global production will peak, a majority of industry participants acknowledge that the concept of a production peak is valid. However, some commentators argued that global warming awareness and new energy sources would limit demand before the effects of supply could, suggesting that reserve depletion would be a non-issue. A large factor in the lower supply growth of petroleum has been that oil's historically high ratio of Energy Returned on Energy Invested is in significant decline. Petroleum is a limited resource, and the remaining accessible reserves are consumed more rapidly each year. Remaining reserves are increasingly difficult to extract and therefore more expensive. Eventually, reserves will only be economically feasible to extract at extremely high prices. Even if total oil supply does not decline, increasing numbers of experts believe the easily accessible sources of light sweet crude are almost exhausted and in the future the world will depend on more-expensive unconventional oil reserves and heavy crude oil, as well as renewable energy sources. It is thought by a number of people, including energy economists such as Matthew Simmons, that prices could continue to rise indefinitely until a new market equilibrium is reached at which point supply satisfies worldwide demand. Timothy Kailing, in a 2008 Journal of Energy Security article, pointed out the difficulty of increasing production in mature petroleum regions, even with vastly increased investment in exploration and production. By looking at the historical response of production to variation in drilling effort, he claimed that very little increase of production could be attributed to increased drilling. This was due to a tight quantitative relationship of diminishing returns with increasing drilling effort: As drilling effort increased, the energy obtained per active drill rig was reduced according to a severely diminishing power law. This analysis suggested that even an enormous increase of drilling effort was unlikely to lead to significantly increased oil and gas production in a mature petroleum region like the United States. A prominent example of investment in non-conventional sources is seen in the Canadian oil sands. They are a far less cost-efficient source of heavy, low-grade oil than conventional crude; but when oil trades above $60/bbl, the tar sands become attractive to exploration and production companies. While Canada's oil sands region is estimated to contain as much "heavy" oil as all the world's reserves of "conventional" oil, efforts to economically exploit these resources lag behind the increasing demand of recent years. 
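Kailing's drilling-effort argument above can be made concrete with a toy model: if the energy recovered per active rig falls as a power law in the number of rigs, total output grows only weakly with drilling effort. The following is a minimal sketch of that idea; the scale factor and exponent are hypothetical, chosen only to show the shape of the relationship, and are not fitted to any real drilling data.

```python
# Illustrative diminishing-returns power law for drilling effort.
# energy_per_rig = k * rigs**(-b), so total output = k * rigs**(1 - b).
# The constants k and b are hypothetical; they are not estimates from the article.

def total_output(rigs: float, k: float = 100.0, b: float = 0.8) -> float:
    """Total energy output as a function of active rigs under a power-law decline."""
    return k * rigs ** (1.0 - b)

baseline_rigs = 1_000
for multiplier in (1, 2, 4, 8):
    rigs = baseline_rigs * multiplier
    gain = total_output(rigs) / total_output(baseline_rigs)
    print(f"{multiplier}x the rigs -> {gain:.2f}x the output")
```

With these assumed constants, doubling the rig count raises output by only about 15 percent, and an eightfold increase raises it by barely half, which is the qualitative pattern the analysis describes for mature petroleum regions.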
Until 2008, CERA (a consulting company wholly owned by energy consultants IHS Energy) did not believe this would be such an immediate problem. However, in an interview with The Wall Street Journal, Daniel Yergin, previously known for his quotes that the price of oil would soon return down to "normal", amended the company's position on 7 May 2008 to predict that oil would reach $150 during 2008, due to tightness of supply. This reversal of opinion was significant, as CERA, among other consultancies, provided price projections that were used by multiple official bodies to plan long-term strategy in respect of energy mix and price. Other major energy organisations, such as the International Energy Agency (IEA), had already been much less optimistic in their assessments for some time. In 2008, the IEA drastically accelerated its prediction of production decline for existing oilfields, from 3.7% a year to 6.7% a year, based largely on better accounting methods, including actual research of individual oil field production throughout the world. Terrorist and insurgent groups have increasingly targeted oil and gas installations, and succeeded in stopping a substantial volume of exports during the 2003–2008 height of the American occupation of Iraq. Such attacks are sometimes perpetrated by militias in regions where oil wealth has produced few tangible benefits for the local citizenry, as is the case in the Niger Delta. A number of factors have resulted in possible and/or actual concerns about the reduced supply of oil. The post-9/11 war on terror, labor strikes, hurricane threats to oil platforms, fires and terrorist threats at refineries, and other short-lived problems are not solely responsible for the higher prices. Such problems do push prices higher temporarily, but have not historically been fundamental to long-term price increases. === Investment/speculation demand === Investment demand for oil occurs when investors purchase futures contracts to buy a commodity at a set price for future delivery. "Speculators are not buying any actual crude. ... When [the] contracts mature, they either settle them with a cash payment or sell them on to genuine consumers." Several claims have been made implicating financial speculation as a major cause of the price increases. In May 2008 the transport chief for Germany's Social Democrats estimated that 25 percent of the rise to $135 a barrel had nothing to do with underlying supply and demand. Testimony was given to a U.S. Senate committee in May indicating that "demand shock" from "institutional investors" had increased by 848 million barrels (134,800,000 m3) over the previous five years, almost as much as the increased physical demand from China (920 million barrels (146,000,000 m3)). The influence of institutional investors, such as sovereign wealth funds, was also discussed in June 2008, when Lehman Brothers suggested that price increases were related to increases in exposure to commodities by such investors. It claimed that "for every $100 million in new inflows, the price of West Texas Intermediate, the U.S. benchmark, increased by 1.6%." 
Also in May 2008, an article in The Economist pointed out that oil futures transactions on the New York Mercantile Exchange (NYMEX), nearly mirrored the price of oil increases for a several-year period; however, the article conceded that the increased investment might be following rising prices, rather than causing them, and that the nickel market value had halved in the year between May 2007 and May 2008 despite significant speculative interest. It also reminded readers that "Investment can flood into the oil market without driving up prices because speculators are not buying any actual crude... no oil is hoarded or somehow kept off the market," and that prices of some commodities which are not openly traded have actually risen faster than oil prices. In June 2008, OPEC's Secretary General Abdallah Salem el-Badri stated that current world consumption of oil at 87 million bpd was far exceeded by the "paper market" for oil, which equaled about 1.36 billion bpd, or more than 15 times the actual market demand. An interagency task force on commodities markets was formed in the U.S. government to investigate the claims of speculators' influence on the petroleum market. The task force concluded in July 2008 that "market fundamentals" such as supply and demand provided the best explanations for oil price increases, and that increased speculation was not statistically correlated with the increases. The report also noted that increased prices with an elastic supply would cause increases in petroleum inventories. As inventories actually declined, the task force concluded that market pressures were most likely to blame. Other commodities that were not subject to market speculation (such as coal, steel, and onions) saw similar price increases over the same time period. In June 2008 U.S. energy secretary Samuel Bodman said that insufficient oil production, not financial speculation, was driving rising crude prices. He said that oil production had not kept pace with growing demand. "In the absence of any additional crude supply, for every 1% of crude demand, we will expect a 20% increase in price in order to balance the market," Bodman said. This contradicted earlier statements by Iranian OPEC governor Mohammad-Ali Khatibi indicating that the oil market was saturated and that an increase in production announced by Saudi Arabia was "wrong". In September 2008, Masters Capital Management released a study of the oil market, concluding that speculation did significantly impact the price. The study stated that over $60 billion was invested in oil during the first six months of 2008, helping drive the price per barrel from $95 to $147, and that by the beginning of September, $39 billion had been withdrawn by speculators, causing prices to fall. == Effects == There is debate over what the effects of the 2000s energy crisis will be over the long term. Some speculated that an oil-price spike could create a recession comparable to those that followed the 1973 and 1979 energy crises or a potentially worse situation such as a global oil crash. Increased petroleum prices are reflected in a vast number of products derived from petroleum, as well as those transported using petroleum fuels. Political scientist George Friedman has postulated that if high prices for oil and food persist, they will define the fourth distinct geopolitical regime since the end of World War II, the previous three being the Cold War, the 1989–2001 period in which economic globalization was primary, and the post-9/11 "war on terror". 
In addition to high oil prices, from year 2000 volatility in the price of oil has increased notably and this volatility has been suggested to be a cause of the 2008 financial crisis. The perceived increase in oil price differs internationally according to currency market fluctuations and the purchasing power of currencies. For example, excluding changes in relative purchasing power of various currencies, from 1 January 2002 to 1 January 2008: In US$, oil price rose from $20.37 to nearly $100, about 4.91 times as expensive; In the same period, the Taiwanese dollar gained value over the U.S. dollar to make oil in Taiwan 4.53 times as expensive; In the same period, the Japanese Yen gained value over the U.S. dollar to make oil in Japan 4.10 times as expensive; In the same period, the Euro gained value over the U.S. dollar to make oil in the Eurozone 2.94 times as expensive. On average, oil prices roughly quadrupled for these areas, triggering widespread protest activities. A similar price surge for petroleum-based fertilizers contributed to the 2007–08 world food price crisis and further unrest. In 2008, a report by Cambridge Energy Research Associates stated that 2007 had been the year of peak gasoline usage in the United States, and that record energy prices would cause an "enduring shift" in energy consumption practices. According to the report, in April gas consumption had been lower than a year before for the sixth straight month, suggesting 2008 would be the first year U.S. gasoline usage declined in 17 years. The total miles driven in the U.S. began declining in 2006. In the United States, oil prices contributed to inflation averaging 3.3% in 2005–2006, significantly above the average of 2.5% in the preceding 10-year period. As a result, during this period the Federal Reserve steadily raised interest rates to curb inflation. High oil prices typically affect less-affluent countries first, particularly the developing world with less discretionary income. There are fewer vehicles per capita, and oil is often used for electricity generation as well as private transport. The World Bank has looked more deeply at the effect of oil prices in the developing countries. One analysis found that in South Africa a 125 percent increase in the price of crude oil and refined petroleum reduces employment and GDP by approximately 2 percent, and reduces household consumption by approximately 7 percent, affecting mainly the poor. OPEC's annual oil export revenue surged to a new record in 2008, estimated around US$800 billion. == Forecasted prices and trends == According to informed observers, OPEC, meeting in early December 2007, seemed to desire a high but stable price that would deliver substantial needed income to the oil-producing states, but avoid prices so high that they would negatively impact the economies of the oil-consuming nations. A range of US$70–80 per barrel was suggested by some analysts to be OPEC's goal. In November 2008, as prices fell below $60 a barrel, the IEA warned that falling prices could lead to both a lack of investment in new sources of oil and a fall in production of more-expensive unconventional reserves such as the oil sands of Canada. The IEA's chief economist warned, "Oil supplies in the future will come more and more from smaller and more-difficult fields," meaning that future production requires more investment every year. 
A lack of new investment in such projects, which had already been observed, could eventually cause new and more-severe supply issues than had been experienced in the early 2000s, according to the IEA. Because the sharpest production declines had been seen in developed countries, the IEA warned that the greatest growth in production was expected to come from smaller projects in OPEC states, raising their world production share from 44% in 2008 to a projected 51% in 2030. The IEA also pointed out that demand from the developed world may have peaked, so that future demand growth was likely to come from developing nations such as China, contributing 43%, and India and the Middle East, each about 20%. == End of the crisis == By the beginning of September 2008, prices had fallen to $110. OPEC Secretary General El-Badri said that the organization intended to cut output by about 500,000 barrels (79,000 m3) a day, which he saw as correcting a "huge oversupply" due to declining economies and a stronger U.S. dollar. On 10 September, the International Energy Agency (IEA) lowered its 2009 demand forecast by 140,000 barrels (22,000 m3) to 87.6 million barrels (13,930,000 m3) a day. As countries throughout the world entered an economic recession in the third quarter of 2008 and the global banking system came under severe strain, oil prices continued to slide. In November and December, global demand growth fell, and U.S. oil demand fell an estimated 10% overall from early October to early November 2008 (accompanying a significant drop in auto sales). In their December meeting, OPEC members agreed to reduce their production by 2.2 million barrels (350,000 m3) per day, and said their resolution to reduce production in October had an 85% compliance rate. Petroleum prices fell below $35 in February 2009, but by May 2009 had risen back to mid-November 2008 levels around $55. The global economic downturn left oil-storage facilities with more oil than in any year since 1990, when Iraq's invasion of Kuwait upset the market. In early 2011, crude oil rebounded above US$100/bbl due to the Arab Spring protests in the Middle East and North Africa, including the 2011 Egyptian revolution, the 2011 Libyan civil war, and steadily tightening international sanctions against Iran. The oil price fluctuated around $100 through early 2014. By 2014–2015, the world oil market was again steadily oversupplied, led by an unexpected near-doubling in U.S. oil production from 2008 levels due to substantial improvements in shale "fracking" technology. By January 2016, the OPEC Reference Basket fell to US$22.48/bbl – less than one-sixth of its record from July 2008 ($140.73), and back below the April 2003 starting point ($23.27) of its historic run-up. OPEC production was poised to rise further with the lifting of Iranian sanctions, at a time when markets already appeared to be oversupplied by at least 2 million barrels per day. == Possible mitigations == Attempts to mitigate the impacts of oil price increases include: increasing the supply of petroleum; finding substitutes for petroleum; decreasing the demand for petroleum; attempting to reduce the impact of rising prices on petroleum consumers; and better urban planning with more emphasis on bike lanes, public transit, and high-density residential zoning. In mainstream economic theory, a free market rations an increasingly scarce commodity by increasing its price. A higher price should stimulate producers to produce more, and consumers to consume less, while possibly shifting to substitutes.
The first three mitigation strategies in the above list are, therefore, in keeping with mainstream economic theory, as government policies can affect the supply and demand for petroleum as well as the availability of substitutes. In contrast, the last type of strategy in the list (attempting to shield consumers from rising prices) would seem to work against classical economic theory, by encouraging consumers to overconsume the scarce quantity, thus making it even scarcer. To avoid creating outright shortages, attempts at price control may require some sort of rationing scheme. === Alternative propulsion === ==== Alternative fuels ==== Economists say that the substitution effect will spur demand for alternative fossil fuels, such as coal or liquefied natural gas, and for renewable energy, such as solar power, wind power, and advanced biofuels. For example, China and India are currently heavily investing in natural gas and coal liquefaction facilities. Nigeria is working on burning natural gas to produce electricity instead of simply flaring it, and all non-emergency gas flaring is to be forbidden after 2008. Outside the U.S., more than 50% of oil is consumed for stationary, non-transportation purposes such as electricity production, where it is relatively easy to substitute natural gas for oil. Oil companies, including the supermajors, have begun to fund research into alternative fuels; BP, for example, has committed half a billion dollars to research over the next several years. The motivations behind such moves are to acquire patent rights and an understanding of the technology, so that vertical integration of the future industry can be achieved. ==== Electric propulsion ==== The rise in oil prices caused renewed interest in electric cars, with several new models hitting the market, both hybrid and purely electric. The most successful among the former has been the Toyota Prius, and among the latter the cars of companies like Tesla. Several countries also incentivized the use of electric cars through tax breaks or subsidies, or by building charging stations. ==== High-speed rail ==== In a vein similar to the original TGV, which was switched from gas-turbine to electric propulsion after the 1973 oil crisis, several countries have renewed and increased their efforts for electric propulsion in their rail systems, specifically high-speed rail. Since 2003, the global high-speed rail network has almost doubled, and plans based on current construction would see it double again within the next ten to twenty years. China in particular went from having no high-speed rail whatsoever in 2003 to having the longest network in the world in 2015. === Bioplastics and bioasphalt === Another major factor in petroleum demand is the widespread use of petroleum products such as plastic. These could be partially replaced by bioplastics, which are derived from renewable plant feedstocks such as vegetable oil, cornstarch, pea starch, or microbiota. They are used either as a direct replacement for traditional plastics or as blends with traditional plastics. The most common end-use market is packaging materials. Japan has also been a pioneer in bioplastics, incorporating them into electronics and automobiles. Bioasphalt can also be used as a replacement for petroleum asphalt. === United States Strategic Fuel Reserve === The United States Strategic Petroleum Reserve could, on its own, supply current U.S.
demand for about a month in the event of an emergency, unless it were also destroyed or inaccessible in the emergency. This could potentially be the case if a major storm were to hit the Gulf of Mexico, where the reserve is located. While total consumption has increased, the western economies are less reliant on oil than they were twenty-five years ago, due both to substantial growth in productivity and to the growth of sectors of the economy with little oil dependence, such as finance, banking, and retail. The decline of heavy industry and manufacturing in most developed countries has reduced the amount of oil used per unit of GDP; however, since these goods are now imported anyway, there is less change in the oil dependence of industrialized countries than the direct consumption statistics indicate. === Fuel taxes === One recourse used and discussed in a number of developed countries with high fuel taxes has been to temporarily or permanently suspend these taxes as fuel costs rise, in order to soften the impact of oil shocks. France, Italy, and the Netherlands lowered taxes in 2000 in response to protests over high prices, but other European nations resisted this option because public service finance is partly based on energy taxes. The issue came up again in 2004, when oil reached $40 a barrel, prompting a meeting of 25 EU finance ministers to lower economic growth forecasts for that year. Because of budget deficits in several countries, the ministers decided to pressure OPEC to lower prices instead of lowering taxes. In 2007, European truckers, farmers, and fishermen again raised concerns over record oil prices cutting into their earnings, hoping to have taxes lowered. In the United Kingdom, where fuel taxes were raised in October and were scheduled to rise again in April 2008, there was talk of protests and roadblocks if the tax issue was not addressed. On 1 April 2008, a 25 yen per liter fuel tax in Japan was allowed to lapse temporarily. This method of softening price shocks is even less viable in countries with much lower fuel taxes, such as the United States. Decreasing fuel taxes locally can lower local pump prices, but because prices are ultimately set by global supply and demand, tax cuts may have little effect on underlying fuel prices, while tax increases might actually lower pre-tax fuel prices by reducing demand. The size of these effects depends on the price elasticity of demand for fuel, estimated at -0.09 to -0.31; fuel is a relatively inelastic commodity, so price increases or decreases have only a small effect on the quantity demanded. === Demand management === Transportation demand management has the potential to be an effective policy response to fuel shortages or price increases and has a greater probability of long-term benefits than other mitigation options. There are major differences in energy consumption for private transport between cities; an average U.S. urban dweller uses 24 times more energy annually for private transport than a Chinese urban resident. These differences cannot be explained by wealth alone but are closely linked to the rates of walking, cycling, and public transport use and to enduring features of the city, including urban density and urban design. For individuals, remote work provides alternatives to daily commuting and long-distance air travel for business, and technologies such as videoconferencing, e-mail, and corporate wikis continue to improve.
As the cost of moving human workers continues to rise, while the cost of moving information electronically continues to fall, presumably market forces should cause more people to substitute virtual travel for physical travel. Matthew Simmons explicitly calls for "liberating the workforce" by changing the corporate mindset from paying people to show up physically to work every day, to paying them instead for the work they do, from any location. This would allow more information workers to work from home either part-time or full-time, or from satellite offices or Internet cafes near to where they live, freeing them from long daily commutes to central offices. However, even full adoption of remote work by all eligible workers might only decrease energy consumption by about 1% (with present energy savings estimated at 0.01–0.04%). By comparison, a 20% increase in automobile fuel economy would save 5.4%. === Political action against market speculation === The price rises of mid-2008 led to a variety of proposals to change the rules governing energy markets and energy futures markets, in order to prevent rises due to market speculation. On 26 July 2008, the United States House of Representatives passed the Energy Markets Emergency Act of 2008 (H.R. 6377), which directs the Commodity Futures Trading Commission (CFTC) "to utilize all its authority, including its emergency powers, to curb immediately the role of excessive speculation in any contract market within the jurisdiction and control of the Commodity Futures Trading Commission, on or through which energy futures or swaps are traded, and to eliminate excessive speculation, price distortion, sudden or unreasonable fluctuations or unwarranted changes in prices, or other unlawful activity causing major market disturbances that prevent the market from accurately reflecting the forces of supply and demand for energy commodities." == See also == 2000s commodities boom 2020s commodities boom 1970s commodities boom Global energy crisis (2021–2023) 1970s energy crisis == Notes == == External links == U.S. DOE EIA energy chronology and analysis Oil Price History and Analysis
Wikipedia/2000s_energy_crisis
Abbeydale Industrial Hamlet is an industrial museum in the south of the City of Sheffield, England. The museum forms part of a former steel-working site on the River Sheaf, with a history going back to at least the 13th century. It consists of a number of dwellings and workshops that were formerly the Abbeydale Works—a scythe-making plant that was in operation until the 1930s—and is a remarkably complete example of a 19th-century works. The works are atypical in that much of the production process was completed on the same site (in a similar manner to a modern factory). A more typical example of water-powered works in the area can be found at Shepherd Wheel. The site is a scheduled monument; the works are Grade I listed, and the workers' cottages, counting house, and manager's house are Grade II* listed. == History == The site was used for iron forging for 500 years, although there is evidence of other metal working before 1200. Its early history is intimately tied to the nearby Beauchief Abbey, which operated a smithy (blacksmith's shop) in the vicinity as well as a number of mills along the River Sheaf. A 1725 map shows that the fields, subsequently flooded to provide the dam at the site, had been called "Sinder Hills", the cinders referring to the waste resulting from prior lead smelting activities in the area in the 16th and early 17th centuries. However, the "Abbey Dale Works" as such, the buildings of which now form the Abbeydale Industrial Hamlet, are first formally recorded in 1714 (though they may have derived directly from the "New Wheel" operated by Hugh Stephenson, as detailed in rent books from 1685). Development of the site continued with: the enlargement of the dam in 1777; the construction of the tilt hammer in 1785; the workmen's cottages in 1793; the grinding hull in 1817; the manager's house in 1838; the coach house and stabling in 1840; and the first-storey warehouse (above the blacking shop) in 1876. From the 17th century onwards, the site primarily operated as a scythe works until, in 1933, it was closed by Tyzack Sons and Turner (tenants since 1849). In 1935 it was bought by the Alderman J. G. Graves Trust, which donated the site to the city. The works was briefly reopened during the Second World War to aid in Britain's war effort. The Council for the Conservation of Sheffield Antiquities explored and initiated the restoration of Abbeydale Works in 1964. They discovered the remains of six buildings in addition to those still standing. These were identified from a 1924 map of the site as: a "disused hardening shop", a "disused open furnace shed", a "lime and coke shed", a "boiler house and chimney", the "housing for the steam engine", and a "store for clay and anvils". Following the complete restoration, the works were finally opened as a museum in 1970. Sheffield City Council closed the museum in 1997 as a cost-cutting measure. It was then leased to the Sheffield Industrial Museums Trust, which reopened the museum in 1998. == The museum == Abbeydale Industrial Hamlet is run as a working museum, with works and buildings dating from between 1714 and 1876. The museum demonstrates the process of making blister steel from iron and coke and then refining this steel using techniques that originated with Benjamin Huntsman's invention of the crucible steel process. The river provides water power via a water wheel.
There are several wheels on the site, driving a tilt hammer for the initial forging of the scythe blades; grinding machinery, which also has steam power installed as a backup for times of drought; and a set of bellows. The blades were also hand-forged for finishing. The museum is open Thursday to Sunday inclusive during peak season, and entry is free. == See also == Kelham Island Museum Shepherd Wheel == References == == External links == Official website
Wikipedia/Abbeydale_Industrial_Hamlet
The corporate debt bubble is the large increase in corporate bonds, excluding those of financial institutions, following the 2008 financial crisis. Global corporate debt rose from 84% of gross world product in 2009 to 92% in 2019, or about $72 trillion. In the world's eight largest economies—the United States, China, Japan, the United Kingdom, France, Spain, Italy, and Germany—total corporate debt was about $51 trillion in 2019, compared to $34 trillion in 2009. Excluding debt held by financial institutions—which trade debt in the form of mortgages, student loans, and other instruments—the debt owed by non-financial companies in early March 2020 was $13 trillion worldwide, of which about $9.6 trillion was in the U.S. The corporate bond market has historically been centered in the United States. The U.S. Federal Reserve noted in November 2019 that leveraged loans, i.e. loans made to companies with poor credit histories or large amounts of existing debt, were the fastest growing asset class, increasing in size by 14.6% in 2018 alone. Total U.S. corporate debt in November 2019 reached a record 47% of the entire U.S. economy. However, corporate borrowing expanded worldwide under the low interest rates of the Great Recession. Two-thirds of global growth in corporate debt occurred in developing countries, in particular China. The value of outstanding Chinese non-financial corporate bonds increased from $69 billion in 2007 to $2 trillion in 2017. In December 2019, Moody's Analytics described Chinese corporate debt as the "biggest threat" to the global economy. Regulators and investors have raised concern that large amounts of risky corporate debt have created a critical vulnerability for financial markets, in particular mutual funds, that could be exposed during the next recession. Former Fed Chair Janet Yellen has warned that the large amount of corporate debt could "prolong" the next recession and cause corporate bankruptcies. The Institute of International Finance forecast that, in an economic downturn half as severe as the 2008 crisis, $19 trillion in debt would be owed by non-financial firms without the earnings to cover the interest payments, referred to as zombie firms. The McKinsey Global Institute warned in 2018 that the greatest risks would be to emerging markets such as China, India, and Brazil, where 25–30% of bonds had been issued by high-risk companies. As of March 2021, U.S. corporations faced a record $10.5 trillion in debt. On March 31, 2021, the Commercial Paper Funding Facility re-established by the Federal Reserve the previous March ceased purchasing commercial paper. == Low bond yields led to purchase of riskier bonds == Following the 2008 financial crisis, the Federal Reserve Board lowered short- and long-term interest rates in order to convince investors to move out of interest-bearing assets and to match them with borrowers seeking capital. The resulting market liquidity was provided through two steps: cutting the Fed Funds rate, the rate that the Fed charges institutional investors to borrow money; and quantitative easing, whereby the Fed bought trillions of dollars of toxic assets, effectively creating functioning markets for these assets and reassuring investors. The success of the U.S. Fed in dropping interest rates to historically low levels and preventing illiquid markets from worsening the financial crisis prompted central banks around the world to copy these techniques.
However, the effect of quantitative easing was not limited to the toxic mortgage bonds targeted by central banks, as it effectively reduced the supply of bonds as a class, causing prices for bonds generally to rise and bond yields to fall. For over a decade, artificially low interest rates and bond yields have caused a "mispricing of risk" as investors continually seek out higher yields. As an example, high-yield debt, colloquially known as "junk bonds", has historically yielded 10% or more to compensate investors for the increased risk; in February 2020, the U.S. yield on these bonds dipped to nearly 5%. This indicates that investors flocking to higher yields have bought so much high-yield debt that it has driven the yield below the level needed to compensate for the risk. U.S. corporate bonds held by mutual funds had tripled over the previous decade. In June 2018, 22% of outstanding U.S. nonfinancial corporate debt was rated "junk", and a further 40% was rated one step above junk at "BBB", so that approximately two-thirds of all corporate debt was from companies at the highest risk of default, in particular retailers who were losing business to online services. The U.S. Fed noted in November 2019 that mutual funds held about one-sixth of outstanding corporate debt, but were acquiring one-fifth of new leveraged corporate loans. The size of high-yield corporate bond mutual funds, which specialize in riskier bonds, had doubled in the decade prior to 2019. While trade in corporate bonds has typically centered in the U.S., two-thirds of corporate debt growth since 2007 occurred in developing countries. China became one of the largest corporate bond markets in the world, with the value of Chinese corporate bonds increasing from $69 billion in 2007 to $2 trillion at the end of 2017. By mid-2018, total outstanding U.S. corporate debt reached 45% of GDP, which was larger than that seen during the dot-com bubble and subprime mortgage crisis. Noting negative bond yields in Switzerland, the United Kingdom, and the US in August 2019, Bloomberg News stated that effectively paying borrowers to borrow is distorting incentives and misallocating resources, concluding that bonds are in a bubble. == Low interest rates led to increasingly leveraged companies == Companies that do not make enough profit to pay off their debts and are only able to survive by repeatedly refinancing their loans, known as "zombie firms", have been able to turn over their debt because low interest rates increase the willingness of lenders to buy higher yield corporate debt, while the yield they offer on their bonds remains near historic lows. In a 2018 study of 14 rich countries, the Bank for International Settlements stated that zombie firms increased from 2% of all firms in the 1980s to 12% in 2016. By March 2020, one-sixth of all publicly traded companies in the U.S. did not make enough profit to cover the interest on their issued debt. In developing countries, high-risk bonds were concentrated in particular industries. In China, one-third of bonds issued by industrial companies and 28% of those issued by real-estate companies are at a higher risk of default, defined as having a times interest earned of 1.5 or less. In Brazil, one-quarter of all corporate bonds at a higher risk of default are in the industrial sector.
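Both the "zombie firm" description and the "times interest earned of 1.5 or less" threshold above reduce to the same interest coverage calculation: operating earnings divided by interest expense. The sketch below illustrates that classification; all of the company figures are invented for illustration and do not come from the article.

```python
# Interest coverage (times interest earned) = operating earnings / interest expense.
# A ratio below 1.0 means earnings do not cover interest (the "zombie firm" condition);
# the text cites 1.5 or less as the higher-risk threshold for Chinese and Brazilian issuers.
# All company figures below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    operating_earnings: float  # e.g. EBIT, in millions
    interest_expense: float    # in millions

    def coverage(self) -> float:
        return self.operating_earnings / self.interest_expense

firms = [
    Firm("HealthyCo", operating_earnings=500.0, interest_expense=100.0),
    Firm("StretchedCo", operating_earnings=120.0, interest_expense=90.0),
    Firm("ZombieCo", operating_earnings=40.0, interest_expense=60.0),
]

for firm in firms:
    ratio = firm.coverage()
    label = "zombie" if ratio < 1.0 else ("higher risk" if ratio <= 1.5 else "adequate")
    print(f"{firm.name}: coverage {ratio:.2f} -> {label}")
```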
Fitch stated in December 2019 that the majority of Chinese companies listed on A-share markets, namely the Shanghai Stock Exchange and Shenzhen Stock Exchange, were unable to repay their debt with their operational cash flow and required refinancing. Bond investing in Europe closely followed the actions of the European Central Bank, in particular the quantitative easing implemented in response to the European debt crisis. In June 2016, the ECB began using its corporate sector purchase programme (CSPP), acquiring 10.4 billion euro in non-financial corporate bonds in the first month of operation, with the explicit purpose of ensuring liquidity in the corporate bond market. News in mid-2019 that the ECB would restart its asset purchase program pushed the iBoxx euro corporate bond index, valued at $1.92 trillion, to record highs. The increased purchases resulted in 42% of European investment-grade corporate debt having a negative yield, as investors effectively paid less risky companies to borrow money. The Federal Reserve Bank of New York noted in January 2020 that only two U.S. firms had the highest rating of AAA, Johnson & Johnson and Microsoft, while there was an increased number of firms at the lowest-end, called BAA (on the Moody's rating scale) or BBB (on the S&P rating scale). Investment-grade firms, those with a rating between AAA and BAA, were more highly leveraged than the high-yield ("junk") firms. Observing that investors tend to divest bonds that are downgraded to high-yield, the New York Fed stated, "In the current corporate debt landscape, with a greater amount outstanding of BAA-rated corporate debt and higher net leverage of investment-grade debt overall, the possibility of a large volume of corporate bond downgrades poses a financial stability concern." Examples of leveraged corporate debt transactions include: Halliburton doubled its corporate debt to $11.5 billion between 2012 and 2020; it sold $1 billion in debt in early March 2020 with the explicit purpose of paying off existing debt and has $3.8 billion in debt payments due through 2026. AT&T debt ballooned to $180 billion following its acquisition of Time Warner in 2016. In 2018, Moody's declared AT&T to be "beholden to the health of the capital markets" because of its reliance on continued credit to service its debt load. KKR sold about $1.3 billion of cov-lite debt in 2017 to pay for its buyout of Unilever, despite Moody's rating the offer 4.99 on a scale of 1 to 5, with 5 being the riskiest. Kraft Heinz had its credit rating downgraded to BBB− or "junk" in February 2020 due to low earnings expectations and the firm's determination to use available capital to provide stock dividends rather than pay down debt. That month, Kraft Heinz had $22.9 billion in total debt with only $2.3 billion in cash assets. Corporations in the United States have used the debt to finance share buybacks, dividends, and mergers & acquisitions that boost share price to a greater extent than before. This has been done in place of long-term business investments and expansions. The U.S. Tax Cuts and Jobs Act of December 2017 offered a tax holiday under the logic that firms would use the extra profits to increase investments. Instead, it vastly increased an existing trend towards share buybacks, which increase the value of the remaining publicly traded shares and contributed to the rise of stock market indexes generally. 
While the S&P 500 has risen by over 300% from its low in the Great Recession, this rise is driven partly by the selling of corporate debt to purchase stock that becomes more expensive due to the purchases. The cyclically adjusted price-to-earnings ratio for the S&P 500 indicates it is the most overvalued it has been since the dot-com bubble and is around Wall Street crash of 1929 valuations. The McKinsey Global Institute cautioned in 2018 against excessive alarm, noting that if interest rates rose by 2%, less than 10% of bonds issued in all advanced economies would be at higher risk of default, with the percentage falling to less than 5% of European debt, which is largely issued by AAA-rated companies. == Search for yield results in growth in covenant light bonds == Most leveraged corporate bonds are "cov-lite", or covenant light, that do not contain the usual protections for purchasers of the debt. In some cases, cov-lite terms may force the purchaser of the debt to buy more debt. By mid-2018, 77.4% of U.S. leveraged corporate loans were cov-lite. Cov-lite loans as a percentage of outstanding leveraged loans in European markets reached 78% in 2018, compared to under 10% in 2013. Investors seeking stronger covenants lost the struggle with companies and private equity firms seeking to offload risk to the buyers of their debt. A writer for Bloomberg News opined in February 2020, "If and when the credit cycle turns, the aggressive push toward weakening protections virtually ensures that recovery rates will be worse than in 2008. But there's no going back now: The risky debt markets are full of cov-lite deals. Investors either have to acclimate to that reality or get out of high-yield and leveraged loans." == Chinese debt == The Chinese government's reaction to the 2008 financial crisis was to direct banks to loan to Chinese state-owned enterprises (SOEs), which then built factories and equipment to stimulate the economy despite the lack of demand for the products created. The economic activity of SOEs in 2017 was 22% of China's total GDP, though SOEs accounted for over half of China's corporate debt. It is often not clear the degree to which Chinese SOEs are owned by the state, making it difficult to differentiate corporate and sovereign debt. Government-directed lending gradually shifted from large banks offering loans to smaller local and provincial banks offering lightly regulated wealth management products. This "shadow banking" sector grew from $80 billion in 2006 to almost $9 trillion in 2018. In 2017, the International Monetary Fund estimated that 15.5% of all commercial bank loans in China were made to firms that did not have an operational cash flow sufficient to cover the interest on the loans. A 60% default rate of these loans could result in losses equal to 7% of Chinese GDP. In 2017, both Moody's and Standard & Poor's Financial Services LLC downgraded China's sovereign debt rating because of concerns about the health of the financial system. The Chinese government recognized the risk posed by corporate debt. The 13th Five-Year Plan, unveiled in 2015, included financial reforms to reduce capacity in highly leveraged sectors. There were a wide variety of other policies and restrictions implemented to reduce debt burdens and manage the failure of zombie firms. In 2017, the government established the Financial Stability and Development Committee, chaired by Vice-Premier Liu He, to coordinate financial regulation, with the full impact of new regulations expected in 2021. 
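The IMF figure cited a few sentences above is an expected-loss calculation: the share of loans at risk, times an assumed default rate, times a loss given default, applied to the total stock of bank lending and compared with GDP. The sketch below reproduces that arithmetic; the at-risk share (15.5%) and default rate (60%) come from the text, while the loan stock, GDP, and loss-given-default figures are illustrative assumptions chosen only to show how a roughly 7%-of-GDP loss can arise.

```python
# Expected-loss arithmetic behind the IMF-style estimate quoted above:
# losses = total bank loans x share at risk x default rate x loss given default.
# The 15.5% at-risk share and 60% default rate are from the text; the loan stock,
# GDP, and loss-given-default values are assumptions for illustration only.

total_bank_loans = 20.0e12   # assumed commercial bank loan book, USD
gdp = 12.0e12                # assumed GDP, USD
share_at_risk = 0.155        # loans to firms that cannot cover interest (from the text)
default_rate = 0.60          # default rate among at-risk loans (from the text)
loss_given_default = 0.45    # assumed: lenders recover 55 cents on the dollar

losses = total_bank_loans * share_at_risk * default_rate * loss_given_default
print(f"estimated losses: ${losses / 1e12:.2f} trillion "
      f"({100 * losses / gdp:.1f}% of GDP)")
```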
The China–United States trade war that began in 2018 forced the government to pause debt reduction efforts in order to emphasize stimulus as both domestic and global demand for Chinese products fell. Government attempts to crack down on risky debt combined with the economic slowdown to quadruple the size of defaults on yuan-denominated bonds from 2017 to 2018. The government subsequently encouraged banks to increase lending, in particular to small struggling firms. In the first half of 2019, local governments issued $316.5 billion in bonds. In December 2019, both Moody's Analytics and Fitch warned that Chinese debt was the biggest threat within the "fault line in the financial system and the broader economy" posed by overall corporate debt. Fitch noted that 4.9% of Chinese private companies had defaulted on bond payments in first 11 months of 2019, compared to 0.6% in all of 2014. == Potential role of corporate debt in a future recession == The Organisation for Economic Co-operation and Development noted in February 2020 that "today’s stock of outstanding corporate bonds has lower overall credit quality, higher payback requirements, longer maturities and inferior covenant protection" that "may amplify the negative effects that an economic downturn would have on the non-financial corporate sector and the overall economy". If the corporate debt bubble bursts, the bonds would be repriced, resulting in a massive loss by the mutual funds, high-yield funds, pension funds, and endowments with corporate bond assets. As with the 2008 crisis, this may result in increased caution by lenders and the shrinking of the entire bond market, resulting in higher rates for individual consumers for mortgages, car loans, and small-business loans. The International Monetary Fund conducted a stress test for a hypothetical shock half as large as the 2008 crisis and found that $19 trillion of corporate debt from eight countries—China, the United States, Japan, the United Kingdom, France, Spain, Italy, and Germany—representing roughly 40% of all corporate debt would be at risk of default because it would be difficult for companies to raise cash to repay loans that come due. In contrast, other observers believed that a crisis could be averted, noting that banks are better capitalized and central banks more responsive than in the 2008 financial crisis. In 2019, the McKinsey Global Institute expressed doubt that defaults in the corporate debt market would result in systemic collapses like that caused by the subprime mortgage crisis. On 12 March 2020, Kenneth Rogoff of Harvard University stated, "I don’t think we have anything shaping up like 2008 or 1929, particularly in the United States." Though he later revised as the situation worsened, stating on 30 March, "there is a good chance it will look as bad as anything over the last century and half." === Concern about COVID-19–related economic turmoil === Several financial commentators expressed alarm at the economic fallout of the COVID-19 pandemic and related collapse of the agreement between OPEC and non-OPEC producers, particularly Russia, to prop up crude oil prices and resulting stock market crash during the week of 9 March 2020. The concern is that this economic instability may initiate the collapse of the corporate debt bubble. The total economic debt owed by non-financial companies in early March was $13 trillion worldwide, of which about $9.6 trillion was in the U.S. 
The Chief Investment Officer of Guggenheim Partners noted on 9 March 2020, "the overleveraged corporate sector [is] about to face the prospect that new-issue bond markets may seize up, as they did last week, and that even seemingly sound companies will find credit expensive or difficult to obtain ... Our estimate is that there is potentially as much as a trillion dollars of high-grade bonds heading to junk. That supply would swamp the high yield market as it would double the size of the below investment grade bond market. That alone would widen [yield] spreads even without the effect of increasing defaults." At end of the trading day on 9 March the yield spread for junk bonds reached 6.68% from a low of 3.49% on 6 January, as sellers attempted to lure cautious traders with higher yields. The bonds of firms in the energy sector, who make up about 10% of the total junk bond market and were particularly exposed to the Saudi-Russian oil price war, suffered large yield spreads. A debt default by energy companies would harm the regional banks of Texas and Oklahoma, potentially causing a chain reaction through the corporate bond market. On 12 March, the spread on junk bonds over U.S. Treasuries increased to 7.42% in U.S. markets, the highest level since December 2015, indicating less willingness to buy corporate debt. As the airline and oil industries faced dire consequences from the economic slowdown and the Russia–Saudi Arabia oil price war, investors became increasingly concerned that corporate bond fund managers dealing with redemption requests from clients would be forced to engage in forced liquidation, potentially prompting other investors to try to sell first, driving down the value of the bonds, and increasing the cash crunch on investors. A concern is that companies, unable to cover their debt, will draw down their credit lines to banks, thereby reducing bank liquidity. An example is Boeing, which declared on 11 March that it would draw down the entirety of a $13.825 billion line of credit meant to cover costs related to the Boeing 737 MAX groundings to "preserve cash", resulting in an 18% drop in its stock. While U.S. banks should have capacity to supply liquidity to companies due to post-2008 crisis regulations, analysts are concerned about funds holding bonds, which were also seeking to build cash reserves in anticipation of imminent client withdrawals during the economic turmoil. In the week of 9 March, investors pulled a record $15.9 billion from investment-grade bond funds and $11.2 billion from high-yield bond funds, the second-highest on record. As of 13 March, the market was pricing in about a 50% chance of recession, indicating future strain if a recession actually came to pass. From 20 February to 16 March 2020, the yield of the iBoxx euro liquid high-yield index doubled. The market for new European junk-rated corporate debt, including leveraged debt, had effectively disappeared. Around 38 billion euros of debt is due by junk-rated corporate and financial issuers in European currencies by the end of 2021. Analysts were concerned that Eurozone companies vulnerable to the COVID-19 economic downturn, and with debt coming due over the next two years, would be unable to refinance their debt and would be forced to restructure. One U.S. analyst on 16 March opined, "The longer the pandemic lasts, the greater the risk that the sharp downturn morphs into a financial crisis with zombie companies starting a chain of defaults just like subprime mortgages did in 2008." 
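The spread and yield moves described above translate into bond price losses through the standard modified-duration approximation: the percentage price change is roughly the negative of duration times the yield change. A minimal sketch of that approximation follows; the duration and the size of the yield moves are hypothetical, chosen only to indicate the scale implied by the figures quoted above.

```python
# Rough price impact of a yield move using the modified-duration approximation:
# %price change ~ -duration x yield change (in percentage points).
# The duration and both yield-move scenarios below are assumptions for illustration;
# the ~3.2-point spread widening echoes the 3.49% -> 6.68% move quoted above, and the
# 4-point move assumes the euro high-yield index started from roughly a 4% yield.

def price_change_pct(duration_years: float, yield_change_pp: float) -> float:
    """Approximate percentage price change for a yield change given in percentage points."""
    return -duration_years * yield_change_pp

duration = 4.0  # assumed modified duration of a high-yield index, in years
scenarios = {
    "spread widening of ~3.2 points (6 Jan to 9 Mar)": 3.2,
    "index yield doubling (20 Feb to 16 Mar)": 4.0,
}

for label, move in scenarios.items():
    print(f"{label}: approx {price_change_pct(duration, move):.1f}% price change")
```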
On 19 March, the European Central Bank announced a 750 billion euro ($820 billion) bond-buying program, called the Pandemic Emergency Purchase Programme, to calm European debt markets. The PEPP and corporate sector purchase programme were authorized to buy non-financial commercial paper. ==== Fed actions soothe U.S. markets at the end of March ==== In the week of 23 March, investment-grade firms in the US issued $73 billion in debt, about 21% higher than the previous record set in 2013. These firms sought to build cash reserves prior to the full impact of the recession. For example, the retail-focused businesses Nike, Inc. and The Home Depot began the week borrowing $6 billion and $5 billion, respectively. Unusually, about 25% of the debt was being bought by investors who typically trade stocks. Investment-grade yields had increased as mutual funds and money-market funds sold their short-term bonds to meet client redemptions in previous weeks. The high yields and the announcement by the Fed that it would purchase investment-grade bonds to ensure market liquidity attracted hedge funds and other non-traditional buyers seeking a refuge from market volatility. The Fed's attempts to maintain corporate liquidity, including with $687 billion in support on 26 March, were primarily focused on companies with higher credit ratings. The Council on Foreign Relations opined that due to the dependence of riskier companies on commercial paper to meet short-term liabilities, there would be a large increase in corporate defaults, unless aid was extended to lower-rated borrowers. Also on 23 March, the People's Bank of China (PBOC) began open market operations to inject liquidity for the first time since 17 February, and lowered interest rates. Chinese firms had sold $445 billion in onshore (yuan-denominated) bonds in 2020, a 12% increase from the first quarter of 2019. This followed Chinese government efforts to increase liquidity, which drove interest rates to a 14-year low. Chinese debt yields remained stable. The amount of Chinese corporate bond defaults fell 30% in the first quarter, year on year, to less than 24 billion yuan ($3.4 billion) as banks rushed loans to stabilize businesses. While bankruptcies and job losses were avoided in the short term, unless demand for Chinese goods and services increases, the increased loans may turn into more corporate nonperforming debt. On 30 March, Moody's downgraded the outlook on U.S. corporate debt from stable to negative. It mentioned in particular firms in global air travel, lodging and cruise ships, automobiles, oil and gas, and the banking sector. Moody's also noted that $169 billion in corporate debt was due in 2020, and a further $300 billion in 2021, which would be difficult to roll over in the strained economic climate. At the end of March, Goldman Sachs estimated that $765 billion in U.S. corporate bonds had already experienced rating downgrades. Slippage of firms from investment-grade to junk status continued to pose a stability risk. Fitch forecast a doubling of defaults on US leveraged loans from 3% in 2019 to 5–6% in 2020, with a default rate for retail and energy companies of up to 20%. Fitch further forecast defaults in these two markets of 8–9% in 2020, totaling $200 billion over two years. The ability of NCR Corporation and Wynn Resorts to raise $1 billion in unsecured junk-rated debt on 7 April was seen as a sign of increased investor tolerance of risk. The previous week, Yum!
Brands and Carnival Corporation were able to issue debt secured against their assets. Some investment funds began spending up to $2.5 billion acquiring loans and bonds that they viewed as having become undervalued in the chaos of March. Also on 7 April, the Institute of International Finance identified the corporate sectors of five nations with high levels of debt and limited cash that were most at risk from COVID-19 disruption: Argentina, India, Spain, Thailand, and Turkey. On 8 April, South Korea began won-denominated debt purchases of up to $16 billion to provide liquidity to investment-grade firms. This enabled Lotte Food Co on 9 April to issue the first won-denominated debt in three weeks. Nevertheless, yields on won-denominated corporate debt were at their highest since 2012 amid pessimism about the global economic outlook and impacts upon South Korean firms. ==== Fed extends lifeline to "fallen angels" ==== On 9 April, following passage of the U.S. Coronavirus Aid, Relief, and Economic Security Act (CARES Act), the Fed announced that it would buy up to $2.3 trillion in debt from the U.S. market. This included the purchase of debt from "fallen angels", firms that were downgraded to junk after 22 March. The Fed's Primary and Secondary Market Corporate Credit Facilities total $750 billion. They are designed as credit backstops for U.S.-listed firms rated at least BBB-/Baa3; if downgraded to junk after 22 March, the firm must be rated at least BB-/Ba3 (the highest tier of junk) when a Facility buys the debt. The Fed's announcement drove a sharp rise in prices on junk bond exchange-traded funds and individual junk-rated bonds, such as Ford Motor Company and Macy's. Also on 9 April, ECB President Lagarde dismissed the idea of cancelling Eurozone corporate debt acquired during the COVID-19 crisis, calling it "totally unthinkable". This followed an opinion piece by former ECB President Mario Draghi arguing that national governments absorbing the cost of debt acquired by companies while economic activity was suspended would be ultimately less harmful to national economies than letting the companies default on their debt and go into restructuring. On 16 April, Bloomberg News reported that Chinese interest rates were so low that Chinese firms were being incentivized to sell short-term debt in order to buy high-yield and less-regulated wealth management products. This arbitrage was attractive as the poor economic climate reduced incentives to invest in fixed capital and labor. However, the trade is only low-risk if Chinese local and provincial banks remain solvent. UBS warned on 16 April that the amount of Eurozone BBB-rated debt had risen from $359 billion in 2011 to $1.24 trillion. UBS estimated a high risk of downgrades to junk status. The average of its models indicated that about $69 billion in euro-denominated non-financial corporate debt might be downgraded to high-yield status. There are many uncertainties, but UBS predicted downgrades comparable to those experienced in 2011–12 at the height of the European debt crisis, though not as severe as those experienced by Europe in the 2008 financial crisis. In mid-April, traders in Asian commodity markets reported that it was increasingly difficult to obtain short-term bank letters of credit to conduct deals.
Lenders reported that they were reducing exposure by refusing to lend to some smaller firms and demanding more collateral for the loans that they were making; some firms affected by COVID-19-related supply chain disruptions in the low-margin, high-volume commodity business found themselves unable to service their existing debt. One prominent Singaporean commodity firm, Agritrade International, had gone bankrupt after being unable to service $1.55 billion in debt, while another, Hin Leong Trading, was struggling to manage almost $4 billion in debt. The Chief Economist of trading giant Trafigura expressed concern that the credit squeeze in Asian commodity markets would spread to the United States and Europe, stating, "We have been talking about this as a series of cascading waves. First the virus, then the economic and then potentially the credit side of it." On 17 April, the $105 billion in debt issued by Mexican oil giant Pemex was downgraded to junk status, making it the largest company to fall from investment grade. However, its bond yields held steady as investors assumed an implicit guarantee by the Mexican government. On 19 April, The New York Times reported that U.S. corporations had drawn more than $200 billion from existing credit lines during the COVID-19 crisis, far more than had been extended in the 2008 crisis. It noted that debt-laden firms "may be forced to choose between skipping loan payments and laying off workers". The International Association of Credit Portfolio Managers forecast that credit risk would greatly increase over the next three months. Neiman Marcus missed payments on about $4.8 billion in debt and stated on 19 April that it would declare bankruptcy, in the context of the ongoing North American retail apocalypse. Ratings agencies had downgraded Neiman Marcus and J. C. Penney the previous week. J.C. Penney decided not to make a scheduled $12 million interest payment on a 2036 bond on 15 April and had a one-month grace period before creditors could demand payment. ==== Negative oil futures focused attention on the U.S. oil sector ==== On 20 April, May futures contracts for West Texas Intermediate crude oil fell to -$37.63 per barrel as uninterrupted supply met collapsing demand. Even reports that the U.S. administration was considering paying companies not to extract oil did not comfort U.S. oil companies. The head of U.S. oil services company Canary, LLC stated, "A tidal wave of bankruptcies is about to hit the sector." While oil company bonds had rallied after Fed actions earlier in the month, the collapse of oil prices undermined market confidence. Junk-rated U.S. shale oil companies comprised 12% of the benchmark iShares iBoxx $ High Yield Corporate Bond ETF, which fell 3% from 20 to 21 April. With crude prices so low, U.S. shale oil companies could not make money pumping more oil. MarketWatch noted that now "investors are likely to focus less on the viability of a driller’s operations and how cheaply it could unearth oil. Instead, money managers would look to assess if a company’s finances were resilient enough to stay afloat during the current economic downturn." The potential failure of highly leveraged U.S. shale oil bonds may pose a risk to the high-yield market as a whole. The airline Virgin Australia entered voluntary administration on 21 April, after being unable to manage $4.59 billion in debt. It named Deloitte as its administrator, with the intention of receiving binding offers on the entirety of the company and its operations by the end of June 2020. 
On 22 April, The New York Times reported that many smaller U.S. oil companies were expected to seek bankruptcy protection in the coming months. Oil production companies had $86 billion in debt coming due between 2020 and 2024, with oil pipeline companies having an additional $123 billion due over the same period. Many U.S. oil firms were operating on Federal loans offered through the CARES Act, but those funds were already running out. The president of oil developer Texland stated, "April is going to be terrible, but May is going to be impossible." Assets for companies in the U.S. car rental market, which were not included in the CARES Act, were under severe stress on 24 April. S&P Global Ratings had downgraded Avis and Hertz to "highly speculative", while credit default swaps for Hertz bonds indicated a 78% chance of default within 12 months and a 100% chance within five years. ANZ Bank reported in late April that corporate debt in Asia was rising fastest in China, South Korea, and Singapore. Energy companies in Singapore and in South Korea, in particular, were singled out for being "over-leveraged and short on cash buffers". In China, the real estate sector was similarly over-extended. Over 60% of outstanding Singaporean corporate debt was denominated in U.S. dollars, increasing exposure to foreign exchange risk, compared to only a fifth of South Korean corporate debt. ANZ Bank, noting that most Chinese corporate debt is owned by the state and has an implicit guarantee, concluded that Chinese corporations are the least vulnerable to debt loads. ==== Record debt purchases in April ==== Between 1 January and 3 May, a record $807.1 billion of U.S. investment-grade corporate bonds were issued. Similarly, U.S. corporations sold over $300 billion in debt in April 2020, a new record. This included Boeing, which sold $25 billion in bonds, stating that it would no longer need a bailout from the U.S. government; Apple, which borrowed $8.5 billion, potentially to pay back the $8 billion in debt coming due later in 2020; Starbucks, which raised $3 billion; Ford, which sold $8 billion in junk-rated bonds despite having just lost its investment-grade rating; and cruise line operator Carnival, which increased its offering to $4 billion to meet demand. The main reasons for the lively market were the low interest rates and the Fed's actions to ensure market liquidity. The iShares iBoxx USD Investment Grade Corporate Bond, an exchange-traded fund with assets directly benefiting from Fed actions, grew by a third between 11 March and the end of April. However, companies were growing increasingly leveraged as they increased their debt while earnings fell. Through the end of April 2020, investment-grade corporate bonds gained 1.4% versus Treasury bonds' 8.9%, indicating potential investor wariness about the risk of corporate bonds. Morgan Stanley estimated 2020 U.S. investment-grade bond issuance at $1.4 trillion, around 2017's record, while Barclays estimated that non-financial corporations would need to borrow $125–175 billion in additional debt to cover the drop in earnings from the pandemic recession. Warren Buffett noted that the terms offered by the Fed were far better than those that Berkshire Hathaway could offer. The Bank of Japan increased its holdings of commercial paper by 27.8% in April 2020, which followed a rise of 16.9% in March. Efforts to alleviate strain on Japanese corporate finances also included increasing BoJ corporate bond holdings by 5.27% in April. 
The chief market economist at Daiwa Securities noted, "The steps the BOJ has taken so far are aimed at preventing a worsening economy from triggering a financial crisis. We'll know around late June through July whether their plan will work." On 4 May, U.S. retailer J.Crew filed for bankruptcy protection to convert $1.6 billion in debt to equity. Its debt largely resulted from the 2011 leveraged buyout by its current owners. J.Crew became the first U.S. retailer to go bankrupt in the COVID-19 downturn. In the week of 4 May, the Chamber of Deputies in the National Congress of Brazil was seeking to pass an amendment to the Constitution that would allow the Central Bank of Brazil to buy private sector securities. However, the Central Bank was concerned that bank officials could face accusations of corruption for buying assets from individual companies and was seeking personal liability protection for Central Bank purchases. As of 6 May, the Fed had not yet utilized its Primary Market Corporate Credit and Secondary Market Corporate Credit facilities and had not explained how companies could be certified for these lending programs. However, investors had already bought debt as if the Fed backstop existed. Bank of America Global Research expressed concern that unless the Fed began actually buying debt, the uncertainty could further roil bond markets. A group of U.S. Republican lawmakers asked President Trump to mandate that loans be provided to U.S. energy companies through the Coronavirus Aid, Relief, and Economic Security Act's Main Street Lending Program. They specifically mentioned BlackRock, which is a fiduciary to the Federal Reserve Bank of New York and had declared in January that it was divesting itself of assets connected to power plant coal. Democratic lawmakers had previously called for oil and gas companies to be barred from Main Street facility loans. More than $1.9 billion in CARES Act benefits were being claimed by oil and oil services companies, using a tax provision that allowed companies to claim losses from before the pandemic using the highest tax rate of the previous five years, even if the losses did not happen under that tax rate. Dubbed a "stealth bailout" of the oil industry, the loss carryback provision was expected to cost at least $25 billion over 10 years. On 9 May, Goldman Sachs warned that U.S. investors might be overestimating the Fed guarantees to junk-rated debt. Between 9 April and 4 May, the two largest junk exchange-traded funds (ETFs), the SPDR Bloomberg Barclays High Yield Bond ETF and the iShares iBoxx $ High Yield Corporate Bond ETF, received $1.6 billion and $4.71 billion in net inflows, respectively. However, Goldman cautioned that even the BB-rated bonds that make up half of these two ETFs' portfolios were likely to experience further downgrades. State Street Global Advisors commented on the distortions being created by the Fed's implicit guarantee: "The disconnect between the underlying fundamentals of bond issuers and bond prices is tough to reconcile." On 12 May, the Fed began buying corporate bond ETFs for the first time in its history. It stated its intention to buy bonds directly "in the near future". As companies must prove that they cannot otherwise access normal credit to be eligible for the primary market facility, analysts opined that the facility might create a stigma for companies and be little used. However, the guarantee of a Fed backstop appeared to have ensured market liquidity. 
In its annual review on 14 May, the Bank of Canada concluded that its three interest rate cuts in March and its first-ever bond-buying program had succeeded in stabilizing Canadian markets. However, it expressed concern about the ability of the energy sector to refinance its debt given historically low oil prices. About C$17 billion in Canadian corporate bonds was sold in April 2020, one of the largest volumes since 2010. On 15 May, J. C. Penney filed for bankruptcy. It followed the filings of Neiman Marcus and J.Crew, but was the largest U.S. retailer to file by far. On 22 May, The Hertz Corporation filed for Chapter 11 bankruptcy. The filing allowed the company to continue operating while it attempted to work out a deal with its creditors. On 5 August, Virgin Atlantic filed for bankruptcy. On 12 March 2021, CNBC published a short video on the corporate debt/bond bubble. == See also == List of countries by corporate debt The Age of Debt Bubbles == Notes == == External links == "Age of Easy Money". FRONTLINE. Season 41. Episode 6. March 14, 2023. PBS. WGBH. Retrieved July 12, 2023.
Wikipedia/Corporate_debt_bubble
The Anglo-Saxon model (so called because it is practiced in Anglosphere countries such as the United Kingdom, the United States, Canada, New Zealand, Australia and Ireland) is a regulated market-based economic model that emerged in the 1970s based on the Chicago school of economics, spearheaded in the 1980s in the United States by the economics of then President Ronald Reagan (dubbed Reaganomics), and reinforced in the United Kingdom by then Prime Minister Margaret Thatcher (dubbed Thatcherism). However, its origins are said to date to the 18th century in the United Kingdom and the ideas of the classical economist Adam Smith. Characteristics of this model include low levels of regulation and taxation, with the public sector providing minimal services. It also entails strong private property rights, contract enforcement, and overall ease of doing business, as well as low barriers to free trade. == Disagreements over meaning == Proponents of the term "Anglo-Saxon economy" argue that the economies of these countries currently are so closely related in their liberal and free market orientation that they can be regarded as sharing a specific macroeconomic model. However, those who disagree with the use of the term claim that the economies of these countries differ as much from each other as they do from the so-called "welfare capitalist" economies of northern and continental Europe. The Anglo-Saxon model of capitalism is usually contrasted with the Continental model of capitalism, known as Rhine capitalism, the social market economy or the German model, but it is also contrasted with Northern-European models of capitalism found in the Nordic countries, called the Nordic model. The major difference between these economies and Anglo-Saxon economies is the scope of collective bargaining rights and corporatist policies. Differences between Anglo-Saxon economies are illustrated by taxation and the welfare state. The United Kingdom has a significantly higher level of taxation than the United States. Moreover, the United Kingdom spends far more than the United States on the welfare state as a percentage of GDP and also spends more than Spain, Portugal, or the Netherlands. This spending figure is still considerably lower than that of France or Germany. In northern continental Europe, most countries use mixed economy models, called Rhine capitalism (a current term used especially for the macroeconomics of Germany, France, Belgium and the Netherlands), or its close relative the Nordic model (which refers to the macroeconomics of Denmark, Iceland, Norway, Sweden and Finland). The debate amongst economists as to which economic model is better circles around perspectives involving poverty, job insecurity, social services and inequality. Generally speaking, advocates of the Anglo-Saxon model argue that more liberalized economies produce greater overall prosperity, while defenders of continental models counter that they produce less inequality and less poverty at the lowest margins. The rise of China has brought into focus the relevance of an alternate economic model which has helped propel the economy of China for thirty years since its opening up in 1978: the socialist market economy, a system based on what is called "socialism with Chinese characteristics". A confident China is increasingly offering it as an alternate development model to the Anglo-Saxon model to emerging economies in Africa and Asia. 
== History of Anglo-Saxon model == The Anglo-Saxon model emerged in the 1970s from the Chicago School of Economics. The return to economic liberalism in the Anglo-Saxon countries is explained by the failure of Keynesian economic management to control the stagflation in the 1970s and early 1980s. The Anglo-Saxon model drew on the ideas of Friedman and the Chicago School economists and on pre-Keynesian, liberal economic thinking, which held that success in fighting inflation depends on managing the money supply, and that unrestricted markets are the most efficient at utilizing resources and thus at combating inflation. By the end of the 1970s the British post-war economic model was in trouble. After Labour failed to solve the problems it was left to Margaret Thatcher's Conservatives to reverse Britain's economic decline. During Thatcher's second term in office the nature of the British economy and its society started to change. Marketization, privatization and the deliberate diminishing of the remnants of the post-war social-democratic model were all influenced by American ideas. The Thatcher era revived British social and economic thinking. It did not entail a wholesale import of American ideas and practices, so the British shift to the right did not cause any real convergence toward American socio-economic norms. However, over time the British view that European economies should be inspired by the success of the United States built an ideological proximity with the United States. After a process of transferring policy from the United States, it became apparent that a distinctive Anglo-Saxon economic model was forming. == Types of Anglo-Saxon economic models == According to some researchers, not all liberal economic models are created equal. There are different sub-types and variations among countries that practice the Anglo-Saxon model. One of these variations is the neo-classical economic liberalism exhibited in the American and British economies. The underlying assumption of this variation is that the inherent selfishness of individuals is transferred by the self-regulating market into general economic well-being, known as the invisible hand. In neo-classical economic liberalism, competitive markets should function as equilibrating mechanisms, which deliver both economic welfare and distributive justice. One of the main principles of economic liberalism in the United States and the United Kingdom, significantly influenced by Friedrich Hayek's ideas, is that government should regulate economic activity but the state should not get involved as an economic actor. The other variation of economic liberalism is the "balanced model" or ‘ordoliberalism’ (from ‘ordo’, the Latin word for ‘order’). Ordoliberalism denotes an ideal economic system that would be better ordered than the laissez-faire economy supported by classical liberals. After the 1929 Stock Market Crash and the Great Depression, the German Freiburg School's intellectuals argued that to ensure that the market functions effectively, government should undertake an active role, backed by a strong legal system and a suitable regulatory framework. They claimed that without a strong government, private interests would undercut competition in a system characterized by differences in relative power. 
Ordoliberals thought that liberalism (the freedom of individuals to compete in markets) and laissez-faire (the freedom of markets from government intervention) should be separated. Walter Eucken, the founding father and one of the most influential representatives of the Freiburg School, condemned classical laissez-faire liberalism for its ‘naturalistic naivety.’ Eucken stated that the market and competition can only exist if economic order is created by a strong state. The power of government should be clearly delimited, but within the area in which the state plays a role, it has to be active and powerful. For ordoliberals, the right kind of government is the solution to the problem. Alexander Rüstow claimed that government should refrain from getting too engaged in markets. He was against protectionism, subsidies or cartels. However, he suggested limited interventionism should be allowed as long as it went "in the direction of the market’s laws." Another difference between the two variations is that ordoliberals saw monopolies, rather than the state, as the main enemy of a free society. It is hard to empirically show a direct influence of the history of ordoliberalism on Australia or Canada. However, economic liberalism in Australia and Canada resembles German ordoliberalism much more than the neo-classical liberalism of the US and UK. Differing interpretations of the Anglo-Saxon economic school of thought, and especially different justifications and perceptions of state intervention in the economy, led to policy differences among these countries. These policies then persisted and shaped the relationship between the public and private sectors. For example, in the United States, the state enforces notably lower tax rates than in the United Kingdom. In addition, the government of the United Kingdom invests proportionately more money in welfare programs and social services than the government of the United States. == See also == == References == == Bibliography == == External links == IMF World Economic Outlook database CIA World Factbook
Wikipedia/Anglo-Saxon_model
The real interest rate is the rate of interest an investor, saver or lender receives (or expects to receive) after allowing for inflation. It can be described more formally by the Fisher equation, which states that the real interest rate is approximately the nominal interest rate minus the inflation rate. If, for example, an investor were able to lock in a 5% interest rate for the coming year and anticipated a 2% rise in prices, they would expect to earn a real interest rate of 3%. The expected real interest rate is not a single number, as different investors have different expectations of future inflation. Since the inflation rate over the course of a loan is not known initially, volatility in inflation represents a risk to both the lender and the borrower. In the case of contracts stated in terms of the nominal interest rate, the real interest rate is known only at the end of the period of the loan, based on the realized inflation rate; this is called the ex-post real interest rate. Since the introduction of inflation-indexed bonds, ex-ante real interest rates have become observable. == Compensation for lending == An individual who lends money for repayment at a later point in time expects to be compensated for the time value of money, or not having the use of that money while it is lent. In addition, they will want to be compensated for the expected value of the loss of purchasing power when the loan is repaid. These expected losses include the possibility that the borrower will default or be unable to pay on the originally agreed upon terms, or that collateral backing the loan will prove to be less valuable than estimated; the possibility of changes in taxation and regulatory changes which would prevent the lender from collecting on a loan or having to pay more in taxes on the amount repaid than originally estimated; and the loss of buying power compared to the money originally lent, due to inflation. Nominal interest rates measure the sum of the compensations for all three sources of loss, plus the time value of the money itself. Real interest rates measure the compensation for expected losses due to default and regulatory changes as well as measuring the time value of money; they differ from nominal rates of interest by excluding the inflation compensation component. On an economy-wide basis, the "real interest rate" in an economy is often considered to be the rate of return on a risk-free investment, such as US Treasury notes, minus an index of inflation, such as the rate of change of the CPI or GDP deflator. === Fisher equation === The relation between real and nominal interest rates and the expected inflation rate is given by the Fisher equation $1 + i = (1 + r)(1 + \pi_e)$, where $i$ is the nominal interest rate, $r$ is the real interest rate, and $\pi_e$ is the expected inflation rate. For example, if somebody lends $1000 for a year at 10%, and receives $1100 back at the end of the year, this represents a 10% increase in her purchasing power if prices for the average goods and services that she buys are unchanged from what they were at the beginning of the year. However, if the prices of the food, clothing, housing, and other things that she wishes to purchase have increased 25% over this period, she has, in fact, suffered a real loss of about 15% in her purchasing power. (Notice that the approximation here is a bit rough; since 1.1/1.25 − 1 = 0.88 − 1 = −0.12, the actual loss of purchasing power is exactly 12%.) 
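To make the two versions of the calculation above concrete, here is a minimal Python sketch of the exact Fisher relation and of the simple difference used informally in the lender and purchasing-power examples; the function names are illustrative, and the figures are the ones quoted in this section.

```python
# Illustrative sketch only: the exact Fisher relation and the simple-difference
# approximation used informally in the examples above. Function names are not
# from the article.

def real_rate_exact(nominal: float, inflation: float) -> float:
    """Exact Fisher relation, 1 + i = (1 + r)(1 + pi_e), solved for r."""
    return (1 + nominal) / (1 + inflation) - 1

def real_rate_approx(nominal: float, inflation: float) -> float:
    """Common approximation r = i - pi_e, reasonable when both rates are small."""
    return nominal - inflation

# Lender example from above: 5% nominal rate, 2% expected inflation.
print(round(real_rate_exact(0.05, 0.02), 4))   # 0.0294 -> about 2.9%
print(round(real_rate_approx(0.05, 0.02), 4))  # 0.03   -> the quoted 3%

# Purchasing-power example: 10% nominal return, 25% realized inflation.
print(round(real_rate_exact(0.10, 0.25), 4))   # -0.12  -> the "exactly 12%" loss
print(round(real_rate_approx(0.10, 0.25), 4))  # -0.15  -> the rougher "about 15%" figure
```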
If the inflation rate and the nominal interest rate are relatively low, the Fisher equation can be approximated by $r = i - \pi_e$. === After-tax real interest rate === The real return actually gained by a lender is lower if there is a non-zero tax rate imposed on interest earnings. Generally taxes are imposed on nominal interest earnings, not adjusted for inflation. If the tax rate is denoted as t, the before-tax nominal earning rate is i, the amount of taxes paid (per dollar or other unit invested) is i × t, and so the after-tax nominal earning is i × (1 − t). Hence the expected after-tax real return to the investor, using the simplified approximate Fisher equation above, is given by $\text{expected real after-tax return} = i(1 - t) - \pi_e$. === Variations in inflation === The inflation rate will not be known in advance. People often base their expectation of future inflation on an average of inflation rates in the past, but this gives rise to errors. The real interest rate ex-post may turn out to be quite different from the real interest rate (ex-ante real interest rate) that was expected in advance. Borrowers hope to repay in cheaper money in the future, while lenders hope to collect on more expensive money. When inflation and currency risks are underestimated by lenders, then they will suffer a net reduction in buying power. The complexity increases for bonds issued for a long term, where the average inflation rate over the term of the loan may be subject to a great deal of uncertainty. In response to this, many governments have issued real return bonds, also known as inflation-indexed bonds, in which the principal value and coupon rise each year with the rate of inflation, with the result that the interest rate on the bond approximates a real interest rate. (E.g., the three-month indexation lag of TIPS can result in a divergence of as much as 0.042% from the real interest rate, according to research by Grishchenko and Huang.) In the US, Treasury Inflation Protected Securities (TIPS) are issued by the US Treasury. The expected real interest rate can vary considerably from year to year. The real interest rate on short term loans is strongly influenced by the monetary policy of central banks. The real interest rate on longer term bonds tends to be more market driven, and in recent decades, with globalized financial markets, the real interest rates in the industrialized countries have become increasingly correlated. Real interest rates have been low by historical standards since 2000, due to a combination of factors, including relatively weak demand for loans by corporations, plus strong savings in newly industrializing countries in Asia. The latter has offset the large borrowing demands by the US Federal Government, which might otherwise have put more upward pressure on real interest rates. Related is the concept of "risk return", which is the rate of return minus the risks as measured against the safest (least-risky) investment available. Thus if a loan is made at 15% with an inflation rate of 5% and 10% in risks associated with default or problems repaying, then the "risk adjusted" rate of return on the investment is 0%. == Importance in economic theory == The amount of physical investment (in particular the purchasing of new machines and other productive capacity) that firms engage in partially depends on the level of real interest rates, because such purchases typically must be financed by issuing new bonds. 
If real interest rates are high, the cost of borrowing may exceed the real physical return of some potentially purchased machines (in the form of output produced); in that case those machines will not be purchased. Lower real interest rates would make it profitable to borrow to finance the purchasing of a greater number of machines. The real interest rate is used in various economic theories to explain such phenomena as capital flight, business cycles and economic bubbles. When the real rate of interest is high, because demand for credit is high, then the usage of income will, all other things being equal, move from consumption to saving, and physical investment will fall. Conversely, when the real rate of interest is low, income usage will move from saving to consumption, and physical investment will rise. Different economic theories, beginning with the work of Knut Wicksell, have had different explanations of the effect of rising and falling real interest rates. Thus, assuming risks are constant, international capital moves to markets that offer higher real rates of interest from markets that offer low or negative real rates of interest. Capital flows of this kind often reflect speculation in financial and foreign exchange rate markets. === Real federal funds rate === In setting monetary policy, the U.S. Federal Reserve (and other central banks) uses open market operations, affecting the amounts of very short-term funds (federal funds) supplied and demanded and thus affecting the federal funds rate. By targeting this at a low rate, they can encourage borrowing and thus economic activity; or the reverse by raising the rate. Like any interest rate, it has a nominal and a real value, defined as described above. Further, there is a concept called the "equilibrium real federal funds rate" (r*, or "r-star"), alternatively called the "natural rate of interest" or the "neutral real rate", which is the "level of the real federal funds rate, if allowed to prevail for several years, [that] would place economic activity at its potential and keep inflation low and stable." There are various methods used to estimate this amount, using tools such as the Taylor Rule. It is possible for this rate to be negative. == Negative real interest rates == The real interest rate solved from the Fisher equation is $\frac{1+i}{1+\pi} - 1 = r$. If there is a negative real interest rate, it means that the inflation rate is greater than the nominal interest rate. If the Federal funds rate is 2% and the inflation rate is 10%, then the borrower would gain 7.27% of every dollar borrowed per year: $\frac{1+0.02}{1+0.1} - 1 = -0.0727$. Negative real interest rates are an important factor in government fiscal policy. Since 2010, the U.S. Treasury has been obtaining negative real interest rates on government debt, meaning the inflation rate is greater than the interest rate paid on the debt. Such low rates, outpaced by the inflation rate, occur when the market believes that there are no alternatives with sufficiently low risk, or when popular institutional investments such as insurance companies, pensions, or bond, money market, and balanced mutual funds are required or choose to invest sufficiently large sums in Treasury securities to hedge against risk. Lawrence Summers stated that at such low rates, government debt borrowing saves taxpayer money, and improves creditworthiness. 
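Pulling together the after-tax formula from the previous subsection and the negative-rate arithmetic just shown, the short sketch below reproduces the −7.27% figure; the 30% tax rate in the second example is a hypothetical value chosen for illustration, not a figure from the article.

```python
# Illustrative sketch only; helper names are not from the article and the 30% tax rate
# is a hypothetical assumption.

def real_rate(nominal: float, inflation: float) -> float:
    """Real rate from the Fisher relation: (1 + i)/(1 + pi) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

def after_tax_real_return(nominal: float, tax_rate: float, inflation: float) -> float:
    """Approximate after-tax real return, i*(1 - t) - pi_e (taxes fall on nominal interest)."""
    return nominal * (1 - tax_rate) - inflation

# Negative real rate example from above: 2% federal funds rate, 10% inflation.
print(round(real_rate(0.02, 0.10), 4))                     # -0.0727 -> borrower gains ~7.27%/year

# After-tax example: 5% nominal rate, hypothetical 30% tax on interest, 2% expected inflation.
print(round(after_tax_real_return(0.05, 0.30, 0.02), 4))   # 0.015 -> 1.5% real return after tax
```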
From the late 1940s through the early 1970s, the US and UK both reduced their debt burden by about 30% to 40% of GDP per decade by taking advantage of negative real interest rates, but there is no guarantee that government debt rates will continue to stay so low. Between 1946 and 1974, the US debt-to-GDP ratio fell from 121% to 32%, even though there were budget surpluses in only eight of those years, and those surpluses were much smaller than the deficits. == See also == Real versus nominal value Shadow rate - a variation on the real interest rate Inflation Deflation IS–LM model Macroeconomics Financial repression Natural rate of interest == References == == External links == "Equilibrium Real Interest Rate," by Roger Ferguson, 2004. On the distinction between real return and nominal bonds, by Peter Spiro, 2004. Real interest rates by country via Quandl
Wikipedia/Real_interest_rate
In demography, demographic transition is a phenomenon and theory in the social sciences referring to the historical shift from high birth rates and high death rates to low birth rates and low death rates as societies attain more technology, education (especially of women), and economic development. The demographic transition has occurred in most of the world over the past two centuries, bringing the unprecedented population growth of the post-Malthusian period, then reducing birth rates and population growth significantly in all regions of the world. The demographic transition strengthens the economic growth process through three changes: a reduced dilution of capital and land stock, an increased investment in human capital, and an increased size of the labour force relative to the total population, together with a changed age distribution of the population. Although this shift has occurred in many industrialized countries, the theory and model are frequently imprecise when applied to individual countries due to specific social, political, and economic factors affecting particular populations. However, the existence of some kind of demographic transition is widely accepted because of the well-established historical correlation linking dropping fertility to social and economic development. Scholars debate whether industrialization and higher incomes lead to lower population or whether lower populations lead to industrialization and higher incomes. Scholars also debate to what extent various proposed and sometimes interrelated factors such as higher per capita income, lower mortality, old-age security, and a rising demand for human capital are involved. Human capital gradually increased in the second stage of the industrial revolution, which coincided with the demographic transition. The increasing role of human capital in the production process led families to invest in the human capital of their children, which may have marked the beginning of the demographic transition. == History == The theory is based on an interpretation of demographic history developed in 1930 by the American demographer Warren Thompson (1887–1973). Adolphe Landry of France made similar observations on demographic patterns and population growth potential around 1934. In the 1940s and 1950s Frank W. Notestein developed a more formal theory of demographic transition. In the 2000s Oded Galor researched the "various mechanisms that have been proposed as possible triggers for the demographic transition, assessing their empirical validity, and their potential role in the transition from stagnation to growth." In 2011, unified growth theory was completed, with the demographic transition becoming an important part of it. By 2009, the existence of a negative correlation between fertility and industrial development had become one of the most widely accepted findings in social science. The Jews of Bohemia and Moravia were among the first populations to experience a demographic transition, in the 18th century, prior to changes in mortality or fertility in other European Jews or in Christians living in the Czech lands. The demographer John Caldwell argued that fertility rates in the third world are not dependent on the spread of industrialization or even on economic development, and that fertility decline is more likely to precede industrialization, and to help bring it about, than to follow it. == Summary == The transition involves four stages, or possibly five. 
In stage one, pre-industrial society, death rates and birth rates are high and roughly in balance. All human populations are believed to have had this balance until the late 18th century when this balance ended in Western Europe. In fact, growth rates were less than 0.05% at least since the Agricultural Revolution over 10,000 years ago. Population growth is typically very slow in this stage because the society is constrained by the available food supply; therefore, unless the society develops new technologies to increase food production (e.g. discovers new sources of food or achieves higher crop yields), any fluctuations in birth rates are soon matched by death rates. In stage two, that of a developing country, the death rates drop quickly due to improvements in food supply and sanitation, which increase life expectancy and reduce disease. The improvements specific to food supply typically include selective breeding and crop rotation and farming techniques. Numerous improvements in public health reduce mortality, especially childhood mortality. Prior to the mid-20th century, these improvements in public health were primarily in the areas of food handling, water supply, sewage, and personal hygiene. One of the variables often cited is the increase in female literacy combined with public health education programs which emerged in the late 19th and early 20th centuries. In Europe, the death rate decline started in the late 18th century in northwestern Europe and spread to the south and east over approximately the next 100 years. Without a corresponding fall in birth rates this produces an imbalance, and the countries in this stage experience a large increase in population. In stage three, birth rates fall due to various fertility factors such as access to contraception, increases in wages, urbanization, a reduction in subsistence agriculture, an increase in the status and education of women, a reduction in the value of children's work, an increase in parental investment in the education of children, and other social changes. Population growth begins to level off. The birth rate decline in developed countries started in the late 19th century in northern Europe. While improvements in contraception do play a role in birth rate decline, contraceptives were not generally available nor widely used in the 19th century and as a result likely did not play a significant role in the decline then. It is important to note that birth rate decline is caused also by a transition in values, not just because of the availability of contraceptives. In stage four, there are low birth rates and low death rates. Birth rates may drop to well below replacement level, as has happened in countries like Germany, Italy, and Japan, leading to a shrinking population, a threat to many industries that rely on population growth. As the large group born during stage two ages, it creates an economic burden on the shrinking working population. Death rates may remain consistently low or increase slightly due to increases in lifestyle diseases due to low exercise levels and high obesity rates and an aging population in developed countries. By the late 20th century, birth rates and death rates in developed countries leveled off at lower rates. Some scholars break out, from stage four, a "stage five" of below-replacement fertility levels. Others hypothesize a different "stage five" involving an increase in fertility. As with all models, this is an idealized picture of population change in these countries. 
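To make the arithmetic behind these stages concrete, the sketch below compounds stylized crude birth and death rates over successive stages; the particular rates, the 50-year stage length and the starting population are illustrative assumptions rather than figures from the model itself.

```python
# Illustrative sketch only: stylized crude birth/death rates (per 1000 per year) for the
# four classic stages. All numbers are assumptions chosen to mimic the narrative above.
STAGES = [
    ("Stage 1: high birth and death rates", 40, 38),
    ("Stage 2: death rate falls first", 40, 15),
    ("Stage 3: birth rate follows", 25, 12),
    ("Stage 4: both rates low", 12, 11),
]

population = 1_000_000
for name, births_per_1000, deaths_per_1000 in STAGES:
    growth = (births_per_1000 - deaths_per_1000) / 1000  # crude rate of natural increase
    population = int(population * (1 + growth) ** 50)    # compound over a 50-year stage
    print(f"{name}: ~{growth:.1%}/yr natural increase, population -> {population:,}")
```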
The model is a generalization that applies to these countries as a group and may not accurately describe all individual cases. The extent to which it applies to less-developed societies today remains to be seen. Many countries such as China, Brazil and Thailand have passed through the Demographic Transition Model (DTM) very quickly due to fast social and economic change. Some countries, particularly African countries, appear to be stalled in the second stage due to stagnant development and the effects of under-invested and under-researched tropical diseases such as malaria and AIDS to a limited extent. == Stages == === Stage one === In pre-industrial society, death rates and birth rates were both high, fluctuating rapidly according to natural events, such as drought and disease, to produce a relatively constant and young population. Family planning and contraception were virtually nonexistent; therefore, birth rates were essentially only limited by the ability of women to bear children. Emigration depressed death rates in some special cases (for example, Europe and particularly the Eastern United States during the 19th century), but, overall, death rates tended to match birth rates, often exceeding 40 per 1000 per year. Children contributed to the economy of the household from an early age by carrying water, firewood, and messages, caring for younger siblings, sweeping, washing dishes, preparing food, and working in the fields. Raising a child cost little more than feeding him or her; there were no education or entertainment expenses. Thus, the total cost of raising children barely exceeded their contribution to the household. In addition, as they became adults they became a major input to the family business, mainly farming, and were the primary form of insurance for adults in old age. In India, an adult son was all that prevented a widow from falling into destitution. While death rates remained high there was no question as to the need for children, even if the means to prevent them had existed. During this stage, the society evolves in accordance with Malthusian paradigm, with population essentially determined by the food supply. Any fluctuations in food supply (either positive, for example, due to technology improvements, or negative, due to droughts and pest invasions) tend to translate directly into population fluctuations. Famines resulting in significant mortality are frequent. Overall, population dynamics during stage one are comparable to those of animals living in the wild. This is the earlier stage of demographic transition in the world and also characterized by primary activities such as small fishing activities, farming practices, pastoralism, and petty businesses. === Stage two === This stage leads to a fall in death rates and an increase in population. The changes leading to this stage in Europe were initiated in the Agricultural Revolution of the eighteenth century and were initially quite slow. In the twentieth century, the falls in death rates in developing countries tended to be substantially faster. Countries in this stage include Yemen, Afghanistan, and Iraq and much of Sub-Saharan Africa (but this does not include South Africa, Botswana, Eswatini, Lesotho, Namibia, Gabon and Ghana, which have begun to move into stage 3). The decline in the death rate is due initially to two factors: First, improvements in the food supply brought about by higher yields in agricultural practices and better transportation reduce death due to starvation and lack of water. 
Agricultural improvements included crop rotation, selective breeding, and seed drill technology. Second, significant improvements in public health reduce mortality, particularly in childhood. These are not so much medical breakthroughs (Europe passed through stage two before the advances of the mid-twentieth century, although there was significant medical progress in the nineteenth century, such as the development of vaccination) as they are improvements in water supply, sewerage, food handling, and general personal hygiene following from growing scientific knowledge of the causes of disease and the improved education and social status of mothers. A consequence of the decline in mortality in Stage Two is an increasingly rapid rise in population (a.k.a. the "population explosion") as the gap between deaths and births grows wider and wider. Note that this growth is not due to an increase in fertility (or birth rates) but to a decline in deaths. This change in population occurred in north-western Europe during the nineteenth century due to the Industrial Revolution. During the second half of the twentieth century less-developed countries entered Stage Two, creating the rapid worldwide growth in the number of living people that has demographers concerned today. In this stage of DT, countries are vulnerable to becoming failed states in the absence of progressive governments. Another characteristic of Stage Two of the demographic transition is a change in the age structure of the population. In Stage One, the majority of deaths are concentrated in the first 5–10 years of life. Therefore, the decline in death rates in Stage Two entails the increasing survival of children and a growing population. Hence, the age structure of the population becomes increasingly youthful, and more of these children enter the reproductive cycle of their lives while maintaining the high fertility rates of their parents. The bottom of the "age pyramid", where children, teenagers and infants are, widens first, accelerating the population growth rate. The age structure of such a population is illustrated by using an example from the Third World today. === Stage three === In Stage 3 of the Demographic Transition Model (DTM), death rates are low and birth rates diminish, as a rule as a result of improved economic conditions, an expansion in women's status and education, and access to contraception. The decrease in birth rate fluctuates from nation to nation, as does the time span in which it is experienced. Stage Three moves the population towards stability through a decline in the birth rate. Several fertility factors contribute to this eventual decline, and are generally similar to those associated with sub-replacement fertility, although some are speculative: In rural areas continued decline in childhood death meant that at some point parents realized that they did not need as many children to ensure a comfortable old age. As childhood death continues to fall and incomes increase, parents can become increasingly confident that fewer children will suffice to help in the family business and care for them in old age. Increasing urbanization changes the traditional values placed upon fertility and the value of children in rural society. Urban living also raises the cost of dependent children to a family. A recent theory suggests that urbanization also contributes to reducing the birth rate because it disrupts optimal mating patterns. 
A 2008 study in Iceland found that the most fecund marriages are between distant cousins. Genetic incompatibilities inherent in more distant out breeding makes reproduction harder. In both rural and urban areas, the cost of children to parents is exacerbated by the introduction of compulsory education acts and the increased need to educate children so they can take up a respected position in society. Children are increasingly prohibited under law from working outside the household and make an increasingly limited contribution to the household, as school children are increasingly exempted from the expectation of making a significant contribution to domestic work. Even in equatorial Africa, children (under the age of 5) are now required to have clothes and shoes, and may even need school uniforms. Parents begin to consider it a duty to buy children's books and toys. Partly due to education and access to family planning, people begin to reassess their need for children and their ability to raise them. Increasing literacy and employment lowers the uncritical acceptance of childbearing and motherhood as measures of the status of women. Working women have less time to raise children; this is particularly an issue where fathers traditionally make little or no contribution to child-raising, such as southern Europe or Japan. Valuation of women beyond childbearing and motherhood becomes important. Improvements in contraceptive technology are now a major factor in fertility decline. Changes in values regarding children and gender play as significant a role as the availability of contraceptives and knowledge of how to use them. The resulting changes in the age structure of the population include a decline in the youth dependency ratio and eventually population aging. The population structure becomes less triangular and more like an elongated balloon. During the period between the decline in youth dependency and rise in old age dependency there is a demographic window of opportunity that can potentially produce economic growth through an increase in the ratio of working age to dependent population; the demographic dividend. However, unless factors such as those listed above are allowed to work, a society's birth rates may not drop to a low level in due time, which means that the society cannot proceed to stage three and is locked in what is called a demographic trap. Countries that have witnessed a fertility decline of over 50% from their pre-transition levels include: Costa Rica, El Salvador, Panama, Jamaica, Mexico, Colombia, Ecuador, Guyana, Philippines, Indonesia, Malaysia, Sri Lanka, Turkey, Azerbaijan, Turkmenistan, Uzbekistan, Tunisia, Algeria, Morocco, Lebanon, South Africa, India, Saudi Arabia, and many Pacific islands. Countries that have experienced a fertility decline of 25–50% include: Guatemala, Tajikistan, Egypt and Zimbabwe. Countries that have experienced a fertility decline of less than 25% include: Sudan, Niger, Afghanistan. === Stage four === This occurs where birth and death rates are both low, leading to total population stability. Death rates are low for a number of reasons, primarily due to lower rates of diseases and increased food production. The birth rate is low because people have more opportunities to choose if they want children. This is made possible by improvements in contraception or women gaining more independence and work opportunities. The DTM (Demographic Transition model) is only a suggestion about the future population levels of a country, not a prediction. 
Countries that were at this stage (total fertility rate between 2.0 and 2.5) in 2015 include: Antigua and Barbuda, Argentina, Bahrain, Bangladesh, Bhutan, Cabo Verde, El Salvador, Faroe Islands, Grenada, Guam, India, Indonesia, Kosovo, Libya, Malaysia, Maldives, Mexico, Myanmar, Nepal, New Caledonia, Nicaragua, Palau, Peru, Seychelles, Sri Lanka, Suriname, Tunisia, Turkey, and Venezuela. === Stage five === The original Demographic Transition model has just four stages, but additional stages have been proposed. Both more-fertile and less-fertile futures have been claimed as a Stage Five. Some countries have sub-replacement fertility (that is, below 2.1–2.2 children per woman). Replacement fertility is generally slightly higher than 2 (the level which replaces the two parents, achieving equilibrium) both because boys are born more often than girls (about 1.05–1.1 to 1), and to compensate for deaths prior to full reproduction. Many European and East Asian countries now have higher death rates than birth rates. Population aging and population decline may eventually occur, assuming that the fertility rate does not change and sustained mass immigration does not occur. Using data through 2005, researchers have suggested that the negative relationship between development, as measured by the Human Development Index (HDI), and birth rates had reversed at very high levels of development. In many countries with very high levels of development, fertility rates were approaching two children per woman in the early 2000s. However, fertility rates declined significantly in many very high development countries between 2010 and 2018, including in countries with high levels of gender parity. The global data no longer support the suggestion that fertility rates tend to broadly rise at very high levels of national development. From the point of view of evolutionary biology, wealthier people having fewer children is unexpected, as natural selection would be expected to favor individuals who are willing and able to convert plentiful resources into plentiful fertile descendants. This may be the result of a departure from the environment of evolutionary adaptedness. Most models posit that the birth rate will stabilize at a low level indefinitely. Some dissenting scholars note that the modern environment is exerting evolutionary pressure for higher fertility, and that eventually due to individual natural selection or cultural selection, birth rates may rise again. Part of the "cultural selection" hypothesis is that the variance in birth rate between cultures is significant; for example, some religious cultures have a higher birth rate that is not accounted for by differences in income. In his book Shall the Religious Inherit the Earth?, Eric Kaufmann argues that demographic trends point to religious fundamentalists greatly increasing as a share of the population over the next century. Jane Falkingham of Southampton University has noted that "We've actually got population projections wrong consistently over the last 50 years... we've underestimated the improvements in mortality... but also we've not been very good at spotting the trends in fertility." 
In 2004 a United Nations office published its guesses for global population in the year 2300; estimates ranged from a "low estimate" of 2.3 billion (tending to −0.32% per year) to a "high estimate" of 36.4 billion (tending to +0.54% per year), which were contrasted with a deliberately "unrealistic" illustrative "constant fertility" scenario of 134 trillion (obtained if 1995–2000 fertility rates stay constant into the far future). == Effects on age structure == The decline in death rate and birth rate that occurs during the demographic transition may transform the age structure. When the death rate declines during the second stage of the transition, the result is primarily an increase in the younger population. This is because when the death rate is high (stage one), the infant mortality rate is very high, often above 200 deaths per 1000 children born. As the death rate falls or improves, this may lead to a lower infant mortality rate and increased child survival. Over time, as individuals with increased survival rates age, there may also be an increase in the number of older children, teenagers, and young adults. This implies that there is an increase in the fertile population proportion which, with constant fertility rates, may lead to an increase in the number of children born. This will further increase the growth of the child population. The second stage of the demographic transition, therefore, implies a rise in child dependency and creates a youth bulge in the population structure. As a population continues to move through the demographic transition into the third stage, fertility declines and the youth bulge prior to the decline ages out of child dependency into the working ages. This stage of the transition is often referred to as the golden age, and is typically when populations see the greatest advancements in living standards and economic development. However, further declines in both mortality and fertility will eventually result in an aging population, and a rise in the aged dependency ratio. An increase of the aged dependency ratio often indicates that a population has reached below replacement levels of fertility, and as result does not have enough people in the working ages to support the economy, and the growing dependent population. == Historical studies == === Britain === Between 1750 and 1975 England experienced the transition from high to low levels of both mortality and fertility. A major factor was the sharp decline in the death rate due to infectious diseases, which has fallen from about 11 per 1,000 to less than 1 per 1,000. By contrast, the death rate from other causes was 12 per 1,000 in 1850 and has not declined markedly. Scientific discoveries and medical breakthroughs did not, in general, contribute importantly to the early major decline in infectious disease mortality. === Ireland === In the 1980s and early 1990s, the Irish demographic status converged to the European norm. Mortality rose above the European Community average, and in 1991 Irish fertility fell to replacement level. The peculiarities of Ireland's past demography and its recent rapid changes challenge established theory. The recent changes have mirrored inward changes in Irish society, with respect to family planning, women in the work force, the sharply declining power of the Catholic Church, and the emigration factor. === France === France displays real divergences from the standard model of Western demographic evolution. 
The uniqueness of the French case arises from its specific demographic history, its historic cultural values, and its internal regional dynamics. France's demographic transition was unusual in that the mortality and the natality decreased at the same time; thus there was no demographic boom in the 19th century. France's demographic profile is similar to its European neighbors and to developed countries in general, yet it seems to be staving off the population decline of Western countries. With 62.9 million inhabitants in 2006, it was the second most populous country in the European Union, and it displayed a certain demographic dynamism, with a growth rate of 2.4% between 2000 and 2005, above the European average. More than two-thirds of that growth can be ascribed to a natural increase resulting from high fertility and birth rates. In contrast, France is one of the developed nations whose migratory balance is rather weak, which is a distinctive feature at the European level. Several interrelated reasons account for such singularities, in particular the impact of pro-family policies accompanied by greater numbers of unmarried households and out-of-wedlock births. These general demographic trends parallel equally important changes in regional demographics. Since 1982, the same significant tendencies have occurred throughout mainland France: demographic stagnation in the least-populated rural regions and industrial regions in the northeast, with strong growth in the southwest and along the Atlantic coast, plus dynamism in metropolitan areas. Shifts in population between regions account for most of the differences in growth. The varying demographic evolution of the regions can be analyzed through the filter of several parameters, including residential facilities, economic growth, and urban dynamism, which yield several distinct regional profiles. The distribution of the French population therefore seems increasingly defined not only by interregional mobility but also by the residential preferences of individual households. These challenges, linked to configurations of population and the dynamics of distribution, inevitably raise the issue of town and country planning. The most recent census figures show that an outpouring of the urban population means that fewer rural areas are continuing to register a negative migratory flow – two-thirds of rural communities have shown some since 2000. The spatial demographic expansion of large cities amplifies the process of peri-urbanization yet is also accompanied by movement of selective residential flow, social selection, and sociospatial segregation based on income. === Asia === McNicoll (2006) examines the common features behind the striking changes in health and fertility in East and Southeast Asia in the 1960s–1990s, focusing on seven countries: Taiwan and South Korea ("tiger" economies), Thailand, Malaysia, and Indonesia ("second wave" countries), and China and Vietnam ("market-Leninist" economies). Demographic change can be seen as a by-product of social and economic development and, in some cases, was accompanied by strong government pressure. An effective, often authoritarian, local administrative system can provide a framework for promotion and services in health, education, and family planning. Economic liberalization increased economic opportunities and risks for individuals, while also increasing the price and often reducing the quality of these services, all affecting demographic trends. 
==== India ==== Goli and Arokiasamy (2013) indicate that India has a sustainable demographic transition beginning in the mid-1960s and a fertility transition beginning after 1965. As of 2013, India is in the latter half of the third stage of the demographic transition, with a population of 1.23 billion. It is nearly 40 years behind in the demographic transition process compared to EU countries, Japan, etc. The present demographic transition stage of India, along with its higher population base, will yield a rich demographic dividend in future decades. ==== Korea ==== Cha (2007) analyzes a panel data set to explore how industrial revolution, demographic transition, and human capital accumulation interacted in Korea from 1916 to 1938. Income growth and public investment in health caused mortality to fall, which suppressed fertility and promoted education. Industrialization, a rising skill premium, and a closing gender wage gap further induced parents to opt for child quality. Expanding demand for education was accommodated by an active public school building program. The interwar agricultural depression aggravated traditional income inequality, raising fertility and impeding the spread of mass schooling. Landlordism collapsed in the wake of de-colonization, and the consequent reduction in inequality accelerated human and physical capital accumulation, hence leading to growth in South Korea. ==== China ==== China experienced a demographic transition with a high death rate and a low fertility rate from 1959 to 1961 due to the Great Famine. However, as a result of economic improvement, the birth rate increased and the mortality rate declined in China before the early 1970s. In the 1970s, China's birth rate fell at a pace not experienced by any other population in a comparable time span. The birth rate fell from 6.6 births per woman before 1970 to 2.2 births per woman in 1980. The rapid fertility decline in China was driven by government policy: in particular the "later, longer, fewer" policy of the early 1970s, followed in the late 1970s by the one-child policy, both of which strongly influenced China's demographic transition. As the demographic dividend gradually disappeared, the government began to relax the one-child policy in 2011 and fully implemented the two-child policy from 2015. The two-child policy had some positive effect on fertility, which rose until 2018. However, fertility began to decline after 2018, while mortality showed no significant change over the past 30 years. === Madagascar === Campbell has studied the demography of 19th-century Madagascar in the light of demographic transition theory. Both supporters and critics of the theory hold to an intrinsic opposition between human and "natural" factors, such as climate, famine, and disease, influencing demography. They also suppose a sharp chronological divide between the precolonial and colonial eras, arguing that whereas "natural" demographic influences were of greater importance in the former period, human factors predominated thereafter. Campbell argues that in 19th-century Madagascar the human factor, in the form of the Merina state, was the predominant demographic influence. However, the impact of the state was felt through natural forces, and it varied over time. 
In the late 18th and early 19th centuries Merina state policies stimulated agricultural production, which helped to create a larger and healthier population and laid the foundation for Merina military and economic expansion within Madagascar. From 1820, the cost of such expansionism led the state to increase its exploitation of forced labor at the expense of agricultural production and thus transformed it into a negative demographic force. Infertility and infant mortality, which were probably more significant influences on overall population levels than the adult mortality rate, increased from 1820 due to disease, malnutrition, and stress, all of which stemmed from state forced labor policies. Available estimates indicate little if any population growth for Madagascar between 1820 and 1895. The demographic "crisis" in Africa, ascribed by critics of the demographic transition theory to the colonial era, stemmed in Madagascar from the policies of the imperial Merina regime, which in this sense formed a link to the French regime of the colonial era. Campbell thus questions the underlying assumptions governing the debate about historical demography in Africa and suggests that the demographic impact of political forces be reevaluated in terms of their changing interaction with "natural" demographic influences. === Russia === Russia entered stage two of the transition in the 18th century, simultaneously with the rest of Europe, though the effect of transition remained limited to a modest decline in death rates and steady population growth. The population of Russia nearly quadrupled during the 19th century, from 30 million to 133 million, and continued to grow until the First World War and the turmoil that followed. Russia then quickly transitioned through stage three. Though fertility rates rebounded initially and almost reached 7 children/woman in the mid-1920s, they were depressed by the 1931–33 famine, crashed due to the Second World War in 1941, and only rebounded to a sustained level of 3 children/woman after the war. By 1970 Russia was firmly in stage four, with crude birth rates and crude death rates on the order of 15/1000 and 9/1000 respectively. Bizarrely, however, the birth rate entered a state of constant flux, repeatedly surpassing the 20/1000 as well as falling below 12/1000. In the 1980s and 1990s, Russia underwent a unique demographic transition; observers call it a "demographic catastrophe": the number of deaths exceeded the number of births, life expectancy fell sharply (especially for males) and the number of suicides increased. From 1992 through 2011, the number of deaths exceeded the number of births; from 2011 onwards, the opposite has been the case. === United States === Greenwood and Seshadri (2002) show that from 1800 to 1940 there was a demographic shift from a mostly rural US population with high fertility, with an average of seven children born per white woman, to a minority (43%) rural population with low fertility, with an average of two births per white woman. This shift resulted from technological progress. A sixfold increase in real wages made children more expensive in terms of forgone opportunities to work and increases in agricultural productivity reduced rural demand for labor, a substantial portion of which traditionally had been performed by children in farm families. A simplification of the DTM theory proposes an initial decline in mortality followed by a later drop in fertility. The changing demographics of the U.S. 
in the last two centuries did not parallel this model. Beginning around 1800, there was a sharp fertility decline; at this time, an average woman usually produced seven births per lifetime, but by 1900 this number had dropped to nearly four. A mortality decline was not observed in the U.S. until almost 1900, a hundred years after the drop in fertility. However, this late decline occurred from a very low initial level. During the 17th and 18th centuries, crude death rates in much of colonial North America ranged from 15 to 25 deaths per 1000 residents per year (levels of up to 40 per 1000 being typical during stages one and two). Life expectancy at birth was on the order of 40 and, in some places, reached 50, and a resident of 18th-century Philadelphia who reached age 20 could have expected, on average, an additional 40 years of life. This phenomenon is explained by the pattern of colonization of the United States. The sparsely populated interior of the country allowed ample room to accommodate all the "excess" people, counteracting the mechanisms (spread of communicable diseases due to overcrowding, low real wages and insufficient calories per capita due to the limited amount of available agricultural land) that led to high mortality in the Old World. With low mortality but stage 1 birth rates, the United States necessarily experienced exponential population growth (from less than 4 million people in 1790, to 23 million in 1850, to 76 million in 1900). The only area where this pattern did not hold was the American South. High prevalence of deadly endemic diseases such as malaria kept mortality as high as 45–50 per 1000 residents per year in 18th-century North Carolina. In New Orleans, mortality remained so high (mainly due to yellow fever) that the city was characterized as the "death capital of the United States" – at the level of 50 per 1000 population or higher – well into the second half of the 19th century. Today, the U.S. is recognized as having both low fertility and mortality rates. Specifically, birth rates stand at 14 per 1000 per year and death rates at 8 per 1000 per year. == Critical evaluation == Because the DTM is only a model, it cannot necessarily predict the future, but it does suggest an underdeveloped country's future birth and death rates, together with the total population size. Most particularly, of course, the DTM makes no comment on change in population due to migration. It is not necessarily applicable at very high levels of development. DTM does not account for recent phenomena such as AIDS; in these areas HIV has become the leading source of mortality. Some trends in waterborne bacterial infant mortality are also disturbing in countries like Malawi, Sudan and Nigeria; for example, progress in the DTM clearly arrested and reversed between 1975 and 2005. DTM assumes that population changes are induced by industrial changes and increased wealth, without taking into account the role of social change in determining birth rates, e.g., the education of women. In recent decades more work has been done on developing the social mechanisms behind the transition. DTM assumes that the birth rate is independent of the death rate. Nevertheless, demographers maintain that there is no historical evidence for society-wide fertility rates rising significantly after high mortality events. Notably, some historic populations have taken many years to replace lives after events such as the Black Death. 
Some have claimed that DTM does not explain the early fertility declines in much of Asia in the second half of the 20th century or the delays in fertility decline in parts of the Middle East. Nevertheless, the demographer John C. Caldwell has suggested that the rapid decline in fertility in some developing countries, compared with Western Europe, the United States, Canada, Australia and New Zealand, is mainly due to government programs and a massive investment in education both by governments and parents. DTM does not explain well the impact of government policies on the birth rate. In some developing countries, governments have implemented policies to restrain fertility growth. China, for example, underwent a fertility transition beginning around 1970, and the Chinese experience was largely shaped by government policy, in particular the "later, longer, fewer" policy of the early 1970s and the one-child policy enacted in 1979, both of which encouraged people to have fewer children, later in life. This fertility transition stimulated economic growth and influenced the demographic transition in China. == Second demographic transition == The Second Demographic Transition (SDT) is a conceptual framework first formulated in 1986 by Ron Lesthaeghe and Dirk van de Kaa. SDT addressed the changes in the patterns of sexual and reproductive behavior which occurred in North America and Western Europe in the period from about 1963, when the birth control pill and other cheap effective contraceptive methods such as the IUD were adopted by the general population, to the present. Combined with the sexual revolution and the increased role of women in society and the workforce, the resulting changes have profoundly affected the demographics of industrialized countries, resulting in a sub-replacement fertility level. The changes, including increased numbers of women choosing to not marry or have children, increased cohabitation outside marriage, increased childbearing by single mothers, increased participation by women in higher education and professional careers, and other changes, are associated with increased individualism and autonomy, particularly of women. Motivations have changed from traditional and economic ones to those of self-realization. In 2015, Nicholas Eberstadt, political economist at the American Enterprise Institute in Washington, described the Second Demographic Transition as one in which "long, stable marriages are out, and divorce or separation are in, along with serial cohabitation and increasingly contingent liaisons." S. Philip Morgan suggested a direction for the future development of SDT: social demographers should explore a theory that is not based on stages and does not set out a single line of development toward some final stage, which in the case of SDT is a hypothesis resembling the advanced Western countries that most embrace postmodern values. However, the Second Demographic Transition (SDT) theory has not proposed a single line or teleological evolution based on phases, as was the case for the theories of the First Demographic Transition (FDT). Instead, and this is strikingly in evidence in Lesthaeghe's empirical studies, major attention is being paid to historical path dependency, heterogeneity in the SDT patterns of development, forms of family and lineage organisation, and economic and especially ideational developments. For instance, the European pattern of almost simultaneous manifestation of all SDT demographic characteristics is not being replicated elsewhere. 
The Latin American countries experienced a major growth in pre-marital cohabitation, in which the upper social classes were catching up with pre-existing higher levels among the less educated and some ethnic groups. But so far the other major SDT indicator, namely fertility postponement, is largely absent. The opposite holds for Asian patriarchal societies, which have traditionally strong rules of arranged endogamous marriage and male dominance. In industrialised East Asian societies a major postponement of union formation and parenthood took place, leading to an expansion in the numbers of singles and to very low levels of sub-replacement fertility. In such historically patriarchal societies, free partner choice is to be avoided, and hence there is a strong stigma against pre-marital cohabitation. However, after the turn of the century it was noted that cohabitation did develop in Japan, China, Taiwan and the Philippines. The proportions are still moderate, and pregnancies in cohabiting unions are typically followed by shotgun marriages or abortions. Parenthood among cohabitants is still very rare. Finally, Hindu and Muslim countries can reach replacement-level fertility, but no significant fertility postponement or take-off of pre-marital cohabitation has occurred. Hence they are completing the FDT and are not in any type of initiation phase of the SDT. Sub-Saharan African populations exhibit yet another sui generis pattern. These societies have exogamous union formation and weaker marriage institutions. Under these conditions cohabitation seems to grow among poorer and wealthier population segments alike. Among the former, cohabitation reflects the "Pattern of Disadvantage", and among the latter, cohabitation is a means of avoiding an inflated bride price. However, Sub-Saharan African populations have not yet completed the FDT fertility transition, and several West African ones have barely started it. Hence, there is a striking disconnection between evolutions of fertility and of partnership formation. The conclusion is that the unfolding of the SDT is characterised by just as much pattern heterogeneity as was the by now historical FDT. == See also == Carrying capacity == Footnotes == == References == Caldwell, John C. (1976). "Toward a restatement of demographic transition theory". Population and Development Review. 2 (3/4): 321–66. doi:10.2307/1971615. JSTOR 1971615. ————————; Bruce K Caldwell; Pat Caldwell; Peter F McDonald; Thomas Schindlmayr (2006). Demographic Transition Theory. Dordrecht, the Netherlands: Springer. p. 418. ISBN 978-1-4020-4373-4. Chesnais, Jean-Claude. The Demographic Transition: Stages, Patterns, and Economic Implications: A Longitudinal Study of Sixty-Seven Countries Covering the Period 1720–1984. Oxford U. Press, 1993. 633 pp. Coale, Ansley J. 1973. "The demographic transition," IUSSP Liege International Population Conference. Liege: IUSSP. Volume 1: 53–72. ————————; Anderson, Barbara A; Härm, Erna (1979). Human Fertility in Russia since the Nineteenth Century. Princeton, NJ: Princeton University Press. Coale, Ansley J; Watkins, Susan C, eds. (1987). The Decline of Fertility in Europe. Princeton, NJ: Princeton University Press. Davis, Kingsley (1945). "The World Demographic Transition". Annals of the American Academy of Political and Social Science. 237 (237): 1–11. doi:10.1177/000271624523700102. JSTOR 1025490. S2CID 145140681. Classic article that introduced the concept of the transition. Davis, Kingsley. 1963. 
"The theory of change and response in modern demographic history." Population Index 29(October): 345–66. Kunisch, Sven; Boehm, Stephan A.; Boppel, Michael (eds): From Grey to Silver: Managing the Demographic Change Successfully, Springer-Verlag, Berlin Heidelberg 2011, ISBN 978-3-642-15593-2 Friedlander, Dov; S Okun, Barbara; Segal, Sharon (1999). "The Demographic Transition Then and Now: Processes, Perspectives, and Analyses". Journal of Family History. 24 (4): 493–533. doi:10.1177/036319909902400406. ISSN 0363-1990. PMID 11623954. S2CID 36680992., full text in Ebsco. Galor, Oded (2005). "The Demographic Transition and the Emergence of Sustained Economic Growth" (PDF). Journal of the European Economic Association. 3 (2–3): 494–504. doi:10.1162/jeea.2005.3.2-3.494. hdl:10419/80187. ———————— (2008). "The Demographic Transition". New Palgrave Dictionary of Economics (2nd ed.). Macmillan.. Gillis, John R., Louise A. Tilly, and David Levine, eds. The European Experience of Declining Fertility, 1850–1970: The Quiet Revolution. 1992. Greenwood, Jeremy; Seshadri, Ananth (2002). "The US Demographic Transition". American Economic Review. 92 (2): 153–59. CiteSeerX 10.1.1.13.6505. doi:10.1257/000282802320189168. JSTOR 3083393. Harbison, Sarah F.; Robinson, Warren C. (2002). "Policy Implications of the Next World Demographic Transition". Studies in Family Planning. 33 (1): 37–48. doi:10.1111/j.1728-4465.2002.00037.x. JSTOR 2696331. PMID 11974418. Hirschman, Charles (1994). "Why fertility changes". Annual Review of Sociology. 20: 203–233. doi:10.1146/annurev.so.20.080194.001223. PMID 12318868. Jones, GW, ed. (1997). The Continuing Demographic Transition. et al. Korotayev, Andrey; Malkov, Artemy; Khaltourina, Daria (2006). Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow, Russia: URSS. p. 128. ISBN 978-5-484-00414-0. Kirk, Dudley (1996). "The Demographic Transition". Population Studies. 50 (3): 361–87. doi:10.1080/0032472031000149536. JSTOR 2174639. PMID 11618374. Borgerhoff, Luttbeg B; Borgerhoff Mulder, M; Mangel, MS (2000). "To marry or not to marry? A dynamic model of marriage behavior and demographic transition". In Cronk, L; Chagnon, NA; Irons, W (eds.). Human behavior and adaptation: An anthropological perspective. New York: Aldine Transaction. p. 528. ISBN 978-0-202-02044-0. Landry, Adolphe, 1982 [1934], La révolution démographique – Études et essais sur les problèmes de la population, Paris, INED-Presses Universitaires de France McNicoll, Geoffrey (2006). "Policy Lessons of the East Asian Demographic Transition". Population and Development Review. 32 (1): 1–25. doi:10.1111/j.1728-4457.2006.00103.x. JSTOR 20058849. Mercer, Alexander (2014), Infections, Chronic Disease, and the Epidemiological Transition. Rochester, NY: University of Rochester Press/Rochester Studies in Medical History, ISBN 978-1-58046-508-3 Montgomery, Keith. "The Demographic Transition". Geography. Archived from the original on 2019-06-05. Retrieved 2014-04-25.. Notestein, Frank W. 1945. "Population — The Long View," in Theodore W. Schultz, Ed., Food for the World. Chicago: University of Chicago Press. Saito, Oasamu (1996). "Historical Demography: Achievements and Prospects". Population Studies. 50 (3): 537–53. doi:10.1080/0032472031000149606. ISSN 0032-4728. JSTOR 2174646. PMID 11618380.. Soares, Rodrigo R., and Bruno L. S. Falcão. "The Demographic Transition and the Sexual Division of Labor," Journal of Political Economy, Vol. 116, No. 6 (Dec., 2008), pp. 
1058–104 Szreter, Simon (1993). "The Idea of Demographic Transition and the Study of Fertility: A Critical Intellectual History". Population and Development Review. 19 (4): 659–701. doi:10.2307/2938410. JSTOR 2938410.. ————————; Nye, Robert A; van Poppel, Frans (2003). "Fertility and Contraception During the Demographic Transition: Qualitative and Quantitative Approaches". Journal of Interdisciplinary History. 34 (2): 141–54. doi:10.1162/002219503322649453. ISSN 0022-1953. S2CID 54023512., full text in Project Muse and Ebsco Thompson, Warren S (1929). "Population". American Journal of Sociology. 34 (6): 959–75. doi:10.1086/214874. S2CID 222441259. After the next World War, we will see Germany lose more women and children and soon start again from a developing stage. World Bank, Fertility Rate
Wikipedia/Demographic_transition
Proto-industrialization is the regional development, alongside commercial agriculture, of rural handicraft production for external markets. Cottage industries in parts of Europe between the 16th and 19th centuries had long been a niche topic of study. In the early 1970s, some economic historians introduced the label "proto-industrialization", arguing that these developments were the main cause of the economic and demographic growth and social change that occurred in Europe over this period, and of the Industrial Revolution that followed. Several theories were proposed to explain the mechanisms of this proposed causation. Proto-industrialization theories have been challenged by other historians, who stress the importance of other factors that are downplayed in proto-industrialization theories. Empirical studies have demonstrated a variety of economic and demographic responses to proto-industrialization. In several cases it led to de-industrialization. Later researchers suggested that similar conditions had arisen in other parts of the world, including Mughal India and Song China. A proto-industrial and even partially industrial economy has moreover been suggested for the Roman Empire between the 1st and 4th centuries AD. == Theories == The concept was developed and named by Franklin Mendels in his 1969 doctoral dissertation on the rural linen industry in 18th-century Flanders and popularized in his 1972 article based on that work. Mendels argued that using surplus labor, initially available during slow periods of the agricultural seasons, increased rural incomes, broke the monopolies of urban guild systems and weakened rural traditions that had limited population growth. The resulting increase in population led to further growth in production, in a self-sustaining process that, Mendels claimed, created the labour, capital and entrepreneurial skill that led to industrialization. Other historians expanded on these ideas in the 1970s and 1980s. In their 1979 book, Peter Kriedte, Hans Medick and Jürgen Schlumbohm expanded the theory into a broad account of the transformation of European society from feudalism to industrial capitalism. They viewed proto-industrialization as part of the second phase in this transformation, following the weakening of the manorial system in the High Middle Ages. Later historians identified similar situations in other parts of the world, including India, China, Japan and the former Muslim world. The applicability of proto-industrialization in Europe has since been challenged. Martin Daunton, for example, argues that proto-industrialisation "excludes too much" to fully explain the expansion of industry: not only do proponents of proto-industrialisation ignore the vital town-based industries in pre-industrial economies, but they also ignore "rural and urban industry based upon non-domestic organisation", referring to how mines, mills, forges and furnaces fit into the agrarian economy. Clarkson has criticized the tendency to categorize all types of pre-industrial manufacturing as proto-industries. Sheilagh Ogilvie discussed the historiography of proto-industrialisation, and observed that scholars have re-evaluated pre-factory industrial production, but have seen it emerge as a phenomenon of its own rather than just a precursor to industrialisation. According to Ogilvie, a major perspective "emphasizes long-term continuities in the economic and social development of Europe between the medieval period and the nineteenth century." 
Some scholars have defended the original conceptualisation of proto-industrialisation or extended it. == Europe == Mendels' initial coining of "proto-industrialisation" referred to commercial activities in 18th-century Flanders, and much study has focused on the region. Sheilagh Ogilvie wrote, "Proto-industries arose in almost every part of Europe in the two or three centuries before industrialization." Rural proto-industries were often affected by guilds, which retained major influence over rural manufacturing in Switzerland (until the early 17th century), France and Westphalia (until the later 17th century), Bohemia and Saxony (until the early 18th century), Austria, Catalonia, and the Rhine area (until the later 18th century) and Sweden and Württemberg (into the 19th century). In other areas of Europe, guilds excluded all forms of proto-industry, including in Castile and parts of northern Italy. Political struggles occurred between proto-industries and regional guilds that sought to control them, as well as against urban privileges or customs privileges. Bas van Bavel argued that some non-agricultural activities in the Low Countries reached a proto-industrial extent as early as the 13th century, though with regional and temporal differences, with a peak in the 16th century. Van Bavel observes that Flanders and Holland developed as urbanised regions (a third of Flanders' population being urban in the 15th century, and over half of Holland's population in the 16th century) with a commercialised countryside and developed export markets. Flanders saw the predominance of labor-intensive rural activities such as textile production, while Holland saw the predominance of capital-intensive urban activities such as shipbuilding. Proto-industrial activities in Holland included "glue-production, lime-burning, brick work, peat digging, barging, shipbuilding, and textile industries" targeted for export. Historian Julie Marfany also put forward a theory of proto-industrialisation, observing proto-industrial textile production in Igualada, Catalonia, from 1680, and its demographic effects, including increased population growth comparable to that of the later industrial revolution. Marfany also suggests that a somewhat different mode of capitalism developed there, owing to differences in the family unit compared with Northern Europe. == Mughal Empire == Some historians have identified proto-industrialization in early modern South Asia, mainly in the wealthiest and largest subdivision of the Mughal Empire, the Bengal Subah. The eastern part of Bengal (modern Bangladesh) was globally prominent in industries such as textile manufacturing and shipbuilding, and it was a major exporter of silk and cotton textiles, steel, saltpeter, and agricultural and industrial produce. The region alone accounted for 40% of Dutch imports from outside Europe. == Song China == Economic development in the Song dynasty (960–1279) has often been compared to proto-industrialization or an early capitalism. The commercial expansion began in the Northern Song dynasty and was catalysed by migrations in the Southern Song dynasty. With the growth of the production of non-agricultural goods in a cottage industry context (such as silk), and the production of cash crops that were sold instead of consumed (such as tea), market forces were extended into the life of ordinary people. There was a rise of industrial and commercial sectors, and profit-making commercialisation emerged. 
There were parallel government and private enterprises in iron and steel production, while there was strict government control of some industries such as sulfur and saltpetre production. Historian Robert Hartwell estimated that per capita iron output in Song China rose sixfold between 806 and 1078 based on Song-era receipts. Hartwell estimated that China's industrial output in 1080 resembled that of Europe in 1700. An arrangement of allowing competitive industry to flourish in some regions while setting up its opposite of strict government-regulated and monopolized production and trade in others was prominent in iron manufacturing as in other sectors. In the beginning of the Song, the government supported competitive silk mills and brocade workshops in the eastern provinces and in the capital city of Kaifeng. However, at the same time the government established strict legal prohibition on the merchant trade of privately produced silk in Sichuan province. This prohibition dealt an economic blow to Sichuan that caused a small rebellion (which was subdued), yet Song Sichuan was well known for its independent industries producing timber and cultivated oranges. Many of the economic gains were lost during the Yuan dynasty, taking centuries to recover. Coal mining was a cutting-edge sector in the Song era, but declined with the Mongol conquest. Iron production recovered to an extent during Yuan, based mainly on charcoal and wood. == See also == Barbegal aqueduct and mills Pre-industrial society Putting-out system Sprouts of capitalism Venetian Arsenal History of the cotton industry in Catalonia == References == === Works cited === == Further reading == Hudson, P. (1990). "Proto-industrialisation". Recent Findings of Research in Economics and Social History. 10: 1–4.
Wikipedia/Proto-industrialization
The Argentine energy crisis was a natural gas supply shortage experienced by Argentina in 2004. After the recession triggered by the Argentine economic crisis (1999–2002), Argentina's energy demands grew quickly as industry recovered, but extraction and transportation of natural gas, a cheap and relatively abundant fossil fuel, did not match the surge. According to estimates, 50% of the electricity generated in Argentina depends on gas-powered plants. The national energy matrix had no emergency reserves and by 2004 it was functioning at the top of its capacity. At this point, barely emerging from the seasonal low demand caused by summer, many industrial facilities and power plants started suffering intermittent cuts in their supply of natural gas. Between February and May the cuts amounted to an average of 9.5 million m³ a day, about 13% of industrial demand, and by the end of May they grew to a maximum of 22 million m³. The most seriously affected regions were the capital, certain regions of the province of Buenos Aires, and the province of La Pampa. As winter approached, the Argentine government announced that it would restrict natural gas exports in order to preserve the supply for internal consumption, both domestic and industrial, in compliance with the Hydrocarbons Law. These export cuts would seriously harm Chile and affect Uruguay and Brazil. The Chilean Minister of Economy and Energy, Jorge Rodríguez, warned Argentina that supply contracts with Chilean companies must be fulfilled. This caused a mild diplomatic crisis. Chile imports more than 90% of its natural gas from Argentina and depends heavily on it to generate electricity; it has shifted the focus from coal and oil towards gas, and had five gas pipelines built for the specific purpose of getting gas from Argentina. == Causes == The energy crisis was blamed on a number of factors. Former Argentine President Néstor Kirchner attributed it to a lack of investment on the part of the private companies that extract the resource (such as Repsol YPF), and the concomitant lack of pressure from past governments on those companies. The private corporations contended that their profits after the collapse of the Argentine economy were severely hurt by the freezing of domestic and industrial tariffs since 2002. Natural gas remained at the same price during the inflationary process caused by the devaluation of the Argentine peso, while the prices of gasoline and diesel were adjusted upwards, which increased the demand for gas as a cheap alternative fuel and at the same time discouraged its production. In addition to this, a larger part of the supply of natural gas was required to compensate for a smaller yield of hydroelectricity. The exporters complained that heavy export tariffs compounded the price freeze and prevented them from investing in further exploration and exploitation, thus leaving them unable to keep up with demand. However, the government and critics of the neoliberal model of the Menem administration point out that the privatized companies obtained huge profits during the 1990s. == Remedies == In order to diminish the impact of the crisis, three measures were suggested: buying natural gas from Bolivia, which has abundant reserves of it; directly buying electricity from Brazil, which generates a large part of it using hydroelectric power plants; and importing oil from Venezuela. For historical reasons, Bolivia would not sell natural gas to Chile. Moreover, it lacks the infrastructure to convey it. 
A projected gas pipeline that would transport massive amounts of gas to Argentina was delayed by the critical political situation in Bolivia during 2003. Moreover, some people and organizations in Bolivia have expressed strong disagreement about the idea of exporting gas, calling the energy crisis "a fiction". The Venezuelan Chávez administration, which at the time was politically close to the Argentine government, signed energy accords that included sending fuel oil tankers to Argentina at reduced cost, through PDVSA (the Venezuelan state oil company). Fuel oil (imported or otherwise) is, in any case, considerably more expensive than natural gas. In addition to industrial supply, Argentina employs compressed natural gas for stoves, ovens, etc., and as fuel for over 1.4 million natural gas vehicles. While the possibility of restricting domestic usage was considered, it was deemed unnecessary and disruptive. As a response to the 2001 economic crisis, electricity tariffs were converted to the Argentine peso and frozen in January 2002 through the Public Emergency and Exchange Regime Law. Together with high inflation (see Economy of Argentina) and the devaluation of the peso, many companies in the sector had to deal with high levels of debt in foreign currency under a scenario in which their revenues remained stable while their costs increased. This situation led to severe underinvestment and an inability to keep up with increasing demand, factors that contributed to the 2003–2004 energy crisis. Since 2003, the government has been in the process of introducing modifications that allow for tariff increases. Industrial and commercial consumers' tariffs have already been raised (nearly 100% in nominal terms and 50% in real terms), but residential tariffs still remain the same. Nevertheless, the national government even tried to profit from the crisis by creating a new oil company, Enarsa, with 53% of state control and full exploitation rights over offshore areas. == Winter 2005 == As 2004 passed with no major disruptions, some people claimed that the so-called "energy crisis" had in fact turned out to be a minor complication, inflated by the government and the media. In a broader context, though, it is still true that investment in the exploitation of energy resources, as well as in energy production and distribution, is insufficient. In March 2005, President Kirchner admitted that "for a long time the possibility will remain that we must move on the brink [of a crisis]". However, the government also pointed out that remedies are in the works, and that Argentina is better prepared than in 2004 to face problems with energy generation. In the meantime, fuel oil supply from Venezuela has continued, amounting to 50 million tonnes sent in two ships (in April and May) by PDVSA, in a coordinated effort with the Brazilian oil company Petrobras and the Electrical Market Management Company of Argentina (Cammesa). Analysts and officials, such as former President of Uruguay Jorge Batlle, have remarked that a full-fledged protocol for the energy integration of Mercosur should be outlined and brought into action as soon as possible to coordinate energy production and distribution in the region. == References == Argentina: crisis energética (26 March 2004) (in Spanish) Argentina teme crisis energética (9 May 2005) (in Spanish) Peligro... crisis energética (in Spanish) Crisis energética - Ficción asesina (in Spanish) ¿Cómo impacta la crisis energética argentina en el mercado eléctrico? 
- A paper by Carlos Santiago Valquez, Instituto de Economía y Finanzas, Facultad de Ciencias Económicas, Universidad Nacional de Córdoba, Argentina. (in Spanish) Ley Nº 17.319 - Text of the Hydrocarbons Law of Argentina. (in Spanish) IANGV - Worldwide statistics on natural gas vehicles. (in Spanish) == See also == Energy crisis
Wikipedia/2004_Argentine_energy_crisis
A labour revolt or workers' uprising is a period of civil unrest characterised by strong labour militancy and strike activity. The history of labour revolts often provides the historical basis for many advocates of Marxism, communism, socialism, anarchism, and human rights, with many instances occurring around the world in both the 19th and 20th centuries. == Labour revolts in France == The Canut Revolts in Lyons, France, were the first clearly defined worker uprisings of the Industrial Revolution. The first occurred in November 1831 and was followed by later revolts in 1834 and 1848. Following the closure of the national workshops after the 1848 revolution in Paris, there was an uprising in Paris in which 100,000 insurgents fought a three-day battle with the army, volunteers and reserve forces. The Paris Commune in France (1871) is hailed by both anarchists and socialists as the first assumption of power by the working class, but controversy over the policies implemented in the Commune contributed to the split between the two groups. == Labour revolts in the United States == The earliest revolts in the United States include the pockets of rebellion by slaves and servants acting together in actual uprisings or planned revolts throughout its colonial period. For instance, in a 1712 incident, 23 slaves killed nine whites in New York to avenge their harsh treatment. Slaves also joined farmers in several uprisings against the social system in which royal authorities, proprietors, trading companies and large landowners were in charge. One of the most dramatic was the uprising in the royal colony of Virginia in 1676, led by Nathaniel Bacon against the corrupt royal governor, Sir William Berkeley. The Great Railroad Strike of 1877 and the associated 1877 Shamokin Uprising occurred in the United States. The strike is considered the bloodiest labor-management confrontation in U.S. history. The uprising was in response to the railroad executives' decision to cut wages and lay off employees during the economic downturn caused by the Panic of 1873. The strike began in July 1877 when workers of the Baltimore & Ohio Railroad blocked railway traffic after the company imposed a 10 percent pay cut. It sparked similar movements among railroad workers everywhere, and an estimated 100,000 workers joined the uprising nationwide. The revolt included riots and destruction of railroad property and was met with violent crackdowns. These revolts spurred the growth of labour movements in the United States, which in turn led to fairer wages, better working conditions, and improved worker well-being. The Battle of Blair Mountain in Logan County, West Virginia, U.S. (1921), was the largest organised armed uprising in American labour history since the Civil War, and had a major impact on labour legislation in the United States. The confrontation was so violent that the then President Warren Harding ordered the aerial bombing of entrenched miner positions. == Labour revolts in Russia, Germany and Eastern Europe == The Revolution of 1905 in Russia led to the creation of the Saint Petersburg Soviet, or workers' council, which became the model for most communist revolutionary activity. The Soviet was revived in the Russian Revolution, and the model was repeated in the German Revolution of 1918–19, the Bavarian Soviet Republic and the Hungarian Soviet Republic. 
Some revolutionary activity within the Eastern Bloc resembled labour revolts, such as the Uprising of 1953 in East Germany, the Hungarian Revolution of 1956, and the Polish 1970 protests, although many communists would dispute this as "counter-revolutionary" activity. == Labour revolts in Great Britain == Red Clydeside was a period of labour and political militancy in the city of Glasgow, Scotland, between the 1910s and the 1930s. Most famously, this resulted in the raising of the red flag during the Battle of George Square. == Labour revolts in Spain == The Asturian miners' strike of 1934 == Labour revolts elsewhere == Some observers claimed that the protests of 1968 were part of a "revolutionary wave", with much of the activity motivated by students. Other examples include the Gwangju massacre in South Korea (1980); the Nghe-Tinh Revolt in French Indochina (1930–31); the Brazilian Anarchist Uprising (1917–18); and the Saigon Commune, Vietnam (1945). == See also == List of peasant revolts; Proletarian revolution; General strike; Cuno strikes; Slave rebellion == References == == External links == The Nghe-Tinh Revolt; 1917–1918: The Brazilian anarchist uprising
Wikipedia/Industrial_unrest
Crisis theory, concerning the causes and consequences of the tendency for the rate of profit to fall in a capitalist system, is associated with the Marxian critique of political economy, and was further popularised through Marxist economics. == History == Earlier analysis by Jean Charles Léonard de Sismondi provided the first suggestions of the systemic roots of crisis. "The distinctive feature of Sismondi's analysis is that it is geared to an explicit dynamic model in the modern sense of this phrase ... Sismondi's great merit is that he used, systematically and explicitly, a schema of periods, that is, that he was the first to practice the particular method of dynamics that is called period analysis". Marx praised and built on Sismondi's theoretical insights. Rosa Luxemburg and Henryk Grossman both subsequently drew attention to Sismondi's work, both for its account of the nature of capitalism and as a reference point for Karl Marx. Grossman in particular pointed out how Sismondi had contributed to the development of a series of Marx's concepts, including crises as a necessary feature of capitalism, arising from its contradictions between forces and relations of production, use and exchange value, production and consumption, capital and wage labor. His "inkling ... that the bourgeois forms are only transitory" was also distinctive. John Stuart Mill, in Of the Tendency of Profits to a Minimum (Chapter IV of Book IV of his Principles of Political Economy) and Chapter V, Consequences of the Tendency of Profits to a Minimum, provides a conspectus of the then accepted understanding of a number of the key elements, after David Ricardo, but without Karl Marx's theoretical working out of the theory that Frederick Engels posthumously published in Capital, Volume III. Marx's crisis theory, embodied in " ... the law of profitability did not appear until the publication of [Capital] Volume Three in 1894. Grundrisse was not available to anybody until well into the 20th Century ... " and therefore was only partially understood even among leading Marxists at the beginning of the twentieth century. His notes, the 'Books of Crisis' (Notebooks B84, B88 and B91), remain unpublished and have seldom been referred to. A relatively small group including Rosa Luxemburg and Lenin attempted to defend the revolutionary implications of the theory, while others, first Eduard Bernstein and then Rudolf Hilferding, argued against its continued applicability, and thereby founded one of the mainstreams of revision of the interpretation of Marx's ideas after Marx. Henry Hyndman had written a short history of the crises of the 19th century in 1892, attempting to present, popularise and defend Marx's theory of crisis in lectures delivered in 1893 and 1894 and published in 1896. Max Beer also asserted the centrality of Marx's crisis theory in his pedagogic contributions The Life and Teaching of Karl Marx (1925) and his A Guide to the Study of Marx: An Introductory Course for Classes and Study Circles (1924). In the late 1920s and early 1930s, Max Beer worked at the Institut für Sozialforschung and was a friend of Henryk Grossman. It was Henryk Grossman in 1929 who later most successfully rescued Marx's theoretical presentation ... 'he was the first Marxist to systematically explore the tendency for the organic composition of capital to rise and hence for the rate of profit to fall as a fundamental feature of Marx's explanation of economic crises in Capital.' 
Apparently entirely independently, Samezō Kuruma was also in 1929 drawing attention to the decisive importance of crisis theory in Marx's writings, and made the explicit connection between crisis theory and the theory of imperialism. Following the extensive setbacks to independent working class politics, and the widespread destruction of people, property and capital value, the 1930s and '40s saw attempts to reformulate Marx's analysis with less revolutionary consequences, for example in Joseph Schumpeter's concept of creative destruction and his presentation of Marx's crisis theory as a prefiguration of aspects of what Schumpeter, and others, championed as merely a theory of the business cycle. "... more than any other economist [Marx] identified cycles with the process of production and operation of additional plant and equipment". A survey of the competing theories of crisis in the different strands of political economy and economics was provided by Anwar Shaikh in 1978 and by Ernest Mandel in his 'Introduction' to the Penguin edition of Marx's Capital Volume III, particularly in the section 'Marxist theories of crisis' (p. 38 et seq.), where it appears that Mandel says more about the theoretical confusion on this question at that time, even among thoughtful and influential Marxists, than he offers by way of an excursus or introduction to Marx's crisis theory. There have been attempts, particularly in periods of capitalist growth and expansion, most notably in the long post-war boom, both to explain the phenomenon and to argue that Marx's strong statements of its 'lawlike' fundamental character under capitalism have been overcome in practice, in theory or both. As a result, there have been persistent challenges to this aspect of Marx's theoretical achievement and reputation. Keynesians argue that a "crisis" may refer to an especially sharp bust cycle of the regular boom and bust pattern of "chaotic" capitalist development, which, if no countervailing action is taken, could continue to develop into a recession or depression. It continues to be argued, in terms of the theory of historical materialism, that such crises will repeat until objective and subjective factors combine to precipitate a resolution of the underlying class struggle in a social transition to the new mode of production, either by sudden collapse in a final crisis or by the gradual erosion of the system's basis in competition and the emerging dominance of cooperation. == Causes of crises == The concept of periodic crises within capitalism dates back to the works of the Utopian socialists Charles Fourier and Robert Owen and the Swiss economist Léonard de Sismondi. Karl Marx considered his crisis theory to be his most substantial theoretical achievement. He presents it in its most developed form as the Law of the Tendency for the Rate of Profit to Fall, combined with a discussion of various counter-tendencies, which may slow or modify its impact. Roman Rosdolsky observed that "Marx concludes by saying that the law of the tendency of the rate of profit to fall is in every respect the most important law of modern political economy ... despite its simplicity, it has never before been grasped and even less consciously articulated ... It is from the historical standpoint the most important law." A key characteristic of these theoretical factors is that none of them are natural or accidental in origin but instead arise from systemic elements of capitalism as a mode of production and basic social order. 
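The law can be summarised in the value notation conventionally used in discussions of Capital, Volume III; the following formalisation and figures are a standard illustration rather than a formula quoted from the sources cited here. Writing c for constant capital, v for variable capital and s for surplus value, the rate of profit is

\[ r \;=\; \frac{s}{c+v} \;=\; \frac{s/v}{(c/v)+1}, \]

where s/v is the rate of surplus value (the rate of exploitation) and c/v the organic composition of capital. Holding s/v constant, a rising organic composition necessarily lowers r: with s/v = 1, for example, raising c/v from 2 to 4 reduces r from 1/3 to 1/5, even though the mass of surplus value s = v · (s/v) keeps growing as long as v grows – the "two-faced" character of the law quoted below.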
In Marx's words, "The real barrier of capitalist production is capital itself". The law of the falling rate of profit, the unexpected consequence of the profit motive, is described by Marx as a "two-faced law with the same causes for a decrease in the rate of profits and a simultaneous increase of the mass of profits". "In short, the same development of the social productivity of labour expresses itself in the course of capitalist development on the one hand in a tendency to a progressive fall of the rate of profit, and on the other hand in a progressive increase of the absolute mass of the appropriated surplus value, or profit; so that on the whole a relative decrease of variable capital and profit is accompanied by an absolute increase of both." == Crisis, or cycles? Alternative Marxist theories of crises == In 1929 the Communist Academy in Moscow published "The Capitalist Cycle: An Essay on the Marxist Theory of the Cycle", a 1927 report by Bolshevik theoretician Pavel Maksakovsky to the seminar on the theory of reproduction at the Institute of Red Professors of the Communist Academy. This work explains the connection between crises and regular business cycles on the basis of the cyclical dynamic disequilibrium of the reproduction schemes in Volume 2 of Capital, and rejects the various theories elaborated by "Marxian" academics. In particular, it argues that the collapse in profits following a boom and crisis is not the result of any long-term tendency but is rather a cyclical phenomenon. The recovery following a depression is based on the replacement of labor-intensive techniques that have become uneconomic at the low prices and profit margins following the crash. This new investment in less labor-intensive technology takes market share from competitors by producing at lower cost while also lowering the average rate of profit, and thus explains the actual mechanism for both economic growth with improved technology and a long-run tendency for the rate of profit to fall. The recovery eventually leads to another boom because the gestation lag of fixed capital investment sustains prices that encourage such investment until, eventually, the completed projects deliver overproduction and a crash. There is a long history of interpreting crisis theory as a theory of cycles rather than of crisis. A 2013 example by Peter D. Thomas and Geert Reuten, "Crisis and the Rate of Profit in Marx's Laboratory", suggests, controversially, that even Marx's own critical analysis can be claimed to have transitioned from the former toward the latter. == Similarities (and differences) in the work of J. S. Mill & Marx == There are several elements in Marx's presentation which attest to his familiarity with Mill's formulations, notably Mill's treatment of what Marx would subsequently call counteracting tendencies: destruction of capital through commercial revulsions (§5), improvements in production (§6), importation of cheap necessaries and instruments (§7), and emigration of capital (§8). "In Marx's system, as in Mill's the falling rate of profit is a long-run tendency precisely because of the "counteracting influences at work which thwart and annul the effects of this general law, leaving to it merely the character of a tendency." These counteracting forces are as follows: (1) An increase in the intensity of exploitation (via intensification of labor or the extension of the working day); (2) Depression of wages below their value ... 
(3) Cheapening of the elements of constant capital (via increased productivity); (4) Relative overpopulation (which keeps many workers employed in relatively backward industries, such as luxury goods, where the organic composition of capital is low); (5) Foreign trade (which offers cheaper commodities and more profitable channels of investment); and (6) The increase of "stock capital" (interest-bearing capital, whose low rate of return is not averaged with others). Again, like Mill, Marx indicates the post-crisis waste of capital which restores profitability, but this is not mentioned specifically as a counter-tendency until the cyclical nature of the system is demonstrated. On the other hand, Mill does not refer to depression of wages below their value, relative overpopulation, or the increase in "stock capital". But on the most important counter-tendencies, that is, the effects of increasing productivity at home in cheapening commodities and of foreign trade in providing both cheaper goods and greater profits, Marx and Mill are in accord." == Application == It is a tenet of many Marxist groupings that crises are inevitable and will be increasingly severe until the contradictions inherent in the mismatch between the relations of production and the development of productive forces reach the final point of failure, with the outcome determined by the quality of their leadership, the development of the consciousness of the various social classes, and other "subjective factors". Thus, according to this theory, the degree of "tuning" necessary for intervention in otherwise "perfect" market mechanisms will become more and more extreme as the time in which the capitalist order is a progressive factor in the development of productive forces recedes further and further into the past. But the subjective factors are the explanation for why purely objective factors such as the severity of a crisis, the rate of exploitation, etc., do not alone determine the revolutionary upsurge. A common example is the oppression of the working classes in France in the centuries prior to 1789, which, although greater, did not lead to social revolution as it did once the complete correlation of forces appeared. Kuruma, in his 1929 Introduction to the Study of Crisis, ends by noting "... my use of the term "theory of crisis" is not limited to the theory of economic crisis. This term naturally also encompasses the study of the necessity of imperialist world war as the explosion of the contradictions peculiar to modern capitalism. Imperialist world war itself is precisely crisis in its highest form. Thus, the theory of imperialism must be an extension of the theory of crisis." David Yaffe, in his application of the theory in the conditions of the end of the post-war boom in the early 1970s, made an influential link to the expanding role of the state's interventions into economic relations as a politically critical element in attempts by capital to counteract the tendency and find new ways to make the working class pay for the crisis. == Influence == Crisis theory is central to Marx's writings; it helps underpin Marxists' understanding of a need for systemic change. It is controversial; Roman Rosdolsky said "The assertion that Marx did not propose a 'breakdown theory' is primarily attributable to the revisionist interpretation of Marx before and after the First World War. Rosa Luxemburg, Henryk Grossman [and Samezō Kuruma] rendered inestimable theoretical services by insisting, as against the revisionists, on the breakdown theory." 
More recently, David Yaffe (1972, 1978) and Tony Allen et al. (1978, 1981), in using the theory to explain the conditions at the end of the post-war boom in the 1970s and 1980s, re-introduced the theory to a new generation and gained new readers for Grossman's 1929 presentation of Marx's crisis theory. Rosa Luxemburg lectured on the 'History of Theories of Economic Crises' at the SPD's Party School in Berlin (possibly in 1911, since the typescript includes a reference to statistics from 1911). Henryk Grossman's re-presentation of both the central importance of the theory for Marx and the working out of its elements in a partially mathematical form was published in 1929. Central to the argument is the claim that, within a given business cycle, the accumulation of surplus from year to year leads to a kind of top-heaviness, in which a relatively fixed number of workers have to add profit to an ever-larger lump of investment capital. This observation leads to what is known as Marx's law of the tendency of the rate of profit to fall. Unless certain countervailing possibilities are available, the growth of capital out-paces the growth of labour, so the profits of economic activity have to be shared out more thinly among capitals, i.e., at a lower profit rate. When countervailing tendencies are unavailable or exhausted, the system requires the destruction of capital values in order to return to profitability; such destruction of capital values created the underlying preconditions for the post-war boom. Paul Mattick's Economic Crisis and Crisis Theory (published by Merlin Press in 1981) is an accessible introduction and discussion derived from Grossman's work. François Chesnais (1984), in the chapter 'Marx's Crisis Theory Today' in Christopher Freeman (ed.), Design, Innovation and Long Cycles in Economic Development (Frances Pinter, London), discussed the continuing relevance of the theory. Andrew Kliman has made major new contributions with a thorough and trenchant philosophical and logical defence of the consistency of the theory in Marx's work, against a number of the criticisms proposed against important aspects of Marx's theory since the seventies. François Chesnais has provided an important exploration of the 'fictitious capital' or 'finance capital' aspects of the theory in a review of both historical and contemporary empirical research. Guglielmo Carchedi and Michael Roberts in their edited collection World in Crisis [2018] provide a valuable review of the empirical analyses that support and defend the thesis, with contributions from authors in the UK, Greece, Spain, Argentina, Mexico, Brazil, Australia and Japan. == Difference between Marxist economists and Keynesians == Keynesian economics, which attempts a "middle way" between laissez-faire, unadulterated capitalism and state guidance and partial control of economic activity, such as in the French dirigisme or the policies of the Golden Age of Capitalism, attempts to address such crises by having the state actively supply the deficiencies of unaltered markets. Marx and Keynesians approach and apply the concept of economic crisis in distinct and opposite ways. The Keynesian approach attempts to stay strictly within the economic sphere and describes 'boom' and 'bust' cycles that balance out. Marx observed and theorised economic crisis as necessarily developing out of the contradictions arising from the dynamics of capitalist production relations. "Where Marx differs from Keynes is precisely on the question of the falling rate of profit. 
It is not the propensity to consume or subjective expectations about future profitability that is crucial for Marx. It is the rate of exploitation and the social productivity of labour that are the key considerations and these in relation to the existing capital stock. While for Keynes the low marginal productivity of capital has its cause in an over-abundance of capital in relation to profit expectations, and therefore to a 'potential' over-production of commodities (the capitalist will not invest). For Marx the overproduction of capital is only relative to the social productivity of labour and the existing exploitation conditions. It represents an insufficient mass of surplus-value in relation to total capital. So that for Marx the crisis is, and can only be, resolved by expanding profitable production and accumulation, while for Keynes, it can supposedly be remedied by increasing 'effective demand' and this allows for government induced-production." Yaffe noted in 1972 that "... passages in Volume III referring to the underconsumption of the masses in no way can be interpreted as an underconsumptionist theory of crisis. The citation usually given in support of an 'underconsumptionist theory of crisis' is Marx's statement that "The last cause of all real crises always remains the poverty and restricted consumption of the masses as compared to the tendency of capitalist production to develop the productive forces in such a way, that only the absolute power of consumption of the entire society would be their limit." The above passage contains within it no more than a description or a restatement of the capitalist relations of production. Marx called it a tautology to explain the crisis by lack of effective consumption ..." Other explanations have been formulated, and much debated, including: The tendency of the rate of profit to fall. The accumulation of capital, the general advancement of techniques and scale of production, and the inexorable trend to oligopoly by the victors of capitalist market competition, all involve a general tendency for the degree of capital intensity, i.e., the "organic composition of capital" of production to rise. All else constant, this is claimed to lead to a fall in the rate of profit, which would slow down accumulation. Full employment profit squeeze. Capital accumulation can pull up the demand for labor power, raising wages. If wages rise "too high," it hurts the rate of profit, causing a recession. The interaction between the employment rate and the wage share has been mathematically formalised by the Goodwin model. Overproduction. If the capitalists win the class struggle to push wages down and labor effort up, raising the rate of surplus value, then a capitalist economy faces regular problems of excess producer supply and thus inadequate aggregate demand and its corollary the underconsumptionist theory. On which Engels comments "the underconsumption of the masses, the restriction of the consumption of the masses to what is necessary for their maintenance and reproduction, is not a new phenomenon. It has existed as long as there have been exploiting and exploited classes. The underconsumption of the masses is a necessary condition of all forms of society based on exploitation, consequently also of the capitalist form; but it is the capitalist form of production which first gives rise to crises. The underconsumption of the masses is therefore also a prerequisite condition for crises, and plays in them a role which has long been recognised. 
But it tells us just as little why crises exist today as why they did not exist before" The Post Keynesian economics debt-crisis theory of Hyman Minsky. A variety of theories of Monopoly Capitalism have also been propounded as attempts to explain through exogenous factors, why the tendency might not become apparently manifest in periods of capital accumulation, under various historical circumstances. == See also == == References == == Further reading == Allen, Tony et al. [1978] The Recession: Capitalist Offensive and the Working Class RCP 3, July 1978, Junius Allen, Tony [1981] World in Recession in RCP 7, July 1981, Junius Balomenos, Christos [2023] Did Marx Have Second Thoughts about the Law of the Falling Rate of Profit? An Archival Rejection of Heinrich's Arguments in Science & Society, Vol. 87, No. 4, October 2023, 556–581 Balomenos, Christos [2024] Did Engels’ editing of Capital, Volume 3 distort Marx's analysis of the ‘tendency of the rate of profit to fall’? in Capital & Class 1–21 2024 Bell, Peter and Cleaver, Harry [1982] Marx's Theory of Crisis as a Theory of Class Struggle first published in 'Research in Political Economy', Vol 5(5): 189–261, 1982 Brooks, Mick [2012] Capitalist Crisis Theory and Practice: A Marxist Analysis of the Great Recession 2007–11 eXpedia ISBN 978-83-934266-0-7 Bullock, Paul and Yaffe, David [1975] Inflation, the Crisis and the Post-War Boom RC 3/4 November 1975, RCG Guglielmo Carchedi & Michael Roberts eds. [2018] World in Crisis: A Global Analysis of Marx's Law of Profitability Haymarket Books, Chicago, Illinois Chesnais, François [1984] Marx's Crisis Theory Today in Christopher Freeman ed. Design, Innovation and Long Cycles in Economic Development 2nd ed. 1984 Frances Pinter, London Chesnais, François [Feb 2012] World Economy - The Roots of the World Economic Crisis in International Viewpoint Online magazine : IV445 Chesnais, François [first ed 2016] Finance Capital Today: Corporations and Banks in the Lasting Global Slump Haymarket Books Chicago, IL, 2017 Clarke, Simon [1994] Marx's Theory of Crisis Macmillan Day, Richard B. [1981] The 'Crisis' and the 'Crash': Soviet Studies of the West (1917–1939) NLB Day, Richard B. & Daniel Gaido (trans. & eds) [2012] Discovering Imperialism: Social Democracy to World War I, Haymarket Grossman, Henryk [1922] The Theory of Economic Crises Grossman, Henryk [1929,1992] The Law of Accumulation and Breakdown of the Capitalist System Pluto, new full English translation: Jairus Banaji : Henryk Grossman Works, Volume 3: The Law of Accumulation and Breakdown of the Capitalist System, Being Also a Theory of Crises: Edited and introduced Rick Kuhn (Historical Materialism Book Series) Brill 2021, Haymarket 2022. Grossman, Henryk [1941] Marx, Classical Political Economy and the Problem of Dynamics Grossman, Henryk [1932,2013] Fifty years of struggle over Marxism 1883‐1932 Grossman, Henryk [2017] Capitalism's Contradictions: Studies in Economic Theory before and after Marx Ed. Rick Kuhn Trans. Birchall, Kuhn, O'Callaghan. Haymarket, Chicago H.M. Hyndman [1892] Commercial Crises of the Nineteenth Century, London H.M. Hyndman [1896] Economics of Socialism, The Twentieth Century Press, London Kliman, Andrew [2007] Reclaiming "Marx's 'Capital': A Refutation of the Myth of Inconsistency, Lexington, Lanham Kliman, Andrew [2011] The Failure of Capitalist Production: Underlying Causes of the Great Recession, London, Pluto Kliman, Andrew [2015] The Great Recession and Marx's Crisis Theory. 
American Journal of Economics and Sociology, 74: 236–277. The Great Recession and Marx's Crisis Theory - Kliman - 2015 - The American Journal of Economics and Sociology - Wiley Online Library Kuhn, Rick Economic Crisis and Socialist Revolution: Henryk Grossman's Law of accumulation, Its First Critics and His Responses, postprint, originally published in Paul Zarembka and Susanne Soederberg (eds) Neoliberalism in Crisis, Accumulation, and Rosa Luxemburg's Legacy Elsevier Jai, Amsterdam, Research in Political Economy, 21, 2004 pp. 181–221 ISSN 0161-7230 (series). ISBN 0762310987 Kuhn, Rick [2007] Henryk Grossman and the Recovery of Marxism Urbana and Chicago: University of Illinois Press. ISBN 0-252-07352-5 Kuhn, Rick [2007] Henryk Grossman Capitalist Expansion and Imperialism Archived 2017-11-06 at the Wayback Machine in ISR Issue 56 November–December 2007 Kuhn, Rick [2013] Marxist crisis theory to 1932 and to the present: reflections on Henryk Grossman's 'Fifty years of struggle over Marxism' paper to Society of Heterodox Economists Conference, University of New South Wales, Sydney, 2–3 December 2013 Kuhn, Rick [2017] Introduction: Grossman and His Studies of Economic Theory in Henryk Grossman [2017] Capitalism's Contradictions: Studies in Economic Theory before and after Marx Haymarket, Chicago Kuruma, Samezō [1929] An Introduction to the Study of Crisis Sep. 1929 issue of Journal of the Ohara Institute for Social Research, (vol. VI, no. 1) Translated by Michael Schauerte Kuruma, Samezō [1930] An Inquiry into Marx's Theory of Crisis Sep. 1930 issue of the Journal of the Ohara Institute for Social Research, (Vol. VII, No. 2) Translated by Michael Schauerte Kuruma, Samezō [1936] An Overview of Marx's Theory of Crisis first published in August 1936 issue of 'Journal of the Ohara Institute for Social Research'. Translated by Michael Schauerte Kuruma Samezō [2024] In Pursuit of Marx's Theory of Crisis trans: Edward Michael Schauerte Historical Materialism, Brill: Chicago Lenin V.I. [1916] Imperialism, the Highest Stage of Capitalism Luxemburg, Rosa [2013] (Peter Hudis ed.) The Complete Works of Rosa Luxemburg: Volume I: Economic Writings 1, Verso Marini, Ruy Mauro (2022) The Dialectics of Dependency Trans. and Introduction Latimer, Amanda Monthly Review Press, New York. Marx, Karl Marx's Economic Manuscript of 1864–1865 Edited & introduced by Fred Moseley Translated Ben Fowkes, Haymarket 2017 Mattick, Paul [1974] Marx and Keynes Merlin Mattick, Paul [1981] Economic Crisis and Crisis Theory Archived 2011-09-28 at the Wayback Machine Merlin Press Mattick, Paul [2008]. Review of David Harvey's The Limits to Capital in Historical Materialism 16 (4):213-224. Norfield, Tony [2016] The City: London and the Global Power of Finance, Verso, London Pradella, Lucia [2009] Globalisation and the Critique of Political Economy: New insights from Marx's writings. Routledge' Michael Roberts|Roberts, Michael 2018 Marx 200 - a review of Marx's economics 200 years after his birth Lulu.com Rosdolsky, Roman [1980] The Making of Marx's 'Capital' Pluto Rubin, Isaak Illich [1979] A History of Economic Thought, InkLinks, London Shaikh, Anwar [1978] An Introduction to the History of Crisis Theories in 'U.S. Capitalism in Crisis', URPE, New York Schauerte, E. 
Michael [2007] Kuruma, Samezō's Life As A Marxist Economist in 'Transitions in Latin America and in Poland and Syria: Research in Political Economy', Vol 24 281–294 Shaxson, Nicholas 2012 Treasure Islands: Tax Havens and the Men Who Stole The World Vintage Books, London Shoul, Bernice [1947] The Marxian Theory of Capitalist Breakdown Joseph A. Schumpeter History of Economic Analysis Allen & Unwin 1954 Ticktin, Hillel, 'A Marxist Political Economy of Capitalist Instability and the Current Crisis', Critique, Vol.37. Vort-Ronald, Pat, [1974] Marxist Theory of Economic Crisis, Australian Left Review, 1(43), 1974, 6-13. Yaffe, David [1972] The Marxian Theory of Crisis, Capital and the State, Bulletin of the Conference of Socialist Economists, Winter 1972, pp 5–58 Yaffe, David [1978] The State and the Capitalist Crisis 2nd ed RCG Reprint == External links == Capital, Volume 1, "Chapter 1" by Karl Marx "Crisis of Capitalism" by MIA Encyclopedia of Marxism Chesnais, François [1984] Marx's Crisis Theory Today in Christopher Freeman ed. Design, Innovation and Long Cycles in Economic Development 2nd ed. 1984 Frances Pinter, London. Audio Recording of Francois Chesnais's presentation published in the above Marx's Crisis Theory Today [1983](audio .mp3) "Economic crisis and the responsibility of socialists" by Rick Kuhn "Crisis and Hope: Theirs and Ours" Noam Chomsky, 2009 A Critique of Crisis Theory From a Marxist perspective Current specialist blog and discussion with resources by Sam Williams from January 2009 For a short video presentation of the theory, Cliff Bowman's video introduction to 'Marx's Theory of Economic Crisis', Cranfield University, School of Management, posted to YouTube 2009 [Andy Higginbottom|Higginbottom, Andy] 2020 Lecture series on Capital Vol3 Capital Vol. 3: The Andy Higginbottom lectures series
Wikipedia/Crisis_theory
The Industrial Age is a period of history that encompasses the changes in economic and social organization that began around 1760 in Great Britain and later in other countries, characterized chiefly by the replacement of hand tools with power-driven machines such as the power loom and the steam engine, and by the concentration of industry in large establishments. While it is commonly believed that the Industrial Age was supplanted by the Information Age in the late 20th century, a view that has become common since the Revolutions of 1989, much of the Third World economy is still based on manufacturing, although mobile phones are now commonplace even in the poorest of countries, enabling access to global information networks. Even though many developing countries remain largely industrial, the Information Age is increasingly gaining ground. == Origins == Huge changes in agricultural methods made the Industrial Revolution possible. This agricultural revolution started with changes in farming in the Netherlands, later developed by the British. The Industrial Age began in Great Britain in the mid-18th century and was fueled by coal mining from places such as Wales and County Durham. The Industrial Revolution began in Great Britain because it had the factors of production: land (all natural resources), capital, and labour. Britain had plenty of harbors that enabled trade, and it had access to capital in the form of goods and money, for example tools, machinery, equipment, and inventory. Britain, lastly, had an abundance of labor, or industrial workers in this case. There were many other conditions that help to explain why the Industrial Revolution began in Great Britain. The British Isles and colonies overseas represented huge markets that created a large demand for British goods. Britain also had one of the largest spheres of influence due to its massive navy and merchant marine. The British government's concern for commercial interests was also important. == The textile industry == The cotton industry was the first industry to go through mechanization, the use of automatic machinery to increase production. The domestic system arose when businesses began importing raw cotton and employing spinners and weavers to make it into cloth in their homes. James Hargreaves invented the spinning jenny, which could produce eight times as much thread as a single spinning wheel, and Richard Arkwright adapted it to be driven by water. Later, Arkwright opened a spinning mill, which marked the beginning of the factory system. In 1785, Edmund Cartwright invented a loom which was powered by water. == Steam engines == In 1712, Thomas Newcomen produced the first successful steam engine, and in 1769, James Watt patented the modern steam engine. As a result, steam replaced water as industry's major power source. The steam engine allowed for steamboats and locomotives, which made transportation much faster. By the mid-19th century the Industrial Revolution had spread to Continental Europe and North America, and since then it has spread to most of the world. The Industrial Age is defined by mass production, broadcasting, the rise of the nation state, power, modern medicine and running water. The quality of human life has increased dramatically during the Industrial Age. 
Life expectancy today worldwide is more than twice as high as it was when the Industrial Revolution began. == See also == Information Age Imagination Age == References ==
Wikipedia/Industrial_Age
In finance and economics, the nominal interest rate or nominal rate of interest is the rate of interest stated on a loan or investment, without any adjustments for inflation. == Examples of adjustments or fees == An adjustment for inflation (in contrast with the real interest rate). Compound interest (also referred to as the nominal annual rate). == Nominal versus real interest rate == The concept of real interest rate is useful to account for the impact of inflation. In the case of a loan, it is this real interest that the lender effectively receives. For example, if the lender is receiving 8 percent from a loan and the inflation rate is also 8 percent, then the (effective) real rate of interest is zero: despite the increased nominal amount of currency received, the lender would have no monetary value benefit from such a loan because each unit of currency would be devalued due to inflation by the same factor as the nominal amount gets increased. The relationship between the real interest rate r, the nominal interest rate R, and the inflation rate i is given by {\displaystyle (1+r)=(1+R)/(1+i)}, or equivalently {\displaystyle r=(R-i)/(1+i)}. When the inflation rate i is low, the real interest rate is approximately given by the nominal interest rate minus the inflation rate, i.e., {\displaystyle r\approx R-i}. In this analysis, the nominal rate is the stated rate, and the real interest rate is the interest after the expected losses due to inflation. Since the future inflation rate can only be estimated, the ex ante and ex post (before and after the fact) real interest rates may differ; the premium relative to actual inflation may turn out to be higher or lower than the one expected. == Nominal versus effective interest rate == The nominal interest rate, also known as an annual percentage rate or APR, is the periodic interest rate multiplied by the number of periods per year. For example, a nominal annual interest rate of 12% based on monthly compounding means a 1% interest rate per month (compounded). A nominal interest rate for compounding periods less than a year is always lower than the equivalent rate with annual compounding (this immediately follows from elementary algebraic manipulations of the formula for compound interest). Note that a nominal rate without its compounding frequency is not fully defined: for any given nominal rate, the effective interest rate cannot be specified without knowing the compounding frequency. Although some conventions are used where the compounding frequency is understood, consumers in particular may fail to understand the importance of knowing the effective rate. Nominal interest rates are not comparable unless their compounding periods are the same; effective interest rates correct for this by "converting" nominal rates into annual compound interest. In many cases, depending on local regulations, interest rates as quoted by lenders and in advertisements are based on nominal, not effective interest rates, and hence may understate the interest rate compared to the equivalent effective annual rate. Confusingly, in the context of inflation, 'nominal' has a different meaning. A nominal rate can mean a rate before adjusting for inflation, and a real rate is a constant-prices rate. The Fisher equation is used to convert between real and nominal rates. 
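The relations above are easy to check numerically. The following short Python sketch (illustrative only; the function and variable names are assumptions of this example, not part of the article) computes the exact real rate from a nominal rate and an inflation rate, compares it with the low-inflation approximation r ≈ R − i, and shows the per-period rate implied by a nominal annual rate with monthly compounding.

```python
def real_rate(nominal, inflation):
    """Exact real rate from the relation (1 + r) = (1 + R) / (1 + i)."""
    return (1 + nominal) / (1 + inflation) - 1

def real_rate_approx(nominal, inflation):
    """Common low-inflation approximation r ~= R - i."""
    return nominal - inflation

# Example: 8% nominal with 3% inflation.
R, i = 0.08, 0.03
print(real_rate(R, i))          # ~0.04854, i.e. about 4.85%
print(real_rate_approx(R, i))   # 0.05, close because inflation is low

# Example from the text: 8% nominal with 8% inflation gives a real rate of zero.
print(real_rate(0.08, 0.08))    # 0.0

# A nominal annual rate of 12% with monthly compounding means 1% per month.
nominal_annual, periods_per_year = 0.12, 12
print(nominal_annual / periods_per_year)  # 0.01
```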
To avoid confusion about the term nominal, which has these different meanings, some finance textbooks use the term 'Annualised Percentage Rate' or APR rather than 'nominal rate' when they are discussing the difference between effective rates and APRs. The term should not be confused with simple interest (as opposed to compound interest), which is not compounded. The effective interest rate is always calculated as if compounded annually. The effective rate is calculated in the following way, where r is the effective rate, i the nominal rate (as a decimal, e.g. 12% = 0.12), and n the number of compounding periods per year (for example, 12 for monthly compounding): {\displaystyle r\ =\ (1+i/n)^{n}-1} == Examples == === Monthly compounding === Example 1: A nominal interest rate of 6% compounded monthly is equivalent to an effective interest rate of 6.17%. Example 2: 6% annually is credited as 6%/12 = 0.5% every month. After one year, the initial capital is increased by the factor (1 + 0.005)^12 ≈ 1.0617. === Daily compounding === A loan with daily compounding has a substantially higher rate in effective annual terms. For a loan with a 10% nominal annual rate and daily compounding, the effective annual rate is 10.516%. For a loan of $10,000 (paid at the end of the year in a single lump sum), the borrower would pay $51.56 more than one who was charged 10% interest, compounded annually. == References == == External links == Convert an Effective Interest Rate to an Annual Percentage Rate Convert an Annual Percentage Rate to an Effective Interest Rate Online Interest Calculator
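The worked examples above can be reproduced with a minimal Python check of the effective-rate formula r = (1 + i/n)^n − 1 (a sketch only; the function name and the use of 365 compounding periods for "daily" are assumptions of this example):

```python
def effective_rate(nominal, n):
    """Effective annual rate for a nominal rate compounded n times per year."""
    return (1 + nominal / n) ** n - 1

# Monthly compounding: 6% nominal -> about 6.17% effective.
print(round(effective_rate(0.06, 12) * 100, 2))    # 6.17

# Daily compounding: 10% nominal -> about 10.516% effective.
print(round(effective_rate(0.10, 365) * 100, 3))   # 10.516

# Extra interest on a $10,000 lump-sum loan versus 10% compounded annually.
principal = 10_000
extra = principal * (effective_rate(0.10, 365) - 0.10)
print(round(extra, 2))                              # about 51.56
```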
Wikipedia/Nominal_interest_rate
Moderate realism (also called immanent realism) is a position in the debate on the metaphysics of universals which holds that there is no realm in which universals exist (in opposition to Platonic realism, which asserts the existence of abstract objects), nor do they really exist within particulars as universals, but rather universals really exist within particulars as particularised and multiplied. == Overview == Moderate realism is opposed to both the theory of Platonic forms and nominalism. Nominalists deny the existence of universals altogether, even as particularised and multiplied within particulars. Moderate realism, however, is considered a midpoint between Platonic realism and nominalism as it holds that universals are located in space and time, although they do not occupy a separate realm of their own. Aristotle espoused a form of moderate realism, as did Thomas Aquinas, Bonaventure, and Duns Scotus (cf. Scotist realism). Moderate realism is anti-realist about abstract objects, just like conceptualism is (their difference being that conceptualism denies the mind-independence of universals, while moderate realism does not). Aristotle's position, as expounded by Aquinas, denies the existence of a separate realm of Forms and holds that the world around us is the only world, one in which nothing exists precisely according to our universal concepts. == Modern theories == A more recent and influential version of immanent realism has been advanced by Willard Van Orman Quine, in works such as "Posits and Reality" (1955), and D. M. Armstrong, in works such as his Universals: An Opinionated Introduction (1989, p. 8). For Quine, any object proposed by a theory is considered real; he stressed that "everything to which we concede existence is a posit from the standpoint of a description of the theory-building process", provided that the theory has withstood rigorous testing. According to Armstrong, universals are independent of the mind, and this is critical in accounting for causation and nomic connection. == See also == Abstract object Conceptualist realism Hylomorphism In re structuralism Instantiation principle Medieval realism Model-dependent realism Nominalism Object (philosophy) Platonic form Strong realism Universal (metaphysics) == References ==
Wikipedia/Moderate_realism
In philosophy and early physics, horror vacui (Latin: horror of the vacuum) or plenism ()—commonly stated as "nature abhors a vacuum", for example by Spinoza—is a hypothesis attributed to Aristotle, later criticized by the atomism of Epicurus and Lucretius, that nature contains no vacuums because the denser surrounding material continuum would immediately fill the rarity of an incipient void. Aristotle also argued against the void in a more abstract sense: since a void is merely nothingness, following his teacher Plato, nothingness cannot rightly be said to exist. Furthermore, insofar as a void would be featureless, it could neither be encountered by the senses nor could its supposition lend additional explanatory power. Hero of Alexandria challenged the theory in the first century AD, but his attempts to create an artificial vacuum failed. The theory was debated in the context of 17th-century fluid mechanics, by Thomas Hobbes and Robert Boyle, among others, and through the early 18th century by Sir Isaac Newton and Gottfried Leibniz. == Origin == As advanced by Aristotle in Physics: In a void, no one could say why a thing once set in motion should stop anywhere; for why should it stop here rather than here? So that a thing will either be at rest or must be moved ad infinitum, unless something more powerful gets in its way. Further, things are now thought to move into the void because it yields; but in a void this quality is present equally everywhere, so that things should move in all directions. Further, the truth of what we assert is plain from the following considerations. We see the same weight or body moving faster than another for two reasons, either because there is a difference in what it moves through, as between water, air, and earth, or because, other things being equal, the moving body differs from the other owing to excess of weight or of lightness. Now the medium causes a difference because it impedes the moving thing, most of all if it is moving in the opposite direction, but in a secondary degree even if it is at rest; and especially a medium that is not easily divided, i.e. a medium that is somewhat dense. A, then, will move through B in time G, and through D, which is thinner, in time E (if the length of B is equal to D), in proportion to the density of the hindering body. For let B be water and D air; then by so much as air is thinner and more incorporeal than water, A will move through D faster than through B. Let the speed have the same ratio to the speed, then, that air has to water. Then if air is twice as thin, the body will traverse B in twice the time that it does D, and the time G will be twice the time E. And always, by so much as the medium is more incorporeal and less resistant and more easily divided, the faster will be the movement. Now there is no ratio in which the void is exceeded by body, as there is no ratio of 0 to a number. For if 4 exceeds 3 by 1, and 2 by more than 1, and 1 by still more than it exceeds 2, still there is no ratio by which it exceeds 0; for that which exceeds must be divisible into the excess + that which is exceeded, so that will be what it exceeds 0 by + 0. For this reason, too, a line does not exceed a point unless it is composed of points! Similarly the void can bear no ratio to the full, and therefore neither can movement through the one to movement through the other, but if a thing moves through the thickest medium such and such a distance in such and such a time, it moves through the void with a speed beyond any ratio. 
For let Z be void, equal in magnitude to B and to D. Then if A is to traverse and move through it in a certain time, H, a time less than E, however, the void will bear this ratio to the full. But in a time equal to H, A will traverse the part O of A. And it will surely also traverse in that time any substance Z which exceeds air in thickness in the ratio which the time E bears to the time H. For if the body Z be as much thinner than D as E exceeds H, A, if it moves through Z, will traverse it in a time inverse to the speed of the movement, i.e. in a time equal to H. If, then, there is no body in Z, A will traverse Z still more quickly. But we supposed that its traverse of Z when Z was void occupied the time H. So that it will traverse Z in an equal time whether Z be full or void. But this is impossible. It is plain, then, that if there is a time in which it will move through any part of the void, this impossible result will follow: it will be found to traverse a certain distance, whether this be full or void, in an equal time; for there will be some body which is in the same ratio to the other body as the time is to the time. == Etymology == Plenism means "fullness", from Latin plēnum, English "plenty", cognate via Proto-Indo-European to "full". In Ancient Greek, the term for the void is τὸ κενόν (to kenón). == History == The idea was restated as "Natura abhorret vacuum" by François Rabelais in his series of books titled Gargantua and Pantagruel in the 1530s. The theory was supported and restated by Galileo Galilei in the early 17th century as "Resistenza del vacuo". Galileo was surprised by the fact that water could not rise above a certain level in an aspiration tube in his suction pump, leading him to conclude that there is a limit to the phenomenon. René Descartes proposed a plenic interpretation of atomism to eliminate the void, which he considered incompatible with his concept of space. The theory was rejected by later scientists, such as Galileo's pupil Evangelista Torricelli, who repeated his experiment with mercury. Blaise Pascal successfully repeated Galileo's and Torricelli's experiment and foresaw no reason why a perfect vacuum could not be achieved in principle. Scottish philosopher Thomas Carlyle mentioned Pascal's experiment in the Edinburgh Encyclopædia in an 1823 article titled "Pascal". == See also == == References ==
Wikipedia/Horror_vacui_(physics)
Aristotle's theory of universals is Aristotle's classical solution to the problem of universals, sometimes known as the hylomorphic theory of immanent realism. Universals are the characteristics or qualities that ordinary objects or things have in common. They can be identified in the types, properties, or relations observed in the world. For example, imagine there is a bowl of red apples resting on a table. Each apple in that bowl will have many similar qualities, such as their red coloring or "redness". They will share some degree of the quality of "ripeness" depending on their age. They may also be at varying degrees of age, which will affect their color, but they will all share a universal "appleness". These qualities are the universals that the apples hold in common. The problem of universals asks three questions. Do universals exist? If they exist, where do they exist? Also, if they exist, how do we obtain knowledge of them? In Aristotle's view, universals are incorporeal and universal, but exist only where they are instantiated; they exist only in things. Aristotle said that a universal is identical in each of its instances. All red things are similar in that there is the same universal, redness, in each thing. There is no Platonic Form of redness, standing apart from all red things; instead, each red thing has a copy of the same property, redness. For the Aristotelian, knowledge of the universals is not obtained from a supernatural source. It is obtained from experience by means of the active intellect. == Overview == In Aristotle's view, universals can be instantiated multiple times. He states that one and the same universal, such as applehood, appears in every real apple. A common-sense challenge would be to inquire what remains exactly the same in all these different things, since the theory is claiming that something remains the same. Stating that different beautiful things, such as the Pacific Ocean, the Eiffel Tower, or the nighttime sky, are beautiful is just to say that each thing is the same (qualitatively) in terms of beauty. Aristotle is talking about a category of being here that is not a thing but a quality. A common defense of Aristotle's realism is therefore that we should not expect universals to behave like ordinary physical objects. To say the same universal, beauty, occurs simultaneously in all these things is no more strange than saying that each thing is beautiful. A second issue is whether Aristotelian universals are abstract: if they are, then the theory must deal with how to abstract the concept of redness from one or more red things. Aristotle argued that people form concepts and make generalizations in the manner of a young child, who is just on the verge of grasping a generic concept such as human being. In his view, the child is gathering his or her memories of various encounters with individual humans, searching for the essential similarity that stands out, on reflection, in every instance. Today, it might be said that one mentally extracts from each thing the quality that they all have in common. When the child grasps the concept of a human being, he or she has learnt to ignore the accidental details of each person's past experiences and individual differences, and has paid attention to the relevant quality that they all have in common, namely, humanity. In Aristotle's view, the universal humanity is a natural kind defined by the essential properties that all humans have in common. 
== See also == Hylomorphism Aristotelian realist philosophy of mathematics Metaphysics (Aristotle) == References == == External links == Stanford Encyclopedia of Philosophy: Aristotle's Metaphysics. 10. Substances and Universals Stanford Encyclopedia of Philosophy: The Medieval Problem of Universals. 4. Boethius’ Aristotelian Solution Meaning and the Problem of Universals, A Kant-Friesian Approach M. J. Cresswell: What is Aristotle's theory of universals? (subscription required)
Wikipedia/Aristotle's_theory_of_universals
Action at a distance is the concept in physics that an object's motion can be affected by another object without the two being in physical contact; that is, it is the concept of the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance. Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action at a distance models providing alternative to field theories. Under our modern understanding, the four fundamental interactions (gravity, electromagnetism, the strong interaction and the weak interaction) in all of physics are not described by action at a distance. == Categories of action == In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.: 338  Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed. Action-at-a-distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, no medium is required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance.: 198  Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".: 338  Direct impact of macroscopic objects seems visually distinguishable from action at a distance. If however the objects are constructed of atoms, and the volume of those atoms is not defined and atoms interact by electric and magnetic forces, the distinction is less clear. == Roles == The concept of action at a distance acts in multiple roles in physics and it can co-exist with other models according to the needs of each physical problem. One role is as a summary of physical phenomena, independent of any understanding of the cause of such an action. For example, astronomical tables of planetary positions can be compactly summarized using Newton's law of universal gravitation, which assumes the planets interact without contact or an intervening medium. As a summary of data, the concept does not need to be evaluated as a plausible physical model. Action at a distance also acts as a model explaining physical phenomena even in the presence of other models. Again in the case of gravity, hypothesizing an instantaneous force between masses allows the return time of comets to be predicted as well as predicting the existence of previously unknown planets, like Neptune.: 210  These triumphs of physics predated the alternative more accurate model for gravity based on general relativity by many decades. 
Introductory physics textbooks treat central forces, like gravity, with models based on action-at-a-distance, without discussing the cause of such forces or the issues with it until the topics of relativity and fields are introduced. For example, see The Feynman Lectures on Physics on gravity. == History == === Early inquiries into motion === Action-at-a-distance as a physical concept requires identifying objects, distances, and their motion. In antiquity, ideas about the natural world were not organized in these terms. Objects in motion were modeled as living beings. Around 1600, the scientific method began to take root. René Descartes held a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations.: 132  Many experiments with electrical and magnetic materials led to new ideas about forces. These efforts set the stage for Newton's work on forces and gravity. === Newtonian gravity === In 1687 Isaac Newton published his Principia which combined his laws of motion with a new mathematical analysis able to reproduce Kepler's empirical results.: 134  His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to the product of their masses and inversely proportional to the square of the distance between them.: 28  Thus the motions of planets were predicted by assuming forces working over great distances. This mathematical expression of the force did not imply a cause. Newton considered action-at-a-distance to be an inadequate model for gravity; in his words, it was: so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it. Metaphysical scientists of the early 1700s strongly objected to the unexplained action-at-a-distance in Newton's theory. Gottfried Wilhelm Leibniz complained that the mechanism of gravity was "invisible, intangible, and not mechanical".: 339  Moreover, initial comparisons with astronomical data were not favorable. As mathematical techniques improved throughout the 1700s, the theory showed increasing success, predicting the date of the return of Halley's comet and aiding the discovery of the planet Neptune in 1846. These successes and the increasingly empirical focus of science towards the 19th century led to acceptance of Newton's theory of gravity despite distaste for action-at-a-distance. === Electrical action at a distance === Electrical and magnetic phenomena also began to be explored systematically in the early 1600s. In William Gilbert's early theory of "electric effluvia," a kind of electric atmosphere, he ruled out action-at-a-distance on the grounds that "no action can be performed by matter save by contact". However, subsequent experiments, especially those by Stephen Gray, showed electrical effects over distance. Gray developed an experiment called the "electric boy", demonstrating electric transfer without direct contact. 
Franz Aepinus was the first to show, in 1759, that a theory of action at a distance for electricity provides a simpler replacement for the electric effluvia theory.: 42  Despite this success, Aepinus himself considered the nature of the forces to be unexplained: he did "not approve of the doctrine which assumes the possibility of action at a distance", setting the stage for a shift to theories based on aether.: 549  By 1785 Charles-Augustin de Coulomb showed that two electric charges at rest experience a force inversely proportional to the square of the distance between them, a result now called Coulomb's law. The striking similarity to gravity strengthened the case for action at a distance, at least as a mathematical model. As mathematical methods improved, especially through the work of Pierre-Simon Laplace, Joseph-Louis Lagrange, and Siméon Denis Poisson, more sophisticated mathematical methods began to influence the thinking of scientists. The concept of potential energy applied to small test particles led to the concept of a scalar field, a mathematical model representing the forces throughout space. While this mathematical model is not a mechanical medium, the mental picture of such a field resembles a medium.: 197  === Fields as an alternative === Michael Faraday was the first who suggested that action at a distance was inadequate as an account of electric and magnetic forces, even in the form of a (mathematical) potential field.: 341  Faraday, an empirical experimentalist, cited three reasons in support of some medium transmitting electrical force: 1) electrostatic induction across an insulator depends on the nature of the insulator, 2) cutting a charged insulator causes opposite charges to appear on each half, and 3) electric discharge sparks are curved at an insulator. From these reasons he concluded that the particles of an insulator must be polarized, with each particle contributing to continuous action. He also experimented with magnets, demonstrating lines of force made visible by iron filings. However, in both cases his field-like model depends on particles that interact through an action-at-a-distance: his mechanical field-like model has no more fundamental physical cause than the long-range central field model.: 348  Faraday's observations, as well as others, led James Clerk Maxwell to a breakthrough formulation in 1865, a set of equations that combined electricity and magnetism, both static and dynamic, and which included electromagnetic radiation – light.: 253  Maxwell started with elaborate mechanical models but ultimately produced a purely mathematical treatment using dynamical vector fields. The sense that these fields must be set to vibrate to propagate light set off a search of a medium of propagation; the medium was called the luminiferous aether or the aether.: 279  In 1873 Maxwell addressed action at a distance explicitly. He reviews Faraday's lines of force, carefully pointing out that Faraday himself did not provide a mechanical model of these lines in terms of a medium. Nevertheless the many properties of these lines of force imply these "lines must not be regarded as mere mathematical abstractions". Faraday himself viewed these lines of force as a model, a "valuable aid" to the experimentalist, a means to suggest further experiments. In distinguishing between different kinds of action Faraday suggested three criteria: 1) do additional material objects alter the action?, 2) does the action take time, and 3) does it depend upon the receiving end? 
For electric action, Faraday knew that all three criteria were met, but gravity was thought to meet only the third. After Maxwell's time a fourth criterion, the transmission of energy, was added, thought to also apply to electricity but not gravity. With the advent of new theories of gravity, the modern account would give gravity all of the criteria except dependence on additional objects. === Fields fade into spacetime === The success of Maxwell's field equations led to numerous efforts in the later decades of the 19th century to represent electrical, magnetic, and gravitational fields, primarily with mechanical models.: 279  No model emerged that explained the existing phenomena; in particular, no good model emerged for stellar aberration, the shift in the apparent position of stars with the Earth's relative velocity. The best models required the aether to be stationary while the Earth moved, but experimental efforts to measure the effect of Earth's motion through the aether found no effect. In 1892 Hendrik Lorentz proposed a modified aether based on the emerging microscopic molecular model rather than the strictly macroscopic continuous theory of Maxwell.: 326  Lorentz investigated the mutual interaction of moving solitary electrons within a stationary aether.: 393  He rederived Maxwell's equations in this way but, critically, in the process he changed them to represent the wave in the coordinates of the moving electrons. He showed that the wave equations had the same form if they were transformed using a particular scaling factor, {\displaystyle \gamma ={\frac {1}{\sqrt {1-(u^{2}/c^{2})}}},} where u is the velocity of the moving electrons and c is the speed of light. Lorentz noted that if this factor were applied as a length contraction to moving matter in a stationary aether, it would eliminate any effect of motion through the aether, in agreement with experiment. In 1899, Henri Poincaré questioned the existence of an aether, showing that the principle of relativity prohibits the absolute motion assumed by proponents of the aether model. He named the transformation used by Lorentz the Lorentz transformation but interpreted it as a transformation between two inertial frames with relative velocity u. This transformation makes the electromagnetic equations look the same in every uniformly moving inertial frame. Then, in 1905, Albert Einstein demonstrated that the principle of relativity, applied to the simultaneity of time and the constant speed of light, precisely predicts the Lorentz transformation. This theory of special relativity quickly became the modern concept of spacetime. Thus the aether model, initially so very different from action at a distance, slowly changed to resemble simple empty space.: 393  In 1905, Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. However, until 1915 gravity stood apart as a force still described by action-at-a-distance. In that year, Einstein showed that a field theory of spacetime consistent with relativity, general relativity, can explain gravity. New effects resulting from this theory were dramatic for cosmology but minor for planetary motion and physics on Earth. 
Einstein himself noted Newton's "enormous practical success". == Modern action at a distance == In the early decades of the 20th century, Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker independently developed non-instantaneous models for action at a distance consistent with special relativity. In 1949 John Archibald Wheeler and Richard Feynman built on these models to develop a new field-free theory of electromagnetism. While Maxwell's field equations are generally successful, the Lorentz model of a moving electron interacting with the field encounters mathematical difficulties: the self-energy of the moving point charge within the field is infinite.: 187  The Wheeler–Feynman absorber theory of electromagnetism avoids the self-energy issue.: 213  They interpret the Abraham–Lorentz force, the apparent force resisting electron acceleration, as a real force returning from all the other existing charges in the universe. The Wheeler–Feynman theory has inspired new thinking about the arrow of time and about the nature of quantum non-locality. The theory has implications for cosmology; it has been extended to quantum mechanics. A similar approach has been applied to develop an alternative theory of gravity consistent with general relativity. John G. Cramer has extended the Wheeler–Feynman ideas to create the transactional interpretation of quantum mechanics. == "Spooky action at a distance" == Albert Einstein wrote to Max Born about issues in quantum mechanics in 1947 and used a phrase translated as "spooky action at a distance", and in 1964, John Stewart Bell proved that quantum mechanics predicted stronger statistical correlations in the outcomes of certain far-apart measurements than any local theory possibly could. The phrase has been picked up and used as a description for the cause of small non-classical correlations between physically separated measurements of entangled quantum states. The correlations are predicted by quantum mechanics (the Bell theorem) and verified by experiments (the Bell test). Rather than a postulate like Newton's gravitational force, this use of "action-at-a-distance" concerns observed correlations which cannot be explained with localized particle-based models. Describing these correlations as "action-at-a-distance" requires assuming that particles became entangled and then traveled to distant locations, an assumption that is not required by quantum mechanics. == Force in quantum field theory == Quantum field theory does not need action at a distance. At the most fundamental level, only four forces are needed. Each force is described as resulting from the exchange of specific bosons. Two are short range: the strong interaction mediated by gluons and the weak interaction mediated by the W and Z bosons; two are long range: electromagnetism mediated by the photon and gravity hypothesized to be mediated by the graviton.: 132  However, the entire concept of force is of secondary concern in advanced modern particle physics. Energy forms the basis of physical models and the word action has shifted away from implying a force to a specific technical meaning, an integral over the difference between kinetic and potential energy.: 173  == See also == Central force – Mechanical force towards or away from a point Principle of locality – Physical principle that only immediate surroundings can influence an object Quantum nonlocality – Deviations from local realism == References == == External links == This article incorporates text from a free content work. 
Licensed under CC-BY-SA. Text taken from Newton’s action at a distance – Different views​, Nicolae Sfetcu.
Wikipedia/Action_at_a_distance_(physics)
In magnetostatics, Ampère's force law describes the force of attraction or repulsion between two current-carrying wires. The physical origin of this force is that each wire generates a magnetic field, following the Biot–Savart law, and the other wire experiences a magnetic force as a consequence, following the Lorentz force law. == Equation == === Special case: Two straight parallel wires === The best-known and simplest example of Ampère's force law, which underlay (before 20 May 2019) the definition of the ampere, the SI unit of electric current, states that the magnetic force per unit length between two straight parallel conductors is F m L = 2 k A I 1 I 2 r , {\displaystyle {\frac {F_{m}}{L}}=2k_{\rm {A}}{\frac {I_{1}I_{2}}{r}},} where k A {\displaystyle k_{\rm {A}}} is the magnetic force constant from the Biot–Savart law, F m / L {\displaystyle F_{m}/L} is the total force on either wire per unit length of the shorter (the longer is approximated as infinitely long relative to the shorter), r {\displaystyle r} is the distance between the two wires, and I 1 {\displaystyle I_{1}} , I 2 {\displaystyle I_{2}} are the direct currents carried by the wires. This is a good approximation if one wire is sufficiently longer than the other, so that it can be approximated as infinitely long, and if the distance between the wires is small compared to their lengths (so that the one infinite-wire approximation holds), but large compared to their diameters (so that they may also be approximated as infinitely thin lines). The value of k A {\displaystyle k_{\rm {A}}} depends upon the system of units chosen, and it decides how large the unit of current will be. In the SI system, k A = d e f μ 0 4 π {\displaystyle k_{\rm {A}}\ {\overset {\underset {\mathrm {def} }{}}{=}}\ {\frac {\mu _{0}}{4\pi }}} which is in SI units 0.99999999987(16)×10−7 H/m. Here μ 0 {\displaystyle \mu _{0}} is the magnetic constant, which in SI units is approximately 4π × 10−7 H/m (about 1.25663706 × 10−6 H/m). === General case === The general formulation of the magnetic force for arbitrary geometries is based on iterated line integrals and combines the Biot–Savart law and Lorentz force in one equation as shown below. 
F 12 = μ 0 4 π ∫ L 1 ∫ L 2 I 1 d ℓ 1 × ( I 2 d ℓ 2 × r ^ 21 ) | r | 2 , {\displaystyle \mathbf {F} _{12}={\frac {\mu _{0}}{4\pi }}\int _{L_{1}}\int _{L_{2}}{\frac {I_{1}d{\boldsymbol {\ell }}_{1}\ \times \ (I_{2}d{\boldsymbol {\ell }}_{2}\ \times \ {\hat {\mathbf {r} }}_{21})}{|r|^{2}}},} where F 12 {\displaystyle \mathbf {F} _{12}} is the total magnetic force felt by wire 1 due to wire 2 (usually measured in newtons), I 1 {\displaystyle I_{1}} and I 2 {\displaystyle I_{2}} are the currents running through wires 1 and 2, respectively (usually measured in amperes), The double line integration sums the force upon each element of wire 1 due to the magnetic field of each element of wire 2, d ℓ 1 {\displaystyle d{\boldsymbol {\ell }}_{1}} and d ℓ 2 {\displaystyle d{\boldsymbol {\ell }}_{2}} are infinitesimal vectors associated with wire 1 and wire 2 respectively (usually measured in metres); see line integral for a detailed definition, The vector r ^ 21 {\displaystyle {\hat {\mathbf {r} }}_{21}} is the unit vector pointing from the differential element on wire 2 towards the differential element on wire 1, and |r| is the distance separating these elements, The multiplication × is a vector cross product, The sign of I n {\displaystyle I_{n}} is relative to the orientation d ℓ n {\displaystyle d{\boldsymbol {\ell }}_{n}} (for example, if d ℓ 1 {\displaystyle d{\boldsymbol {\ell }}_{1}} points in the direction of conventional current, then I 1 > 0 {\displaystyle I_{1}>0} ). To determine the force between wires in a material medium, the magnetic constant is replaced by the actual permeability of the medium. For the case of two separate closed wires, the law can be rewritten in the following equivalent way by expanding the vector triple product and applying Stokes' theorem: F 12 = − μ 0 4 π ∫ L 1 ∫ L 2 ( I 1 d ℓ 1 ⋅ I 2 d ℓ 2 ) r ^ 21 | r | 2 . {\displaystyle \mathbf {F} _{12}=-{\frac {\mu _{0}}{4\pi }}\int _{L_{1}}\int _{L_{2}}{\frac {(I_{1}d{\boldsymbol {\ell }}_{1}\ \mathbf {\cdot } \ I_{2}d{\boldsymbol {\ell }}_{2})\ {\hat {\mathbf {r} }}_{21}}{|r|^{2}}}.} In this form, it is immediately obvious that the force on wire 1 due to wire 2 is equal and opposite the force on wire 2 due to wire 1, in accordance with Newton's third law of motion. == Historical background == The form of Ampere's force law commonly given was derived by James Clerk Maxwell in 1873 and is one of several expressions consistent with the original experiments of André-Marie Ampère and Carl Friedrich Gauss. The x-component of the force between two linear currents I and I', as depicted in the adjacent diagram, was given by Ampère in 1825 and Gauss in 1833 as follows: d F x = k I I ′ d s ′ ∫ d s cos ⁡ ( x d s ) cos ⁡ ( r d s ′ ) − cos ⁡ ( r x ) cos ⁡ ( d s d s ′ ) r 2 . {\displaystyle dF_{x}=kII'ds'\int ds{\frac {\cos(xds)\cos(rds')-\cos(rx)\cos(dsds')}{r^{2}}}.} Following Ampère, a number of scientists, including Wilhelm Weber, Rudolf Clausius, Maxwell, Bernhard Riemann, Hermann Grassmann, and Walther Ritz, developed this expression to find a fundamental expression of the force. Through differentiation, it can be shown that: cos ⁡ ( x d s ) cos ⁡ ( r d s ′ ) r 2 = − cos ⁡ ( r x ) ( cos ⁡ ε − 3 cos ⁡ ϕ cos ⁡ ϕ ′ ) r 2 . {\displaystyle {\frac {\cos(x\,ds)\cos(r\,ds')}{r^{2}}}=-\cos(rx){\frac {(\cos \varepsilon -3\cos \phi \cos \phi ')}{r^{2}}}.} and also the identity: cos ⁡ ( r x ) cos ⁡ ( d s d s ′ ) r 2 = cos ⁡ ( r x ) cos ⁡ ε r 2 . 
{\displaystyle {\frac {\cos(rx)\cos(ds\,ds')}{r^{2}}}={\frac {\cos(rx)\cos \varepsilon }{r^{2}}}.} With these expressions, Ampère's force law can be expressed as: d F x = k I I ′ d s ′ ∫ d s ′ cos ⁡ ( r x ) 2 cos ⁡ ε − 3 cos ⁡ ϕ cos ⁡ ϕ ′ r 2 . {\displaystyle dF_{x}=kII'ds'\int ds'\cos(rx){\frac {2\cos \varepsilon -3\cos \phi \cos \phi '}{r^{2}}}.} Using the identities: ∂ r ∂ s = cos ⁡ ϕ , ∂ r ∂ s ′ = − cos ⁡ ϕ ′ . {\displaystyle {\frac {\partial r}{\partial s}}=\cos \phi ,{\frac {\partial r}{\partial s'}}=-\cos \phi '.} and ∂ 2 r ∂ s ∂ s ′ = − cos ⁡ ε + cos ⁡ ϕ cos ⁡ ϕ ′ r . {\displaystyle {\frac {\partial ^{2}r}{\partial s\partial s'}}={\frac {-\cos \varepsilon +\cos \phi \cos \phi '}{r}}.} Ampère's results can be expressed in the form: d 2 F = k I I ′ d s d s ′ r 2 ( ∂ r ∂ s ∂ r ∂ s ′ − 2 r ∂ 2 r ∂ s ∂ s ′ ) . {\displaystyle d^{2}F={\frac {kII'dsds'}{r^{2}}}\left({\frac {\partial r}{\partial s}}{\frac {\partial r}{\partial s'}}-2r{\frac {\partial ^{2}r}{\partial s\partial s'}}\right).} As Maxwell noted, terms can be added to this expression, which are derivatives of a function Q(r) and, when integrated, cancel each other out. Thus, Maxwell gave "the most general form consistent with the experimental facts" for the force on ds arising from the action of ds': d 2 F x = k I I ′ d s d s ′ 1 r 2 [ ( ( ∂ r ∂ s ∂ r ∂ s ′ − 2 r ∂ 2 r ∂ s ∂ s ′ ) + r ∂ 2 Q ∂ s ∂ s ′ ) cos ⁡ ( r x ) + ∂ Q ∂ s ′ cos ⁡ ( x d s ) − ∂ Q ∂ s cos ⁡ ( x d s ′ ) ] . {\displaystyle d^{2}F_{x}=kII'dsds'{\frac {1}{r^{2}}}\left[\left(\left({\frac {\partial r}{\partial s}}{\frac {\partial r}{\partial s'}}-2r{\frac {\partial ^{2}r}{\partial s\partial s'}}\right)+r{\frac {\partial ^{2}Q}{\partial s\partial s'}}\right)\cos(rx)+{\frac {\partial Q}{\partial s'}}\cos(x\,ds)-{\frac {\partial Q}{\partial s}}\cos(x\,ds')\right].} Q is a function of r, according to Maxwell, which "cannot be determined, without assumptions of some kind, from experiments in which the active current forms a closed circuit." Taking the function Q(r) to be of the form: Q = − ( 1 + k ) 2 r {\displaystyle Q=-{\frac {(1+k)}{2r}}} We obtain the general expression for the force exerted on ds by ds' : d 2 F = − k I I ′ 2 r 2 [ ( 3 − k ) r ^ 1 ( d s d s ′ ) − 3 ( 1 − k ) r ^ 1 ( r ^ 1 d s ) ( r ^ 1 d s ′ ) − ( 1 + k ) d s ( r ^ 1 d s ′ ) − ( 1 + k ) d s ′ ( r ^ 1 d s ) ] . {\displaystyle d^{2}\mathbf {F} =-{\frac {kII'}{2r^{2}}}\left[\left(3-k\right){\hat {\mathbf {r} }}_{1}\left(d\mathbf {s} \,d\mathbf {s} '\right)-3\left(1-k\right){\hat {\mathbf {r} }}_{1}\left(\mathbf {\hat {r}} _{1}d\mathbf {s} \right)\left(\mathbf {\hat {r}} _{1}d\mathbf {s} '\right)-\left(1+k\right)d\mathbf {s} \left(\mathbf {\hat {r}} _{1}d\mathbf {s} '\right)-\left(1+k\right)d\mathbf {s} '\left(\mathbf {\hat {r}} _{1}d\mathbf {s} \right)\right].} Integrating around s' eliminates k and the original expression given by Ampère and Gauss is obtained. Thus, as far as the original Ampère experiments are concerned, the value of k has no significance. Ampère took k=−1; Gauss took k=+1, as did Grassmann and Clausius, although Clausius omitted the S component. In the non-ethereal electron theories, Weber took k=−1 and Riemann took k=+1. Ritz left k undetermined in his theory. 
If we take k = −1, we obtain the Ampère expression: d 2 F = − k I I ′ r 3 [ 2 r ( d s d s ′ ) − 3 r ( r d s ) ( r d s ′ ) ] {\displaystyle d^{2}\mathbf {F} =-{\frac {kII'}{r^{3}}}\left[2\mathbf {r} (d\mathbf {s} \,d\mathbf {s'} )-3\mathbf {r} (\mathbf {r} d\mathbf {s} )(\mathbf {r} d\mathbf {s'} )\right]} If we take k=+1, we obtain d 2 F = − k I I ′ r 3 [ r ( d s d s ′ ) − d s ( r d s ′ ) − d s ′ ( r d s ) ] {\displaystyle d^{2}\mathbf {F} =-{\frac {kII'}{r^{3}}}\left[\mathbf {r} \left(d\mathbf {s} \,d\mathbf {s'} \right)-d\mathbf {s} \left(\mathbf {r} \,d\mathbf {s} '\right)-d\mathbf {s} '\left(\mathbf {r} \,d\mathbf {s} \right)\right]} Using the vector identity for the triple cross product, we may express this result as d 2 F = k I I ′ r 3 [ ( d s × d s ′ × r ) + d s ′ ( r d s ) ] {\displaystyle d^{2}\mathbf {F} ={\frac {kII'}{r^{3}}}\left[\left(d\mathbf {s} \times d\mathbf {s'} \times \mathbf {r} \right)+d\mathbf {s} '(\mathbf {r} \,d\mathbf {s} )\right]} When integrated around ds' the second term is zero, and thus we find the form of Ampère's force law given by Maxwell: F = k I I ′ ∬ d s × ( d s ′ × r ) | r | 3 {\displaystyle \mathbf {F} =kII'\iint {\frac {d\mathbf {s} \times (d\mathbf {s} '\times \mathbf {r} )}{|r|^{3}}}} == Derivation of parallel straight wire case from general formula == Start from the general formula: F 12 = μ 0 4 π ∫ L 1 ∫ L 2 I 1 d ℓ 1 × ( I 2 d ℓ 2 × r ^ 21 ) | r | 2 , {\displaystyle \mathbf {F} _{12}={\frac {\mu _{0}}{4\pi }}\int _{L_{1}}\int _{L_{2}}{\frac {I_{1}d{\boldsymbol {\ell }}_{1}\ \times \ (I_{2}d{\boldsymbol {\ell }}_{2}\ \times \ {\hat {\mathbf {r} }}_{21})}{|r|^{2}}},} Assume wire 2 is along the x-axis, and wire 1 is at y=D, z=0, parallel to the x-axis. Let x 1 , x 2 {\displaystyle x_{1},x_{2}} be the x-coordinate of the differential element of wire 1 and wire 2, respectively. In other words, the differential element of wire 1 is at ( x 1 , D , 0 ) {\displaystyle (x_{1},D,0)} and the differential element of wire 2 is at ( x 2 , 0 , 0 ) {\displaystyle (x_{2},0,0)} . By properties of line integrals, d ℓ 1 = ( d x 1 , 0 , 0 ) {\displaystyle d{\boldsymbol {\ell }}_{1}=(dx_{1},0,0)} and d ℓ 2 = ( d x 2 , 0 , 0 ) {\displaystyle d{\boldsymbol {\ell }}_{2}=(dx_{2},0,0)} . Also, r ^ 21 = 1 ( x 1 − x 2 ) 2 + D 2 ( x 1 − x 2 , D , 0 ) {\displaystyle {\hat {\mathbf {r} }}_{21}={\frac {1}{\sqrt {(x_{1}-x_{2})^{2}+D^{2}}}}(x_{1}-x_{2},D,0)} and | r | = ( x 1 − x 2 ) 2 + D 2 {\displaystyle |r|={\sqrt {(x_{1}-x_{2})^{2}+D^{2}}}} Therefore, the integral is F 12 = μ 0 I 1 I 2 4 π ∫ L 1 ∫ L 2 ( d x 1 , 0 , 0 ) × [ ( d x 2 , 0 , 0 ) × ( x 1 − x 2 , D , 0 ) ] | ( x 1 − x 2 ) 2 + D 2 | 3 / 2 . {\displaystyle \mathbf {F} _{12}={\frac {\mu _{0}I_{1}I_{2}}{4\pi }}\int _{L_{1}}\int _{L_{2}}{\frac {(dx_{1},0,0)\ \times \ \left[(dx_{2},0,0)\ \times \ (x_{1}-x_{2},D,0)\right]}{|(x_{1}-x_{2})^{2}+D^{2}|^{3/2}}}.} Evaluating the cross-product: F 12 = μ 0 I 1 I 2 4 π ∫ L 1 ∫ L 2 d x 1 d x 2 ( 0 , − D , 0 ) | ( x 1 − x 2 ) 2 + D 2 | 3 / 2 . {\displaystyle \mathbf {F} _{12}={\frac {\mu _{0}I_{1}I_{2}}{4\pi }}\int _{L_{1}}\int _{L_{2}}dx_{1}dx_{2}{\frac {(0,-D,0)}{|(x_{1}-x_{2})^{2}+D^{2}|^{3/2}}}.} Next, we integrate x 2 {\displaystyle x_{2}} from − ∞ {\displaystyle -\infty } to + ∞ {\displaystyle +\infty } : F 12 = μ 0 I 1 I 2 4 π 2 D ( 0 , − 1 , 0 ) ∫ L 1 d x 1 . 
{\displaystyle \mathbf {F} _{12}={\frac {\mu _{0}I_{1}I_{2}}{4\pi }}{\frac {2}{D}}(0,-1,0)\int _{L_{1}}dx_{1}.} If wire 1 is also infinite, the integral diverges, because the total attractive force between two infinite parallel wires is infinity. In fact, what we really want to know is the attractive force per unit length of wire 1. Therefore, assume wire 1 has a large but finite length L 1 {\displaystyle L_{1}} . Then the force vector felt by wire 1 is: F 12 = μ 0 I 1 I 2 4 π 2 D ( 0 , − 1 , 0 ) L 1 . {\displaystyle \mathbf {F} _{12}={\frac {\mu _{0}I_{1}I_{2}}{4\pi }}{\frac {2}{D}}(0,-1,0)L_{1}.} As expected, the force that the wire feels is proportional to its length. The force per unit length is: F 12 L 1 = μ 0 I 1 I 2 2 π D ( 0 , − 1 , 0 ) . {\displaystyle {\frac {\mathbf {F} _{12}}{L_{1}}}={\frac {\mu _{0}I_{1}I_{2}}{2\pi D}}(0,-1,0).} The direction of the force is along the y-axis, representing wire 1 getting pulled towards wire 2 if the currents are parallel, as expected. The magnitude of the force per unit length agrees with the expression for F m L {\displaystyle {\frac {F_{m}}{L}}} shown above. == Notable derivations == Chronologically ordered: Ampère's original 1823 derivation: Assis, André Koch Torres; Chaib, J. P. M. C; Ampère, André-Marie (2015). Ampère's electrodynamics: analysis of the meaning and evolution of Ampère's force between current elements, together with a complete translation of his masterpiece: Theory of electrodynamic phenomena, uniquely deduced from experience (PDF). Montreal: Apeiron. ISBN 978-1-987980-03-5. Maxwell's 1873 derivation: Treatise on Electricity and Magnetism vol. 2, part 4, ch. 2 (§§502–527) Pierre Duhem's 1892 derivation: Duhem, Pierre Maurice Marie (9 September 2018). Ampère's Force Law: A Modern Introduction. Alan Aversa (trans.). doi:10.13140/RG.2.2.31100.03206/1. Retrieved 3 July 2019. (EPUB) translation of: Leçons sur l'électricité et le magnétisme vol. 3, appendix to book 14, pp. 309-332 (in French) Alfred O'Rahilly's 1938 derivation: Electromagnetic Theory: A Critical Examination of Fundamentals vol. 1, pp. 102–104 (cf. the following pages, too) == See also == Ampere Magnetic constant Lorentz force Ampère's circuital law Free space == References and notes == == External links == Ampère's force law Includes animated graphic of the force vectors.
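The parallel-wire result derived just above (force per unit length μ0 I1 I2 / 2πD, directed toward the other wire for parallel currents) lends itself to a quick numerical check of the double line integral. The following is a minimal sketch, not part of the original article; the truncation length of wire 2, the grid size, and the 1 A / 1 m test values are assumptions made only for illustration.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, N/A^2

def force_per_length_numeric(I1, I2, D, L2=200.0, n=200_000):
    """Brute-force the double integral for a 1 m element of wire 1 at y = D,
    with wire 2 on the x-axis truncated to length L2 (approximating infinity)."""
    x2 = np.linspace(-L2 / 2, L2 / 2, n)
    dx2 = x2[1] - x2[0]
    x1 = 0.0  # with wire 2 long, every element of wire 1 feels the same force
    r_vec = np.stack([x1 - x2, np.full_like(x2, D), np.zeros_like(x2)], axis=1)
    r = np.linalg.norm(r_vec, axis=1)
    r_hat = r_vec / r[:, None]
    dl2 = np.array([1.0, 0.0, 0.0]) * dx2   # element of wire 2
    dl1 = np.array([1.0, 0.0, 0.0])         # unit-length element of wire 1
    # dF = (mu0 / 4 pi) * I1 dl1 x (I2 dl2 x r_hat) / r^2, summed over wire 2
    inner = np.cross(I2 * dl2, r_hat)                               # shape (n, 3)
    dF = MU0 / (4 * np.pi) * np.cross(I1 * dl1, inner) / r[:, None] ** 2
    return dF.sum(axis=0)                    # force per metre of wire 1, in N/m

I1 = I2 = 1.0   # amperes
D = 1.0         # metres
numeric = force_per_length_numeric(I1, I2, D)
analytic = MU0 * I1 * I2 / (2 * np.pi * D)
print(numeric)    # roughly [0, -2e-7, 0]: attraction toward wire 2
print(analytic)   # 2e-7 N/m

For unit currents one metre apart the script returns roughly 2 × 10⁻⁷ N/m of attraction along −y, matching the closed-form expression obtained above.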
Wikipedia/Ampère's_force_law
Science and Hypothesis (French: La Science et l'Hypothèse) is a book by French mathematician Henri Poincaré, first published in 1902. Aimed at a non-specialist readership, it deals with mathematics, space, physics and nature. It puts forward the theses that absolute truth in science is unattainable, and that many commonly held beliefs of scientists are held as convenient conventions rather than because they are more valid than the alternatives. In this book, Poincaré describes open scientific questions regarding the photo-electric effect, Brownian motion, and the relativity of physical laws in space. Reading this book inspired Albert Einstein's subsequent Annus Mirabilis papers published in 1905. A new translation was published in November 2017. == References ==
Wikipedia/La_Science_et_l'Hypothèse
In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was developed by Arnold Sommerfeld after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski.: 22  The tensor allows related physical laws to be written concisely, and allows for the quantization of the electromagnetic field by the Lagrangian formulation described below. == Definition == The electromagnetic tensor, conventionally labelled F, is defined as the exterior derivative of the electromagnetic four-potential, A, a differential 1-form: F = d e f d A . {\displaystyle F\ {\stackrel {\mathrm {def} }{=}}\ \mathrm {d} A.} Therefore, F is a differential 2-form— an antisymmetric rank-2 tensor field—on Minkowski space. In component form, F μ ν = ∂ μ A ν − ∂ ν A μ . {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }.} where ∂ {\displaystyle \partial } is the four-gradient and A {\displaystyle A} is the four-potential. SI units for Maxwell's equations and the particle physicist's sign convention for the signature of Minkowski space (+ − − −), will be used throughout this article. === Relationship with the classical fields === The Faraday differential 2-form is given by F = ( E x / c ) d x ∧ d t + ( E y / c ) d y ∧ d t + ( E z / c ) d z ∧ d t + B x d y ∧ d z + B y d z ∧ d x + B z d x ∧ d y , {\displaystyle F=(E_{x}/c)\ dx\wedge dt+(E_{y}/c)\ dy\wedge dt+(E_{z}/c)\ dz\wedge dt+B_{x}\ dy\wedge dz+B_{y}\ dz\wedge dx+B_{z}\ dx\wedge dy,} where d t {\displaystyle dt} is the time element times the speed of light c {\displaystyle c} . This is the exterior derivative of its 1-form antiderivative A = A x d x + A y d y + A z d z − ( ϕ / c ) d t {\displaystyle A=A_{x}\ dx+A_{y}\ dy+A_{z}\ dz-(\phi /c)\ dt} , where ϕ ( x → , t ) {\displaystyle \phi ({\vec {x}},t)} has − ∇ → ϕ = E → {\displaystyle -{\vec {\nabla }}\phi ={\vec {E}}} ( ϕ {\displaystyle \phi } is a scalar potential for the irrotational/conservative vector field E → {\displaystyle {\vec {E}}} ) and A → ( x → , t ) {\displaystyle {\vec {A}}({\vec {x}},t)} has ∇ → × A → = B → {\displaystyle {\vec {\nabla }}\times {\vec {A}}={\vec {B}}} ( A → {\displaystyle {\vec {A}}} is a vector potential for the solenoidal vector field B → {\displaystyle {\vec {B}}} ). Note that { d F = 0 ⋆ d ⋆ F = J {\displaystyle {\begin{cases}dF=0\\{\star }d{\star }F=J\end{cases}}} where d {\displaystyle d} is the exterior derivative, ⋆ {\displaystyle {\star }} is the Hodge star, J = − J x d x − J y d y − J z d z + ρ d t {\displaystyle J=-J_{x}\ dx-J_{y}\ dy-J_{z}\ dz+\rho \ dt} (where J → {\displaystyle {\vec {J}}} is the electric current density, and ρ {\displaystyle \rho } is the electric charge density) is the 4-current density 1-form, is the differential forms version of Maxwell's equations. The electric and magnetic fields can be obtained from the components of the electromagnetic tensor. The relationship is simplest in Cartesian coordinates: E i = c F 0 i , {\displaystyle E_{i}=cF_{0i},} where c is the speed of light, and B i = − 1 / 2 ϵ i j k F j k , {\displaystyle B_{i}=-1/2\epsilon _{ijk}F^{jk},} where ϵ i j k {\displaystyle \epsilon _{ijk}} is the Levi-Civita tensor. 
This gives the fields in a particular reference frame; if the reference frame is changed, the components of the electromagnetic tensor will transform covariantly, and the fields in the new frame will be given by the new components. In contravariant matrix form with metric signature (+,-,-,-), F μ ν = [ 0 − E x / c − E y / c − E z / c E x / c 0 − B z B y E y / c B z 0 − B x E z / c − B y B x 0 ] . {\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The covariant form is given by index lowering, F μ ν = η α ν F β α η μ β = [ 0 E x / c E y / c E z / c − E x / c 0 − B z B y − E y / c B z 0 − B x − E z / c − B y B x 0 ] . {\displaystyle F_{\mu \nu }=\eta _{\alpha \nu }F^{\beta \alpha }\eta _{\mu \beta }={\begin{bmatrix}0&E_{x}/c&E_{y}/c&E_{z}/c\\-E_{x}/c&0&-B_{z}&B_{y}\\-E_{y}/c&B_{z}&0&-B_{x}\\-E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The Faraday tensor's Hodge dual is G α β = 1 2 ϵ α β γ δ F γ δ = [ 0 − B x − B y − B z B x 0 E z / c − E y / c B y − E z / c 0 E x / c B z E y / c − E x / c 0 ] {\displaystyle {G^{\alpha \beta }={\frac {1}{2}}\epsilon ^{\alpha \beta \gamma \delta }F_{\gamma \delta }={\begin{bmatrix}0&-B_{x}&-B_{y}&-B_{z}\\B_{x}&0&E_{z}/c&-E_{y}/c\\B_{y}&-E_{z}/c&0&E_{x}/c\\B_{z}&E_{y}/c&-E_{x}/c&0\end{bmatrix}}}} From now on in this article, when the electric or magnetic fields are mentioned, a Cartesian coordinate system is assumed, and the electric and magnetic fields are with respect to the coordinate system's reference frame, as in the equations above. === Properties === The matrix form of the field tensor yields the following properties: Antisymmetry: F μ ν = − F ν μ {\displaystyle F^{\mu \nu }=-F^{\nu \mu }} Six independent components: In Cartesian coordinates, these are simply the three spatial components of the electric field (Ex, Ey, Ez) and magnetic field (Bx, By, Bz). Inner product: If one forms an inner product of the field strength tensor a Lorentz invariant is formed F μ ν F μ ν = 2 ( B 2 − E 2 c 2 ) {\displaystyle F_{\mu \nu }F^{\mu \nu }=2\left(B^{2}-{\frac {E^{2}}{c^{2}}}\right)} meaning this number does not change from one frame of reference to another. Pseudoscalar invariant: The product of the tensor F μ ν {\displaystyle F^{\mu \nu }} with its Hodge dual G μ ν {\displaystyle G^{\mu \nu }} gives a Lorentz invariant: G γ δ F γ δ = 1 2 ϵ α β γ δ F α β F γ δ = − 4 c B ⋅ E {\displaystyle G_{\gamma \delta }F^{\gamma \delta }={\frac {1}{2}}\epsilon _{\alpha \beta \gamma \delta }F^{\alpha \beta }F^{\gamma \delta }=-{\frac {4}{c}}\mathbf {B} \cdot \mathbf {E} \,} where ϵ α β γ δ {\displaystyle \epsilon _{\alpha \beta \gamma \delta }} is the rank-4 Levi-Civita symbol. The sign for the above depends on the convention used for the Levi-Civita symbol. The convention used here is ϵ 0123 = − 1 {\displaystyle \epsilon _{0123}=-1} . Determinant: det ( F ) = 1 c 2 ( B ⋅ E ) 2 {\displaystyle \det \left(F\right)={\frac {1}{c^{2}}}\left(\mathbf {B} \cdot \mathbf {E} \right)^{2}} which is proportional to the square of the above invariant. Trace: F = F μ μ = 0 {\displaystyle F={{F}^{\mu }}_{\mu }=0} which is equal to zero. === Significance === This tensor simplifies and reduces Maxwell's equations as four vector calculus equations into two tensor field equations. 
In electrostatics and electrodynamics, Gauss's law and Ampère's circuital law are respectively: ∇ ⋅ E = ρ ϵ 0 , ∇ × B − 1 c 2 ∂ E ∂ t = μ 0 J {\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho }{\epsilon _{0}}},\quad \nabla \times \mathbf {B} -{\frac {1}{c^{2}}}{\frac {\partial \mathbf {E} }{\partial t}}=\mu _{0}\mathbf {J} } and reduce to the inhomogeneous Maxwell equation: ∂ α F β α = − μ 0 J β {\displaystyle \partial _{\alpha }F^{\beta \alpha }=-\mu _{0}J^{\beta }} , where J α = ( c ρ , J ) {\displaystyle J^{\alpha }=(c\rho ,\mathbf {J} )} is the four-current. In magnetostatics and magnetodynamics, Gauss's law for magnetism and Maxwell–Faraday equation are respectively: ∇ ⋅ B = 0 , ∂ B ∂ t + ∇ × E = 0 {\displaystyle \nabla \cdot \mathbf {B} =0,\quad {\frac {\partial \mathbf {B} }{\partial t}}+\nabla \times \mathbf {E} =\mathbf {0} } which reduce to the Bianchi identity: ∂ γ F α β + ∂ α F β γ + ∂ β F γ α = 0 {\displaystyle \partial _{\gamma }F_{\alpha \beta }+\partial _{\alpha }F_{\beta \gamma }+\partial _{\beta }F_{\gamma \alpha }=0} or using the index notation with square brackets[note 1] for the antisymmetric part of the tensor: ∂ [ α F β γ ] = 0 {\displaystyle \partial _{[\alpha }F_{\beta \gamma ]}=0} Using the expression relating the Faraday tensor to the four-potential, one can prove that the above antisymmetric quantity turns to zero identically ( ≡ 0 {\displaystyle \equiv 0} ). This tensor equation reproduces the homogeneous Maxwell's equations. == Relativity == The field tensor derives its name from the fact that the electromagnetic field is found to obey the tensor transformation law, this general property of physical laws being recognised after the advent of special relativity. This theory stipulated that all the laws of physics should take the same form in all coordinate systems – this led to the introduction of tensors. The tensor formalism also leads to a mathematically simpler presentation of physical laws. The inhomogeneous Maxwell equation leads to the continuity equation: ∂ α J α = J α , α = 0 {\displaystyle \partial _{\alpha }J^{\alpha }=J^{\alpha }{}_{,\alpha }=0} implying conservation of charge. Maxwell's laws above can be generalised to curved spacetime by simply replacing partial derivatives with covariant derivatives: F [ α β ; γ ] = 0 {\displaystyle F_{[\alpha \beta ;\gamma ]}=0} and F α β ; α = μ 0 J β {\displaystyle F^{\alpha \beta }{}_{;\alpha }=\mu _{0}J^{\beta }} where the semicolon notation represents a covariant derivative, as opposed to a partial derivative. These equations are sometimes referred to as the curved space Maxwell equations. Again, the second equation implies charge conservation (in curved spacetime): J α ; α = 0 {\displaystyle J^{\alpha }{}_{;\alpha }\,=0} The stress-energy tensor of electromagnetism T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] , {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,,} satisfies T α β , β + F α β J β = 0 . 
{\displaystyle {T^{\alpha \beta }}_{,\beta }+F^{\alpha \beta }J_{\beta }=0\,.} == Lagrangian formulation of classical electromagnetism == Classical electromagnetism and Maxwell's equations can be derived from the action: S = ∫ ( − 1 4 μ 0 F μ ν F μ ν − J μ A μ ) d 4 x {\displaystyle {\mathcal {S}}=\int \left(-{\begin{matrix}{\frac {1}{4\mu _{0}}}\end{matrix}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\right)\mathrm {d} ^{4}x\,} where d 4 x {\displaystyle \mathrm {d} ^{4}x} is over space and time. This means the Lagrangian density is L = − 1 4 μ 0 F μ ν F μ ν − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν − ∂ ν A μ ) ( ∂ μ A ν − ∂ ν A μ ) − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν − ∂ μ A ν ∂ ν A μ + ∂ ν A μ ∂ ν A μ ) − J μ A μ {\displaystyle {\begin{aligned}{\mathcal {L}}&=-{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\right)\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }-\partial _{\mu }A_{\nu }\partial ^{\nu }A^{\mu }+\partial _{\nu }A_{\mu }\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\\end{aligned}}} The two middle terms in the parentheses are the same, as are the two outer terms, so the Lagrangian density is L = − 1 2 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν ) − J μ A μ . {\displaystyle {\mathcal {L}}=-{\frac {1}{2\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }\right)-J^{\mu }A_{\mu }.} Substituting this into the Euler–Lagrange equation of motion for a field: ∂ μ ( ∂ L ∂ ( ∂ μ A ν ) ) − ∂ L ∂ A ν = 0 {\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }A_{\nu })}}\right)-{\frac {\partial {\mathcal {L}}}{\partial A_{\nu }}}=0} So the Euler–Lagrange equation becomes: − ∂ μ 1 μ 0 ( ∂ μ A ν − ∂ ν A μ ) + J ν = 0. {\displaystyle -\partial _{\mu }{\frac {1}{\mu _{0}}}\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)+J^{\nu }=0.\,} The quantity in parentheses above is just the field tensor, so this finally simplifies to ∂ μ F μ ν = μ 0 J ν {\displaystyle \partial _{\mu }F^{\mu \nu }=\mu _{0}J^{\nu }} That equation is another way of writing the two inhomogeneous Maxwell's equations (namely, Gauss's law and Ampère's circuital law) using the substitutions: 1 c E i = − F 0 i ϵ i j k B k = − F i j {\displaystyle {\begin{aligned}{\frac {1}{c}}E^{i}&=-F^{0i}\\\epsilon ^{ijk}B_{k}&=-F^{ij}\end{aligned}}} where i, j, k take the values 1, 2, and 3. === Hamiltonian form === The Hamiltonian density can be obtained with the usual relation, H ( ϕ i , π i ) = π i ϕ ˙ i ( ϕ i , π i ) − L . {\displaystyle {\mathcal {H}}(\phi ^{i},\pi _{i})=\pi _{i}{\dot {\phi }}^{i}(\phi ^{i},\pi _{i})-{\mathcal {L}}\,.} Here ϕ i = A i {\displaystyle \phi ^{i}=A^{i}} are the fields and the momentum density of the EM field is π i = T 0 i = 1 μ 0 F 0 α F i α = 1 μ 0 c E × B . {\displaystyle \pi _{i}=T_{0i}={\frac {1}{\mu _{0}}}F_{0}{}^{\alpha }F_{i\alpha }={\frac {1}{\mu _{0}c}}\mathbf {E} \times \mathbf {B} \,.} such that the conserved quantity associated with translation from Noether's theorem is the total momentum P = ∑ α m α x ˙ α + 1 μ 0 c ∫ V d 3 x E × B . 
{\displaystyle \mathbf {P} =\sum _{\alpha }m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {1}{\mu _{0}c}}\int _{\mathcal {V}}\mathrm {d} ^{3}x\,\mathbf {E} \times \mathbf {B} \,.} The Hamiltonian density for the electromagnetic field is related to the electromagnetic stress-energy tensor T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] . {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,.} as H = T 00 = 1 2 ( ϵ 0 E 2 + 1 μ 0 B 2 ) = 1 8 π ( E 2 + B 2 ) . {\displaystyle {\mathcal {H}}=T_{00}={\frac {1}{2}}\left(\epsilon _{0}\mathbf {E} ^{2}+{\frac {1}{\mu _{0}}}\mathbf {B} ^{2}\right)={\frac {1}{8\pi }}\left(\mathbf {E} ^{2}+\mathbf {B} ^{2}\right)\,.} where we have neglected the energy density of matter, assuming only the EM field, and the last equality assumes the CGS system. The momentum of nonrelativistic charges interarcting with the EM field in the Coulomb gauge ( ∇ ⋅ A = ∇ i A i = 0 {\displaystyle \nabla \cdot \mathbf {A} =\nabla _{i}A^{i}=0} ) is p α = m α x ˙ α + q α c A ( x α ) . {\displaystyle \mathbf {p} _{\alpha }=m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\,.} The total Hamiltonian of the matter + EM field system is H = ∫ V d 3 x T 00 = H m a t + H e m . {\displaystyle H=\int _{\mathcal {V}}d^{3}x\,T_{00}=H_{\rm {mat}}+H_{\rm {em}}\,.} where for nonrelativistic point particles in the Coulomb gauge H m a t = ∑ α m α | x ˙ α | 2 + ∑ α < β q α q β | x α − x β | = ∑ α 1 2 m α [ p α − q α c A ( x α ) ] 2 + ∑ α < β q α q β | x α − x β | . {\displaystyle H_{\rm {mat}}=\sum _{\alpha }m_{\alpha }|{\dot {\mathbf {x} }}_{\alpha }|^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}=\sum _{\alpha }{\frac {1}{2m_{\alpha }}}\left[\mathbf {p} _{\alpha }-{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\right]^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}\,.} where the last term is identically 1 8 π ∫ V d 3 x E ∥ 2 {\displaystyle {\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\mathbf {E} _{\parallel }^{2}} where E ∥ i = ∇ i A 0 {\displaystyle {E}_{\parallel i}={\nabla _{i}}A_{0}} and H e m = 1 8 π ∫ V d 3 x ( E ⊥ 2 + B 2 ) . {\displaystyle H_{\rm {em}}={\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\left(\mathbf {E} _{\perp }^{2}+\mathbf {B} ^{2}\right)\,.} where and E ⊥ i = − 1 c ∂ 0 A i {\displaystyle {E}_{\perp i}=-{\frac {1}{c}}\partial _{0}A_{i}} . === Quantum electrodynamics and field theory === The Lagrangian of quantum electrodynamics extends beyond the classical Lagrangian established in relativity to incorporate the creation and annihilation of photons (and electrons): L = ψ ¯ ( i ℏ c γ α D α − m c 2 ) ψ − 1 4 μ 0 F α β F α β , {\displaystyle {\mathcal {L}}={\bar {\psi }}\left(i\hbar c\,\gamma ^{\alpha }D_{\alpha }-mc^{2}\right)\psi -{\frac {1}{4\mu _{0}}}F_{\alpha \beta }F^{\alpha \beta },} where the first part in the right hand side, containing the Dirac spinor ψ {\displaystyle \psi } , represents the Dirac field. In quantum field theory it is used as the template for the gauge field strength tensor. By being employed in addition to the local interaction Lagrangian it reprises its usual role in QED. 
== See also == Classification of electromagnetic fields Covariant formulation of classical electromagnetism Electromagnetic stress–energy tensor Gluon field strength tensor Ricci calculus Riemann–Silberstein vector == Notes == == References == Brau, Charles A. (2004). Modern Problems in Classical Electrodynamics. Oxford University Press. ISBN 0-19-514665-4. Jackson, John D. (1999). Classical Electrodynamics. John Wiley & Sons, Inc. ISBN 0-471-30932-X. Peskin, Michael E.; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory. Perseus Publishing. ISBN 0-201-50397-2.
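As a consistency check on the matrix form and the invariants listed in the Properties section above, the following minimal sketch (not part of the original article; the sample field values and helper names are invented for illustration) builds F^{μν} and its Hodge dual for arbitrary E and B and verifies numerically that F_{μν}F^{μν} = 2(B² − E²/c²) and G_{γδ}F^{γδ} = −4 B·E / c.

import numpy as np

c = 299_792_458.0                        # speed of light, m/s
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+, -, -, -)

# Hypothetical test fields (V/m and T); any values will do for the check.
E = np.array([1.0e3, -2.0e3, 0.5e3])
B = np.array([0.2, 0.0, -0.1])

def faraday_upper(E, B):
    """Contravariant F^{mu nu} in the matrix form quoted above."""
    Ex, Ey, Ez = E / c
    Bx, By, Bz = B
    return np.array([
        [0.0, -Ex, -Ey, -Ez],
        [ Ex, 0.0, -Bz,  By],
        [ Ey,  Bz, 0.0, -Bx],
        [ Ez, -By,  Bx, 0.0],
    ])

def faraday_dual_upper(E, B):
    """Hodge dual G^{alpha beta} in the matrix form quoted above."""
    Ex, Ey, Ez = E / c
    Bx, By, Bz = B
    return np.array([
        [0.0, -Bx, -By, -Bz],
        [ Bx, 0.0,  Ez, -Ey],
        [ By, -Ez, 0.0,  Ex],
        [ Bz,  Ey, -Ex, 0.0],
    ])

F_up = faraday_upper(E, B)
F_down = eta @ F_up @ eta        # index lowering F_{mu nu} = eta F eta
G_up = faraday_dual_upper(E, B)
G_down = eta @ G_up @ eta

assert np.allclose(F_up, -F_up.T)                       # antisymmetry

invariant = np.einsum('mn,mn->', F_down, F_up)
print(invariant, 2 * (B @ B - (E @ E) / c**2))          # the two numbers agree

pseudo = np.einsum('mn,mn->', G_down, F_up)
print(pseudo, -4.0 / c * (B @ E))                       # agrees with -4/c B.E

The lowering step uses the same (+, −, −, −) signature assumed throughout the article, and the check contracts only the matrices quoted explicitly above.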
Wikipedia/Faraday_tensor
Sinusoidal plane-wave solutions are particular solutions to the wave equation. The general solution of the electromagnetic wave equation in homogeneous, linear, time-independent media can be written as a linear superposition of plane-waves of different frequencies and polarizations. The treatment in this article is classical but, because of the generality of Maxwell's equations for electrodynamics, the treatment can be converted into the quantum mechanical treatment with only a reinterpretation of classical quantities (aside from the quantum mechanical treatment needed for charge and current densities). The reinterpretation is based on the theories of Max Planck and the interpretations by Albert Einstein of those theories and of other experiments. The quantum generalization of the classical treatment can be found in the articles on photon polarization and photon dynamics in the double-slit experiment. == Explanation == Experimentally, every light signal can be decomposed into a spectrum of frequencies and wavelengths associated with sinusoidal solutions of the wave equation. Polarizing filters can be used to decompose light into its various polarization components. The polarization components can be linear, circular or elliptical. == Plane waves == The plane sinusoidal solution for an electromagnetic wave traveling in the z direction is E ( r , t ) = ( E 0 , x cos ⁡ ( k z − ω t + α x ) E 0 , y cos ⁡ ( k z − ω t + α y ) 0 ) = E 0 , x cos ⁡ ( k z − ω t + α x ) x ^ + E 0 , y cos ⁡ ( k z − ω t + α y ) y ^ {\displaystyle {\begin{aligned}\mathbf {E} (\mathbf {r} ,t)&={\begin{pmatrix}E_{0,x}\cos \left(kz-\omega t+\alpha _{x}\right)\\E_{0,y}\cos \left(kz-\omega t+\alpha _{y}\right)\\0\end{pmatrix}}\\[1ex]&=E_{0,x}\cos \left(kz-\omega t+\alpha _{x}\right)\,{\hat {\mathbf {x} }}\;+\;E_{0,y}\cos \left(kz-\omega t+\alpha _{y}\right)\,{\hat {\mathbf {y} }}\end{aligned}}} for the electric field and c B ( r , t ) = z ^ × E ( r , t ) = ( − E 0 , y cos ⁡ ( k z − ω t + α y ) − E 0 , x cos ⁡ ( k z − ω t + α x ) 0 ) = − E 0 , y cos ⁡ ( k z − ω t + α y ) x ^ + E 0 , x cos ⁡ ( k z − ω t + α x ) y ^ {\displaystyle {\begin{aligned}c\,\mathbf {B} (\mathbf {r} ,t)&={\hat {\mathbf {z} }}\times \mathbf {E} (\mathbf {r} ,t)\\[1ex]&={\begin{pmatrix}-E_{0,y}\cos \left(kz-\omega t+\alpha _{y}\right)\\{\hphantom {-}}E_{0,x}\cos \left(kz-\omega t+\alpha _{x}\right)\\0\end{pmatrix}}\\[1ex]&=-E_{0,y}\cos \left(kz-\omega t+\alpha _{y}\right){\hat {\mathbf {x} }}\;+\;E_{0,x}\cos \left(kz-\omega t+\alpha _{x}\right){\hat {\mathbf {y} }}\end{aligned}}} for the magnetic field, where k is the wavenumber, ω = c k {\displaystyle \omega =ck} ω {\displaystyle \omega } is the angular frequency of the wave, and c {\displaystyle c} is the speed of light. The hats on the vectors indicate unit vectors in the x, y, and z directions. r = (x, y, z) is the position vector (in meters). The plane wave is parameterized by the amplitudes E 0 , x = | E | cos ⁡ θ E 0 , y = | E | sin ⁡ θ {\displaystyle {\begin{aligned}E_{0,x}&=\left|\mathbf {E} \right|\cos \theta \\[1.56ex]E_{0,y}&=\left|\mathbf {E} \right|\sin \theta \end{aligned}}} and phases α x , α y {\displaystyle \alpha _{x},\alpha _{y}} where θ = d e f tan − 1 ⁡ ( E 0 , y E 0 , x ) . {\displaystyle \theta \ {\stackrel {\mathrm {def} }{=}}\ \tan ^{-1}\left({\frac {E_{0,y}}{E_{0,x}}}\right).} and | E | 2 = d e f ( E 0 , x ) 2 + ( E 0 , y ) 2 . 
{\displaystyle \left|\mathbf {E} \right|^{2}\ {\stackrel {\mathrm {def} }{=}}\ \left(E_{0,x}\right)^{2}+\left(E_{0,y}\right)^{2}.} == Polarization state vector == === Jones vector === All the polarization information can be reduced to a single vector, called the Jones vector, in the x-y plane. This vector, while arising from a purely classical treatment of polarization, can be interpreted as a quantum state vector. The connection with quantum mechanics is made in the article on photon polarization. The vector emerges from the plane-wave solution. The electric field solution can be rewritten in complex notation as E ( r , t ) = | E | R e ⁡ [ | ψ ⟩ e i ( k z − ω t ) ] {\displaystyle \mathbf {E} (\mathbf {r} ,t)=|\mathbf {E} |\,\operatorname {\mathcal {R_{e}}} \left[|\psi \rangle e^{i(kz-\omega t)}\right]} where | ψ ⟩ = d e f ( ψ x ψ y ) = ( cos ⁡ ( θ ) e i α x sin ⁡ ( θ ) e i α y ) {\displaystyle |\psi \rangle \ {\stackrel {\mathrm {def} }{=}}\ {\begin{pmatrix}\psi _{x}\\\psi _{y}\end{pmatrix}}={\begin{pmatrix}\cos(\theta )e^{i\alpha _{x}}\\\sin(\theta )e^{i\alpha _{y}}\end{pmatrix}}} is the Jones vector in the x-y plane. The notation for this vector is the bra–ket notation of Dirac, which is normally used in a quantum context. The quantum notation is used here in anticipation of the interpretation of the Jones vector as a quantum state vector. === Dual Jones vector === The Jones vector has a dual given by ⟨ ψ | = d e f ( ψ x ∗ ψ y ∗ ) = ( cos ⁡ ( θ ) e − i α x sin ⁡ ( θ ) e − i α y ) . {\displaystyle \langle \psi |\ {\stackrel {\mathrm {def} }{=}}\ {\begin{pmatrix}\psi _{x}^{*}&\psi _{y}^{*}\end{pmatrix}}={\begin{pmatrix}\cos(\theta )e^{-i\alpha _{x}}&\sin(\theta )e^{-i\alpha _{y}}\end{pmatrix}}.} === Normalization of the Jones vector === A Jones vector represents a specific wave with a specific phase, amplitude and state of polarization. When one is using a Jones vector simply to indicate a state of polarization, then it is customary for it to be normalized. That requires that the inner product of the vector with itself to be unity: ⟨ ψ | ψ ⟩ = ( ψ x ∗ ψ y ∗ ) ( ψ x ψ y ) = 1. {\displaystyle \langle \psi |\psi \rangle ={\begin{pmatrix}\psi _{x}^{*}&\psi _{y}^{*}\end{pmatrix}}{\begin{pmatrix}\psi _{x}\\\psi _{y}\end{pmatrix}}=1.} An arbitrary Jones vector can simply be scaled to achieve this property. All normalized Jones vectors represent a wave of the same intensity (within a particular isotropic medium). Even given a normalized Jones vector, multiplication by a pure phase factor will result in a different normalized Jones vector representing the same state of polarization. == Polarization states == === Linear polarization === In general, the wave is linearly polarized when the phase angles α x , α y {\displaystyle \alpha _{x},\alpha _{y}} are equal, α x = α y = d e f α . {\displaystyle \alpha _{x}=\alpha _{y}\ {\stackrel {\mathrm {def} }{=}}\ \alpha .} This represents a wave polarized at an angle θ {\displaystyle \theta } with respect to the x axis. In that case the Jones vector can be written | ψ ⟩ = ( cos ⁡ θ sin ⁡ θ ) e i α . {\displaystyle |\psi \rangle ={\begin{pmatrix}\cos \theta \\\sin \theta \end{pmatrix}}e^{i\alpha }.} === Elliptical and circular polarization === The general case in which the electric field is not confined to one direction but rotates in the x-y plane is called elliptical polarization. The state vector is given by | ψ ⟩ = ( ψ x ψ y ) = ( cos ⁡ ( θ ) e i α x sin ⁡ ( θ ) e i α y ) = e i α ( cos ⁡ ( θ ) sin ⁡ ( θ ) e i Δ α ) . 
{\displaystyle |\psi \rangle ={\begin{pmatrix}\psi _{x}\\\psi _{y}\end{pmatrix}}={\begin{pmatrix}\cos(\theta )e^{i\alpha _{x}}\\\sin(\theta )e^{i\alpha _{y}}\end{pmatrix}}=e^{i\alpha }{\begin{pmatrix}\cos(\theta )\\\sin(\theta )e^{i\Delta \alpha }\end{pmatrix}}.} In the special case of Δ α = 0 {\displaystyle \Delta \alpha =0} , this reduces to linear polarization. Circular polarization corresponds to the special cases of θ = ± π / 4 {\displaystyle \theta =\pm \pi /4} with Δ α = π / 2 {\displaystyle \Delta \alpha =\pi /2} . The two circular polarization states are thus given by the Jones vectors: | ψ ⟩ = ( ψ x ψ y ) = e i α 1 2 ( 1 ± i ) . {\displaystyle |\psi \rangle ={\begin{pmatrix}\psi _{x}\\\psi _{y}\end{pmatrix}}=e^{i\alpha }{\frac {1}{\sqrt {2}}}{\begin{pmatrix}1\\\pm i\end{pmatrix}}.} == See also == Fourier series Transverse mode Transverse wave Maxwell's equations Electromagnetic wave equation Mathematical descriptions of the electromagnetic field Polarization from an atomic transition: linear and circular Archived 2010-04-17 at the Wayback Machine == References == Jackson, John D. (1998). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X.
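The Jones-vector bookkeeping above is straightforward to reproduce numerically. The following minimal sketch (not part of the original article; the function names and sample angles are my own) constructs normalized Jones vectors, verifies ⟨ψ|ψ⟩ = 1, recovers the circular-polarization state for θ = π/4, Δα = π/2, and shows that a global phase factor leaves the polarization state unchanged.

import numpy as np

def jones_vector(theta, alpha_x, alpha_y):
    """Normalized Jones vector (psi_x, psi_y) for the plane wave defined above."""
    return np.array([np.cos(theta) * np.exp(1j * alpha_x),
                     np.sin(theta) * np.exp(1j * alpha_y)])

def inner(bra, ket):
    """Dirac inner product <bra|ket> = sum over conj(bra_i) * ket_i."""
    return np.vdot(bra, ket)

# Linear polarization at 30 degrees: equal phases alpha_x = alpha_y
lin = jones_vector(np.pi / 6, 0.3, 0.3)
print(abs(inner(lin, lin)))            # -> 1.0: already normalized

# Circular polarization: theta = pi/4, phase difference pi/2
circ = jones_vector(np.pi / 4, 0.0, np.pi / 2)
print(np.allclose(circ, np.array([1, 1j]) / np.sqrt(2)))   # True

# Two vectors differing only by a global phase describe the same polarization
same = np.exp(1j * 0.7) * circ
print(np.isclose(abs(inner(circ, same)), 1.0))             # True: |<psi|psi'>| = 1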
Wikipedia/Sinusoidal_plane-wave_solutions_of_the_electromagnetic_wave_equation
In general relativity, the Raychaudhuri equation, or Landau–Raychaudhuri equation, is a fundamental result describing the motion of nearby bits of matter. The equation is important as a fundamental lemma for the Penrose–Hawking singularity theorems and for the study of exact solutions in general relativity, but has independent interest, since it offers a simple and general validation of our intuitive expectation that gravitation should be a universal attractive force between any two bits of mass–energy in general relativity, as it is in Newton's theory of gravitation. The equation was discovered independently by the Indian physicist Amal Kumar Raychaudhuri and the Soviet physicist Lev Landau. == Mathematical statement == Given a timelike unit vector field X → {\displaystyle {\vec {X}}} (which can be interpreted as a family or congruence of nonintersecting world lines via the integral curve, not necessarily geodesics), Raychaudhuri's equation in D {\displaystyle D} spacetime dimensions can be written as θ ˙ = − θ 2 D − 1 − 2 σ 2 + 2 ω 2 − E [ X → ] a a + X ˙ a ; a {\displaystyle {\dot {\theta }}=-{\frac {\theta ^{2}}{D-1}}-2\sigma ^{2}+2\omega ^{2}-{E[{\vec {X}}]^{a}}_{a}+{{\dot {X}}^{a}}_{;a}} where 2 σ 2 = σ m n σ m n , 2 ω 2 = ω m n ω m n {\displaystyle 2\sigma ^{2}=\sigma _{mn}\,\sigma ^{mn},\;2\omega ^{2}=\omega _{mn}\,\omega ^{mn}} are (non-negative) quadratic invariants of the shear tensor σ a b = θ a b − 1 D − 1 θ h a b {\displaystyle \sigma _{ab}=\theta _{ab}-{\frac {1}{D-1}}\,\theta \,h_{ab}} and the vorticity tensor ω a b = h m a h n b X [ m ; n ] {\displaystyle \omega _{ab}={h^{m}}_{a}\,{h^{n}}_{b}X_{[m;n]}} respectively. Here, θ a b = h m a h n b X ( m ; n ) {\displaystyle \theta _{ab}={h^{m}}_{a}\,{h^{n}}_{b}X_{(m;n)}} is the expansion tensor, θ {\displaystyle \theta } is its trace, called the expansion scalar, and h a b = g a b + X a X b {\displaystyle h_{ab}=g_{ab}+X_{a}\,X_{b}} is the projection tensor onto the hyperplanes orthogonal to X → {\displaystyle {\vec {X}}} . Also, dot denotes differentiation with respect to proper time counted along the world lines in the congruence. Finally, the trace of the tidal tensor E [ X → ] a b {\displaystyle E[{\vec {X}}]_{ab}} can also be written as E [ X → ] a a = R m n X m X n {\displaystyle {E[{\vec {X}}]^{a}}_{a}=R_{mn}\,X^{m}\,X^{n}} This quantity is sometimes called the Raychaudhuri scalar. == Intuitive significance == The expansion scalar measures the fractional rate at which the volume of a small ball of matter changes with respect to time as measured by a central comoving observer (and so it may take negative values). In other words, the above equation gives us the evolution equation for the expansion of the timelike congruence. If the derivative (with respect to proper time) of this quantity turns out to be negative along some world line (after a certain event), then any expansion of a small ball of matter (whose center of mass follows the world line in question) must be followed by recollapse. If not, continued expansion is possible. The shear tensor measures any tendency of an initially spherical ball of matter to become distorted into an ellipsoidal shape. The vorticity tensor measures any tendency of nearby world lines to twist about one another (if this happens, our small blob of matter is rotating, as happens to fluid elements in an ordinary fluid flow which exhibits nonzero vorticity). 
The right hand side of Raychaudhuri's equation consists of two types of terms:

Terms which promote (re)-collapse:
- an initially nonzero expansion scalar,
- nonzero shearing,
- a positive trace of the tidal tensor; this is precisely the condition guaranteed by assuming the strong energy condition, which holds for the most important types of solutions, such as physically reasonable fluid solutions.

Terms which oppose (re)-collapse:
- nonzero vorticity, corresponding to Newtonian centrifugal forces,
- a positive divergence of the acceleration vector (e.g., outward pointing acceleration due to a spherically symmetric explosion, or more prosaically, due to body forces on fluid elements in a ball of fluid held together by its own self-gravitation).

Usually one term will win out. However, there are situations in which a balance can be achieved. This balance may be:
- stable: in the case of hydrostatic equilibrium of a ball of perfect fluid (e.g. in a model of a stellar interior), the expansion, shear, and vorticity all vanish, and a radial divergence in the acceleration vector (the necessary body force on each blob of fluid being provided by the pressure of surrounding fluid) counteracts the Raychaudhuri scalar, which for a perfect fluid in four dimensions is E [ X → ] a a = 4 π ( μ + 3 p ) {\displaystyle E[{\vec {X}}]^{a}{}_{a}=4\pi (\mu +3p)} in geometrized units. In Newtonian gravitation, the trace of the tidal tensor is 4 π μ {\displaystyle 4\pi \mu } ; in general relativity, the tendency of pressure to oppose gravity is partially offset by this term, which under certain circumstances can become important.
- unstable: for example, the world lines of the dust particles in the Gödel solution have vanishing shear, expansion, and acceleration, but constant vorticity just balancing a constant Raychaudhuri scalar due to nonzero vacuum energy ("cosmological constant").

== Focusing theorem ==
Suppose the strong energy condition holds in some region of our spacetime, and let X → {\displaystyle {\vec {X}}} be a timelike geodesic unit vector field with vanishing vorticity, or equivalently, which is hypersurface orthogonal. For example, this situation can arise in studying the world lines of the dust particles in cosmological models which are exact dust solutions of the Einstein field equation (provided that these world lines are not twisting about one another, in which case the congruence would have nonzero vorticity). Then Raychaudhuri's equation becomes θ ˙ = − θ 2 D − 1 − 2 σ 2 − E [ X → ] a a {\displaystyle {\dot {\theta }}=-{\frac {\theta ^{2}}{D-1}}-2\sigma ^{2}-{E[{\vec {X}}]^{a}}_{a}} Now the right hand side is always negative or zero, so the expansion scalar never increases in time. Since the last two terms are non-negative, we have θ ˙ ≤ − θ 2 D − 1 {\displaystyle {\dot {\theta }}\leq -{\frac {\theta ^{2}}{D-1}}} Integrating this inequality with respect to proper time τ {\displaystyle \tau } gives 1 θ ≥ 1 θ 0 + τ D − 1 {\displaystyle {\frac {1}{\theta }}\geq {\frac {1}{\theta _{0}}}+{\frac {\tau }{D-1}}} If the initial value θ 0 {\displaystyle \theta _{0}} of the expansion scalar is negative, this means that our geodesics must converge in a caustic ( θ {\displaystyle \theta } goes to minus infinity) within a proper time of at most ( D − 1 ) / | θ 0 | {\displaystyle (D-1)/|\theta _{0}|} after the measurement of the initial value θ 0 {\displaystyle \theta _{0}} of the expansion scalar.
This need not signal an encounter with a curvature singularity, but it does signal a breakdown in our mathematical description of the motion of the dust. == Optical equations == There is also an optical (or null) version of Raychaudhuri's equation for null geodesic congruences. θ ^ ˙ = − 1 D − 2 θ ^ 2 − 2 σ ^ 2 + 2 ω ^ 2 − T μ ν U μ U ν {\displaystyle {\dot {\widehat {\theta }}}=-{\frac {1}{D-2}}{\widehat {\theta }}^{2}-2{\widehat {\sigma }}^{2}+2{\widehat {\omega }}^{2}-T_{\mu \nu }U^{\mu }U^{\nu }} . Here, the hats indicate that the expansion, shear and vorticity are only with respect to the transverse directions. When the vorticity is zero, then assuming the null energy condition, caustics will form before the affine parameter reaches ( D − 2 ) / θ ^ 0 {\displaystyle (D-2)/{\widehat {\theta }}_{0}} . === Applications === The event horizon is defined as the boundary of the causal past of null infinity. Such boundaries are generated by null geodesics. The affine parameter goes to infinity as we approach null infinity, and no caustics form until then. So, the expansion of the event horizon has to be nonnegative. As the expansion gives the rate of change of the logarithm of the area density, this means the event horizon area can never go down, at least classically, assuming the null energy condition. == See also == Congruence (general relativity), for a derivation of the kinematical decomposition and of Raychaudhuri's equation Gravitational singularity Penrose–Hawking singularity theorems for an application of the focusing theorem == Notes == == References == Poisson, Eric (2004). A Relativist's Toolkit: The Mathematics of Black Hole Mechanics. Cambridge: Cambridge University Press. ISBN 0-521-83091-5. See chapter 2 for an excellent discussion of Raychaudhuri's equation for both timelike and null geodesics, as well as the focusing theorem. Carroll, Sean M. (2004). Spacetime and Geometry: An Introduction to General Relativity. San Francisco: Addison-Wesley. ISBN 0-8053-8732-3. See appendix F. Stephani, Hans; Kramer, Dietrich; MacCallum, Malcolm; Hoenselaers, Cornelius; Hertl, Eduard (2003). Exact Solutions to Einstein's Field Equations (2nd ed.). Cambridge: Cambridge University Press. ISBN 0-521-46136-7. See chapter 6 for a very detailed introduction to geodesic congruences, including the general form of Raychaudhuri's equation. Hawking, Stephen & Ellis, G. F. R. (1973). The Large Scale Structure of Space-Time. Cambridge: Cambridge University Press. ISBN 0-521-09906-4. See section 4.1 for a discussion of the general form of Raychaudhuri's equation. Raychaudhuri, A. K. (1955). "Relativistic cosmology I.". Phys. Rev. 98 (4): 1123–1126. Bibcode:1955PhRv...98.1123R. doi:10.1103/PhysRev.98.1123. hdl:10821/7599. Raychaudhuri's paper introducing his equation. Dasgupta, Anirvan; Nandan, Hemwati & Kar, Sayan (2009). "Kinematics of geodesic flows in stringy black hole backgrounds". Phys. Rev. D. 79 (12): 124004. arXiv:0809.3074. Bibcode:2009PhRvD..79l4004D. doi:10.1103/PhysRevD.79.124004. S2CID 118628925. See section IV for derivation of the general form of Raychaudhuri equations for three kinematical quantities (namely expansion scalar, shear and rotation). Kar, Sayan & SenGupta, Soumitra (2007). "The Raychaudhuri equations: A Brief review". Pramana. 69 (1): 49–76. arXiv:gr-qc/0611123. Bibcode:2007Prama..69...49K. doi:10.1007/s12043-007-0110-9. S2CID 119438891. See for a review on Raychaudhuri equations. == External links == The Meaning of Einstein's Field Equation by John C. Baez and Emory F. Bunn. 
Raychaudhuri's equation takes center stage in this well-known (and highly recommended) semi-technical exposition of what Einstein's equation says. Raychaudhuri, A.K. (1979). Theoretical Cosmology. Oxford Science Publications. Clarendon Press. ISBN 978-0-19-851462-6.
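The focusing inequality derived in the Focusing theorem section above can be illustrated numerically. The following minimal sketch (not part of the original article; the choice θ0 = −1, D = 4, the step size, and the cut-off are arbitrary) integrates θ̇ = −θ²/(D − 1), i.e. the vorticity-free, shear-free, vacuum case, and confirms that θ runs away to large negative values no later than the bound (D − 1)/|θ0|.

import numpy as np

def integrate_expansion(theta0, D=4, dtau=1e-4, theta_floor=-1e6):
    """Integrate d(theta)/d(tau) = -theta**2 / (D - 1) until theta falls below a floor."""
    f = lambda th: -th**2 / (D - 1)
    theta, tau = theta0, 0.0
    while theta > theta_floor:
        # one fixed-step RK4 update for the scalar ODE
        k1 = f(theta)
        k2 = f(theta + 0.5 * dtau * k1)
        k3 = f(theta + 0.5 * dtau * k2)
        k4 = f(theta + dtau * k3)
        theta += dtau / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        tau += dtau
    return tau

theta0 = -1.0            # initially contracting congruence
D = 4
tau_caustic = integrate_expansion(theta0, D)
bound = (D - 1) / abs(theta0)
print(tau_caustic, bound)   # blow-up time just under 3.0, never exceeding the bound
# Exact solution for comparison: theta(tau) = theta0 / (1 + theta0 * tau / (D - 1)),
# which diverges at tau = (D - 1)/|theta0| when theta0 < 0.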
Wikipedia/Raychaudhuri_equation
In general relativity, the Hamilton–Jacobi–Einstein equation (HJEE) or Einstein–Hamilton–Jacobi equation (EHJE) is an equation in the Hamiltonian formulation of geometrodynamics in superspace, cast in the "geometrodynamics era" around the 1960s, by Asher Peres in 1962 and others. It is an attempt to reformulate general relativity in such a way that it resembles quantum theory within a semiclassical approximation, much like the correspondence between quantum mechanics and classical mechanics. It is named for Albert Einstein, Carl Gustav Jacob Jacobi, and William Rowan Hamilton. The EHJE contains as much information as all ten Einstein field equations (EFEs). It is a modification of the Hamilton–Jacobi equation (HJE) from classical mechanics, and can be derived from the Einstein–Hilbert action using the principle of least action in the ADM formalism. == Background and motivation == === Correspondence between classical and quantum physics === In classical analytical mechanics, the dynamics of the system is summarized by the action S. In quantum theory, namely non-relativistic quantum mechanics (QM), relativistic quantum mechanics (RQM), as well as quantum field theory (QFT), with varying interpretations and mathematical formalisms in these theories, the behavior of a system is completely contained in a complex-valued probability amplitude Ψ (more formally as a quantum state ket |Ψ⟩ – an element of a Hilbert space). Using the polar form of the wave function, so making a Madelung transformation: Ψ = ρ e i S / ℏ {\displaystyle \Psi ={\sqrt {\rho }}e^{iS/\hbar }} the phase of Ψ is interpreted as the action, and the modulus √ρ = √Ψ*Ψ = |Ψ| is interpreted according to the Copenhagen interpretation as the probability density function. The reduced Planck constant ħ is the quantum of angular momentum. Substitution of this into the quantum general Schrödinger equation (SE): i ℏ ∂ Ψ ∂ t = H ^ Ψ , {\displaystyle i\hbar {\frac {\partial \Psi }{\partial t}}={\hat {H}}\Psi \,,} and taking the limit ħ → 0 yields the classical HJE: − ∂ S ∂ t = H , {\displaystyle -{\frac {\partial S}{\partial t}}=H\,,} which is one aspect of the correspondence principle. === Shortcomings of four-dimensional spacetime === On the other hand, the transition between quantum theory and general relativity (GR) is difficult to make; one reason is the treatment of space and time in these theories. In non-relativistic QM, space and time are not on equal footing; time is a parameter while position is an operator. In RQM and QFT, position returns to the usual spatial coordinates alongside the time coordinate, although these theories are consistent only with SR in four-dimensional flat Minkowski space, and not curved space nor GR. It is possible to formulate quantum field theory in curved spacetime, yet even this still cannot incorporate GR because gravity is not renormalizable in QFT. Additionally, in GR particles move through curved spacetime with a deterministically known position and momentum at every instant, while in quantum theory, the position and momentum of a particle cannot be exactly known simultaneously; space x and momentum p, and energy E and time t, are pairwise subject to the uncertainty principles Δ x Δ p ≥ ℏ 2 , Δ E Δ t ≥ ℏ 2 , {\displaystyle \Delta x\Delta p\geq {\frac {\hbar }{2}},\quad \Delta E\Delta t\geq {\frac {\hbar }{2}}\,,} which imply that small intervals in space and time mean large fluctuations in energy and momentum are possible. 
Since in GR mass–energy and momentum–energy is the source of spacetime curvature, large fluctuations in energy and momentum mean the spacetime "fabric" could potentially become so distorted that it breaks up at sufficiently small scales. There is theoretical and experimental evidence from QFT that vacuum does have energy since the motion of electrons in atoms is fluctuated, this is related to the Lamb shift. For these reasons and others, at increasingly small scales, space and time are thought to be dynamical up to the Planck length and Planck time scales. In any case, a four-dimensional curved spacetime continuum is a well-defined and central feature of general relativity, but not in quantum mechanics. == Equation == One attempt to find an equation governing the dynamics of a system, in as close a way as possible to QM and GR, is to reformulate the HJE in three-dimensional curved space understood to be "dynamic" (changing with time), and not four-dimensional spacetime dynamic in all four dimensions, as the EFEs are. The space has a metric (see Metric space for details). The metric tensor in general relativity is an essential object, since proper time, arc length, geodesic motion in curved spacetime, and other things, all depend on the metric. The HJE above is modified to include the metric, although it is only a function of the 3d spatial coordinates r, (for example r = (x, y, z) in Cartesian coordinates) without the coordinate time t: g i j = g i j ( r ) . {\displaystyle g_{ij}=g_{ij}(\mathbf {r} )\,.} In this context gij is referred to as the "metric field" or simply "field". === General equation (free curved space) === For a free particle in curved "empty space" or "free space", i.e. in the absence of matter other than the particle itself, the equation can be written: where g is the determinant of the metric tensor and R the Ricci scalar curvature of the 3d geometry (not including time), and the "δ" instead of "d" denotes the variational derivative rather than the ordinary derivative. These derivatives correspond to the field momenta "conjugate to the metric field": π i j ( r ) = π i j = δ S δ g i j , {\displaystyle \pi ^{ij}(\mathbf {r} )=\pi ^{ij}={\frac {\delta S}{\delta g_{ij}}}\,,} the rate of change of action with respect to the field coordinates gij(r). The g and π here are analogous to q and p = ∂S/∂q, respectively, in classical Hamiltonian mechanics. See canonical coordinates for more background. The equation describes how wavefronts of constant action propagate in superspace - as the dynamics of matter waves of a free particle unfolds in curved space. Additional source terms are needed to account for the presence of extra influences on the particle, which include the presence of other particles or distributions of matter (which contribute to space curvature), and sources of electromagnetic fields affecting particles with electric charge or spin. Like the Einstein field equations, it is non-linear in the metric because of the products of the metric components, and like the HJE it is non-linear in the action due to the product of variational derivatives in the action. The quantum mechanical concept, that action is the phase of the wavefunction, can be interpreted from this equation as follows. 
The phase has to satisfy the principle of least action; it must be stationary for a small change in the configuration of the system, in other words for a slight change in the position of the particle, which corresponds to a slight change in the metric components; g i j → g i j + δ g i j , {\displaystyle g_{ij}\rightarrow g_{ij}+\delta g_{ij}\,,} the slight change in phase is zero: δ S = ∫ δ S δ g i j ( r ) δ g i j ( r ) d 3 r = 0 , {\displaystyle \delta S=\int {\frac {\delta S}{\delta g_{ij}(\mathbf {r} )}}\delta g_{ij}(\mathbf {r} )\mathrm {d} ^{3}\mathbf {r} =0\,,} (where d3r is the volume element of the volume integral). So the constructive interference of the matter waves is a maximum. This can be expressed by the superposition principle; applied to many non-localized wavefunctions spread throughout the curved space to form a localized wavefunction: Ψ = ∑ n c n ψ n , {\displaystyle \Psi =\sum _{n}c_{n}\psi _{n}\,,} for some coefficients cn, and additionally the action (phase) Sn for each ψn must satisfy: δ S = S n + 1 − S n = 0 , {\displaystyle \delta S=S_{n+1}-S_{n}=0\,,} for all n, or equivalently, S 1 = S 2 = ⋯ = S n = ⋯ . {\displaystyle S_{1}=S_{2}=\cdots =S_{n}=\cdots \,.} Regions where Ψ is maximal or minimal occur at points where there is a probability of finding the particle there, and where the action (phase) change is zero. So in the EHJE above, each wavefront of constant action is where the particle could be found. This equation still does not "unify" quantum mechanics and general relativity, because the semiclassical Eikonal approximation in the context of quantum theory and general relativity has been applied, to provide a transition between these theories. == Applications == The equation takes various complicated forms in: Quantum gravity Quantum cosmology == See also == Foliation Quantum geometry Quantum spacetime Calculus of variations The equation is also related to the Wheeler–DeWitt equation. Peres metric == References == === Notes === === Further reading === ==== Books ==== J.L. Lopes (1977). Quantum mechanics, a half century later: Papers of a Colloquium on Fifty Years of Quantum Mechanics. Strasbourg, France: Springer, Kluwer Academic Publishers. ISBN 978-90-277-0784-0. C. Rovelli (2004). Quantum Gravity. Cambridge University Press. ISBN 978-0-521-83733-0. C. Kiefer (2012). Quantum Gravity (3rd ed.). Oxford University Press. ISBN 978-0-19-958520-5. J.K. Glikman (1999). Towards Quantum Gravity: Proceedings of the XXXV International Winter School on Theoretical Physics. Polanica, Poland: Springer. p. 224. ISBN 978-3-540-66910-4. L.Z. Fang; R. Ruffini (1987). Quantum cosmology. Advanced Series in Astrophysics and Cosmology. Vol. 3. World Scientific. ISBN 978-9971-5-0312-3. ==== Selected papers ==== T. Banks (1984). "TCP, Quantum Gravity, The Cosmological Constant and all that ..." (PDF). Stanford, USA. (Equation A.3 in the appendix). B. K. Darian (1997). "Solving the Hamilton-Jacobi equation for gravitationally interacting electromagnetic and scalar fields". Classical and Quantum Gravity. 15 (1). Canada, USA: 143–152. arXiv:gr-qc/9707046v2. Bibcode:1998CQGra..15..143D. doi:10.1088/0264-9381/15/1/010. S2CID 250879669. J. R. Bond; D. S. Salopek (1990). "Nonlinear evolution of long-wavelength metric fluctuations in inflationary models". Phys. Rev. D. Canada (USA), Illinois (USA). Sang Pyo Kim (1996). "Classical spacetime from quantum gravity". Classical and Quantum Gravity. 13 (6). Kunsan, Korea: IoP: 1377–1382. arXiv:gr-qc/9601049. Bibcode:1996CQGra..13.1377K. 
doi:10.1088/0264-9381/13/6/011. S2CID 250877590. S.R. Berbena; A.V. Berrocal; J. Socorro; L.O. Pimentel (2006). "The Einstein-Hamilton-Jacobi equation: Searching the classical solution for barotropic FRW". Guanajuato and Autónoma Metropolitana (Mexico). arXiv:gr-qc/0607123. Bibcode:2007RMxFS..53b.115B.
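The correspondence-principle step quoted in the Background section of this article (substituting Ψ = √ρ e^{iS/ħ} into the Schrödinger equation and letting ħ → 0) can be checked symbolically. The following is an illustrative sketch only, restricted to a free particle in one spatial dimension; the variable names and the use of SymPy are my own choices, intended as a plausibility check rather than as part of the article. It recovers the classical Hamilton–Jacobi equation ∂S/∂t + (∂S/∂x)²/(2m) = 0 as the leading term.

import sympy as sp

x, t = sp.symbols('x t', real=True)
m, hbar = sp.symbols('m hbar', positive=True)
rho = sp.Function('rho')(x, t)   # probability density
S = sp.Function('S')(x, t)       # action / phase

# Madelung form of the wave function: Psi = sqrt(rho) * exp(i S / hbar)
psi = sp.sqrt(rho) * sp.exp(sp.I * S / hbar)

# Residual of the free-particle Schroedinger equation,
#   i hbar dPsi/dt + hbar**2/(2m) d^2 Psi/dx^2 = 0
residual = sp.I * hbar * sp.diff(psi, t) + hbar**2 / (2 * m) * sp.diff(psi, x, 2)

# Strip the common factor sqrt(rho) * exp(i S / hbar); powsimp merges the exponentials.
expr = sp.expand(sp.powsimp(sp.expand(residual * sp.exp(-sp.I * S / hbar)) / sp.sqrt(rho)))

# The expansion contains only non-negative powers of hbar, so the hbar -> 0 limit is
# the constant term; up to an overall sign it is the classical Hamilton-Jacobi equation.
classical = sp.simplify(expr.subs(hbar, 0))
print(classical)   # expected: -Derivative(S(x, t), t) - Derivative(S(x, t), x)**2/(2*m)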
Wikipedia/Hamilton–Jacobi–Einstein_equation
In electromagnetism, the Lorentz force is the force exerted on a charged particle by electric and magnetic fields. It is the fundamental force that governs the motion of charged particles in electromagnetic fields and underlies many physical phenomena, from the operation of electric motors and particle accelerators to the behavior of plasmas. The force has two components. The electric force acts in the direction of the electric field for positive charges and opposite to it for negative charges, tending to accelerate the particle in a straight line. The magnetic force is perpendicular to both the particle's velocity and the magnetic field, and it causes the particle to move along a curved trajectory, often circular or helical in form, depending on the directions of the fields. Variations on the force law describe the magnetic force on a current-carrying wire (sometimes called Laplace force), and the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction). Historians suggest that the law is implicit in a paper by James Clerk Maxwell, published in 1865. Hendrik Lorentz arrived at a complete derivation in 1895, identifying the contribution of the electric force a few years after Oliver Heaviside correctly identified the contribution of the magnetic force. == Definition == === Charged particle === The force F acting on a particle of electric charge q with instantaneous velocity v, due to an external electric field E and magnetic field B, is given by (SI definition of quantities): where × is the vector cross product (all boldface quantities are vectors). In terms of Cartesian components, we have: F x = q ( E x + v y B z − v z B y ) , F y = q ( E y + v z B x − v x B z ) , F z = q ( E z + v x B y − v y B x ) . {\displaystyle {\begin{aligned}F_{x}&=q\left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right),\\[0.5ex]F_{y}&=q\left(E_{y}+v_{z}B_{x}-v_{x}B_{z}\right),\\[0.5ex]F_{z}&=q\left(E_{z}+v_{x}B_{y}-v_{y}B_{x}\right).\end{aligned}}} In general, the electric and magnetic fields are functions of the position and time. Therefore, explicitly, the Lorentz force can be written as: F ( r ( t ) , r ˙ ( t ) , t , q ) = q [ E ( r , t ) + r ˙ ( t ) × B ( r , t ) ] {\displaystyle \mathbf {F} \left(\mathbf {r} (t),{\dot {\mathbf {r} }}(t),t,q\right)=q\left[\mathbf {E} (\mathbf {r} ,t)+{\dot {\mathbf {r} }}(t)\times \mathbf {B} (\mathbf {r} ,t)\right]} in which r is the position vector of the charged particle, t is time, and the overdot is a time derivative. A positively charged particle will be accelerated in the same linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the fingers of the right hand are extended to point in the direction of v and are then curled to point in the direction of B, then the extended thumb will point in the direction of F). The term qE is called the electric force, while the term q(v × B) is called the magnetic force. According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force, with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: in what follows, the term Lorentz force will refer to the expression for the total force. The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. 
In that context, it is also called the Laplace force. The Lorentz force is a force exerted by the electromagnetic field on the charged particle, that is, it is the rate at which linear momentum is transferred from the electromagnetic field to the particle. Associated with it is the power which is the rate at which energy is transferred from the electromagnetic field to the particle. That power is v ⋅ F = q v ⋅ E . {\displaystyle \mathbf {v} \cdot \mathbf {F} =q\,\mathbf {v} \cdot \mathbf {E} .} Notice that the magnetic field does not contribute to the power because the magnetic force is always perpendicular to the velocity of the particle and does no work. === Continuous charge distribution === For a continuous charge distribution in motion, the Lorentz force equation becomes: d F = d q ( E + v × B ) {\displaystyle \mathrm {d} \mathbf {F} =\mathrm {d} q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)} where d F {\displaystyle \mathrm {d} \mathbf {F} } is the force on a small piece of the charge distribution with charge d q {\displaystyle \mathrm {d} q} . If both sides of this equation are divided by the volume of this small piece of the charge distribution d V {\displaystyle \mathrm {d} V} , the result is: f = ρ ( E + v × B ) {\displaystyle \mathbf {f} =\rho \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)} where f {\displaystyle \mathbf {f} } is the force density (force per unit volume) and ρ {\displaystyle \rho } is the charge density (charge per unit volume). Next, the current density corresponding to the motion of the charge continuum is J = ρ v {\displaystyle \mathbf {J} =\rho \mathbf {v} } so the continuous analogue to the equation is The total force is the volume integral over the charge distribution: F = ∫ ( ρ E + J × B ) d V . {\displaystyle \mathbf {F} =\int \left(\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} \right)\mathrm {d} V.} By eliminating ρ {\displaystyle \rho } and J {\displaystyle \mathbf {J} } , using Maxwell's equations, and manipulating using the theorems of vector calculus, this form of the equation can be used to derive the Maxwell stress tensor σ {\displaystyle {\boldsymbol {\sigma }}} , in turn this can be combined with the Poynting vector S {\displaystyle \mathbf {S} } to obtain the electromagnetic stress–energy tensor T used in general relativity. In terms of σ {\displaystyle {\boldsymbol {\sigma }}} and S {\displaystyle \mathbf {S} } , another way to write the Lorentz force (per unit volume) is f = ∇ ⋅ σ − 1 c 2 ∂ S ∂ t {\displaystyle \mathbf {f} =\nabla \cdot {\boldsymbol {\sigma }}-{\dfrac {1}{c^{2}}}{\dfrac {\partial \mathbf {S} }{\partial t}}} where ∇ ⋅ {\displaystyle \nabla \cdot } denotes the divergence of the tensor field and c {\displaystyle c} is the speed of light. Rather than the amount of charge and its velocity in electric and magnetic fields, this equation relates the energy flux (flow of energy per unit time per unit distance) in the fields to the force exerted on a charge distribution. See Covariant formulation of classical electromagnetism for more details. The density of power associated with the Lorentz force in a material medium is J ⋅ E . {\displaystyle \mathbf {J} \cdot \mathbf {E} .} If we separate the total charge and total current into their free and bound parts, we get that the density of the Lorentz force is f = ( ρ f − ∇ ⋅ P ) E + ( J f + ∇ × M + ∂ P ∂ t ) × B . 
{\displaystyle \mathbf {f} =\left(\rho _{f}-\nabla \cdot \mathbf {P} \right)\mathbf {E} +\left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\times \mathbf {B} .} where: ρ f {\displaystyle \rho _{f}} is the density of free charge; P {\displaystyle \mathbf {P} } is the polarization density; J f {\displaystyle \mathbf {J} _{f}} is the density of free current; and M {\displaystyle \mathbf {M} } is the magnetization density. In this way, the Lorentz force can explain the torque applied to a permanent magnet by the magnetic field. The density of the associated power is ( J f + ∇ × M + ∂ P ∂ t ) ⋅ E . {\displaystyle \left(\mathbf {J} _{f}+\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}\right)\cdot \mathbf {E} .} === Formulation in the Gaussian system === The above-mentioned formulae use the conventions for the definition of the electric and magnetic field used with the SI, which is the most common. However, other conventions with the same physics (i.e. forces on e.g. an electron) are possible and used. In the conventions used with the older CGS-Gaussian units, which are somewhat more common among some theoretical physicists as well as condensed matter experimentalists, one has instead F = q G ( E G + v c × B G ) , {\displaystyle \mathbf {F} =q_{\mathrm {G} }\left(\mathbf {E} _{\mathrm {G} }+{\frac {\mathbf {v} }{c}}\times \mathbf {B} _{\mathrm {G} }\right),} where c is the speed of light. Although this equation looks slightly different, it is equivalent, since one has the following relations: q G = q S I 4 π ε 0 , E G = 4 π ε 0 E S I , B G = 4 π / μ 0 B S I , c = 1 ε 0 μ 0 . {\displaystyle q_{\mathrm {G} }={\frac {q_{\mathrm {SI} }}{\sqrt {4\pi \varepsilon _{0}}}},\quad \mathbf {E} _{\mathrm {G} }={\sqrt {4\pi \varepsilon _{0}}}\,\mathbf {E} _{\mathrm {SI} },\quad \mathbf {B} _{\mathrm {G} }={\sqrt {4\pi /\mu _{0}}}\,{\mathbf {B} _{\mathrm {SI} }},\quad c={\frac {1}{\sqrt {\varepsilon _{0}\mu _{0}}}}.} where ε0 is the vacuum permittivity and μ0 the vacuum permeability. In practice, the subscripts "G" and "SI" are omitted, and the used convention (and unit) must be determined from context. == History == Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles, by Johann Tobias Mayer and others in 1760, and electrically charged objects, by Henry Cavendish in 1762, obeyed an inverse-square law. However, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 when Charles-Augustin de Coulomb, using a torsion balance, was able to definitively show through experiment that this was true. Soon after the discovery in 1820 by Hans Christian Ørsted that a magnetic needle is acted on by a voltaic current, André-Marie Ampère that same year was able to devise through experimentation the formula for the angular dependence of the force between two current elements. In all these descriptions, the force was always described in terms of the properties of the matter involved and the distances between two masses or charges rather than in terms of electric and magnetic fields. The modern concept of electric and magnetic fields first arose in the theories of Michael Faraday, particularly his idea of lines of force, later to be given full mathematical description by Lord Kelvin and James Clerk Maxwell. 
From a modern perspective it is possible to identify in Maxwell's 1865 formulation of his field equations a form of the Lorentz force equation in relation to electric currents, although in the time of Maxwell it was not evident how his equations related to the forces on moving charged objects. J. J. Thomson was the first to attempt to derive from Maxwell's field equations the electromagnetic forces on a moving charged object in terms of the object's properties and external fields. Interested in determining the electromagnetic behavior of the charged particles in cathode rays, Thomson published a paper in 1881 wherein he gave the force on the particles due to an external magnetic field as F = q 2 v × B . {\displaystyle \mathbf {F} ={\frac {q}{2}}\mathbf {v} \times \mathbf {B} .} Thomson derived the correct basic form of the formula, but, because of some miscalculations and an incomplete description of the displacement current, included an incorrect scale-factor of a half in front of the formula. Oliver Heaviside invented the modern vector notation and applied it to Maxwell's field equations; he also (in 1885 and 1889) had fixed the mistakes of Thomson's derivation and arrived at the correct form of the magnetic force on a moving charged object. Finally, in 1895, Hendrik Lorentz derived the modern form of the formula for the electromagnetic force which includes the contributions to the total force from both the electric and the magnetic fields. Lorentz began by abandoning the Maxwellian descriptions of the ether and conduction. Instead, Lorentz made a distinction between matter and the luminiferous aether and sought to apply the Maxwell equations at a microscopic scale. Using Heaviside's version of the Maxwell equations for a stationary ether and applying Lagrangian mechanics (see below), Lorentz arrived at the correct and complete form of the force law that now bears his name. == Lorentz force law as the definition of E and B == In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the definition of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement: The electromagnetic force F on a test charge at a given point and time is a certain function of its charge q and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form: F = q ( E + v × B ) {\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )} This is valid, even for particles approaching the speed of light (that is, magnitude of v, |v| ≈ c). So the two vector fields E and B are thereby defined throughout space and time, and these are called the "electric field" and "magnetic field". The fields are defined everywhere in space and time with respect to what force a test charge would receive regardless of whether a charge is present to experience the force. == Trajectories of particles due to the Lorentz force == In many cases of practical interest, the motion in a magnetic field of an electrically charged particle (such as an electron or ion in a plasma) can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. The drift speeds may differ for various species depending on their charge states, masses, or temperatures, possibly resulting in electric currents or chemical separation. 
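The gyration described here is easy to reproduce numerically. The sketch below is a minimal illustration (NumPy/SciPy, dimensionless toy values for q, m and B chosen only for readability): it integrates dv/dt = (q/m)(E + v × B) in a uniform field along z with E = 0 and compares the orbit radius with the expected gyroradius m|v⊥|/(|q|B).

```python
import numpy as np
from scipy.integrate import solve_ivp

q, m = 1.0, 1.0                          # illustrative charge and mass (toy units)
B = np.array([0.0, 0.0, 2.0])            # uniform magnetic field along z
E = np.array([0.0, 0.0, 0.0])            # no electric field: pure gyration

def rhs(t, y):
    r, v = y[:3], y[3:]
    a = (q / m) * (E + np.cross(v, B))   # Lorentz force per unit mass
    return np.concatenate([v, a])

v0 = np.array([1.0, 0.0, 0.0])           # initial velocity perpendicular to B
y0 = np.concatenate([np.zeros(3), v0])
T = 2 * np.pi * m / (abs(q) * np.linalg.norm(B))   # cyclotron period
sol = solve_ivp(rhs, (0, 3 * T), y0, max_step=T / 2000, rtol=1e-9, atol=1e-12)

x, y = sol.y[0], sol.y[1]
r_numeric = 0.5 * (x.max() - x.min())    # orbit radius read off the trajectory
r_theory = m * np.linalg.norm(v0) / (abs(q) * np.linalg.norm(B))
print(r_numeric, r_theory)               # both ~0.5 in these units
```

Adding an initial velocity component along B turns the circle into a helix, and a weak additional force perpendicular to B produces the slow guiding-centre drift mentioned above.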
== Significance of the Lorentz force == While the modern Maxwell's equations describe how electrically charged particles and currents or moving charged particles give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields. The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles are possibly coupled to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another. In real materials the Lorentz force is inadequate to describe the collective behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium not only respond to the E and B fields but also generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has developed. See for example, Green–Kubo relations and Green's function (many-body theory). == Force on a current-carrying wire == When a wire carrying an electric current is placed in an external magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electric current, the following equation results, in the case of a straight stationary wire in a homogeneous field: F = I ℓ × B , {\displaystyle \mathbf {F} =I{\boldsymbol {\ell }}\times \mathbf {B} ,} where ℓ is a vector whose magnitude is the length of the wire, and whose direction is along the wire, aligned with the direction of the conventional current I. If the wire is not straight, the force on it can be computed by applying this formula to each infinitesimal segment of wire d ℓ {\displaystyle \mathrm {d} {\boldsymbol {\ell }}} , then adding up all these forces by integration. This results in the same formal expression, but ℓ should now be understood as the vector connecting the end points of the curved wire with direction from starting to end point of conventional current. Usually, there will also be a net torque. If, in addition, the magnetic field is inhomogeneous, the net force on a stationary rigid wire carrying a steady current I is given by integration along the wire, F = I ∫ ( d ℓ × B ) . {\displaystyle \mathbf {F} =I\int (\mathrm {d} {\boldsymbol {\ell }}\times \mathbf {B} ).} One application of this is Ampère's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's generated magnetic field. Another application is an induction motor. The stator winding AC current generates a moving magnetic field which induces a current in the rotor. The subsequent Lorentz force F {\displaystyle \mathbf {F} } acting on the rotor creates a torque, making the motor spin. 
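As an illustrative check of the segment-summation prescription for a curved wire (the current, field and geometry below are arbitrary choices, not values from the article), the following NumPy sketch sums I dℓ × B over a discretised semicircular wire in a uniform field and confirms that the result equals I ℓ × B, with ℓ the straight vector joining the endpoints.

```python
import numpy as np

I = 3.0                                # current in amperes (illustrative)
B = np.array([0.0, 0.0, 0.5])          # uniform magnetic field in teslas (illustrative)
R = 0.1                                # radius of a semicircular wire, metres

# Discretise the semicircle from (R, 0, 0) to (-R, 0, 0) in the xy-plane
theta = np.linspace(0.0, np.pi, 2001)
pts = np.stack([R * np.cos(theta), R * np.sin(theta), np.zeros_like(theta)], axis=1)
dl = np.diff(pts, axis=0)              # small segments d(ell) along the wire

# F = I * sum( dl x B ) over all segments
F_curved = I * np.cross(dl, B).sum(axis=0)

# For a uniform field this must equal I * (endpoint - startpoint) x B
ell = pts[-1] - pts[0]
F_straight = I * np.cross(ell, B)

print(F_curved, F_straight)            # both [0, 0.3, 0] N (to rounding)
```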
Hence, though the Lorentz force law does not apply when the magnetic field B {\displaystyle \mathbf {B} } is generated by the current I {\displaystyle I} , it does apply when the current I {\displaystyle I} is induced by the movement of magnetic field B {\displaystyle \mathbf {B} } . == Electromotive force == The magnetic force (qv × B) component of the Lorentz force is responsible for motional electromotive force (or motional EMF), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic field exerts opposite forces on electrons and nuclei in the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire. In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force (qE) term in the Lorentz Force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF called the transformer EMF, as described by the Maxwell–Faraday equation (one of the four modern Maxwell's equations). Both of these EMFs, despite their apparently distinct origins, are described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction, see below.) Einstein's special theory of relativity was partially motivated by the desire to better understand this link between the two effects. In fact, the electric and magnetic fields are different facets of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E-field can change in whole or in part to a B-field or vice versa. == Lorentz force and Faraday's law of induction == Given a loop of wire in a magnetic field, Faraday's law of induction states the induced electromotive force (EMF) in the wire is: E = − d Φ B d t {\displaystyle {\mathcal {E}}=-{\frac {\mathrm {d} \Phi _{B}}{\mathrm {d} t}}} where Φ B = ∫ Σ ( t ) B ( r , t ) ⋅ d A , {\displaystyle \Phi _{B}=\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} ,} is the magnetic flux through the loop, B is the magnetic field, Σ(t) is a surface bounded by the closed contour ∂Σ(t), at time t, dA is an infinitesimal vector area element of Σ(t) (magnitude is the area of an infinitesimal patch of surface, direction is orthogonal to that surface patch). The sign of the EMF is determined by Lenz's law. Note that this is valid for not only a stationary wire – but also for a moving wire. From Faraday's law of induction (that is valid for a moving wire, for instance in a motor) and the Maxwell Equations, the Lorentz Force can be deduced. The reverse is also true, the Lorentz force and the Maxwell Equations can be used to derive the Faraday Law. Let ∂Σ(t) be the moving wire, moving together without rotation and with constant velocity v and Σ(t) be the internal surface of the wire. The EMF around the closed path ∂Σ(t) is given by: E = ∮ ∂ Σ ( t ) F q ⋅ d ℓ {\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}{\frac {\mathbf {F} }{q}}\cdot \mathrm {d} {\boldsymbol {\ell }}} where E ′ ( r , t ) = F / q ( r , t ) {\displaystyle \mathbf {E} '(\mathbf {r} ,t)=\mathbf {F} /q(\mathbf {r} ,t)} is the electric field and dℓ is an infinitesimal vector element of the contour ∂Σ(t). 
Equating both integrals leads to the field theory form of Faraday's law, given by: E = ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = − d d t ∫ Σ ( t ) B ( r , t ) ⋅ d A . {\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-{\frac {\mathrm {d} }{\mathrm {d} t}}\int _{\Sigma (t)}\mathbf {B} (\mathbf {r} ,t)\cdot \mathrm {d} \mathbf {A} .} This result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called the (integral form of) Maxwell–Faraday equation: ∮ ∂ Σ ( t ) E ( r , t ) ⋅ d ℓ = − ∫ Σ ( t ) ∂ B ( r , t ) ∂ t ⋅ d A . {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} .} The two equations are equivalent if the wire is not moving. In case the circuit is moving with a velocity v {\displaystyle \mathbf {v} } in some direction, then, using the Leibniz integral rule and that div B = 0, gives ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = − ∫ Σ ( t ) ∂ B ( r , t ) ∂ t ⋅ d A + ∮ ∂ Σ ( t ) ( v × B ( r , t ) ) ⋅ d ℓ . {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{\Sigma (t)}{\frac {\partial \mathbf {B} (\mathbf {r} ,t)}{\partial t}}\cdot \mathrm {d} \mathbf {A} +\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\cdot \mathrm {d} {\boldsymbol {\ell }}.} Substituting the Maxwell-Faraday equation then gives ∮ ∂ Σ ( t ) E ′ ( r , t ) ⋅ d ℓ = ∮ ∂ Σ ( t ) E ( r , t ) ⋅ d ℓ + ∮ ∂ Σ ( t ) ( v × B ( r , t ) ) d ℓ {\displaystyle \oint _{\partial \Sigma (t)}\mathbf {E} '(\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}=\oint _{\partial \Sigma (t)}\mathbf {E} (\mathbf {r} ,t)\cdot \mathrm {d} {\boldsymbol {\ell }}+\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {B} (\mathbf {r} ,t)\right)\mathrm {d} {\boldsymbol {\ell }}} since this is valid for any wire position it implies that F = q E ( r , t ) + q v × B ( r , t ) . {\displaystyle \mathbf {F} =q\,\mathbf {E} (\mathbf {r} ,\,t)+q\,\mathbf {v} \times \mathbf {B} (\mathbf {r} ,\,t).} Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law. If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux ΦB linking the loop can change in several ways. For example, if the B-field varies with position, and the loop moves to a location with different B-field, ΦB will change. Alternatively, if the loop changes orientation with respect to the B-field, the B ⋅ dA differential element will change because of the different angle between B and dA, also changing ΦB. As a third example, if a portion of the circuit is swept through a uniform, time-independent B-field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface ∂Σ(t) time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in ΦB. 
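A minimal numerical check of the first of these cases (a circuit part swept through a position-dependent field; the numbers and the field profile below are illustrative assumptions, not from the article): the EMF obtained from the flux rule −dΦB/dt matches the motional EMF ∮(v × B) · dℓ, which for a sliding rod of length L reduces to B(x) L v.

```python
import numpy as np

B0, L0 = 0.4, 1.0    # field scale (T) and gradient length (m) -- illustrative values
L = 0.25             # length of the sliding rod, m
v = 2.0              # rod speed, m/s

def Bz(x):
    return B0 * (1.0 + x / L0)          # field increasing with position (illustrative profile)

t = np.linspace(0.0, 1.0, 200001)
x = v * t                               # position of the sliding rod

# Flux through the circuit between x = 0 and the rod: Phi(t) = L * integral_0^x Bz dx'
Phi = L * (B0 * x + B0 * x**2 / (2 * L0))

emf_flux = np.abs(np.gradient(Phi, t))  # |EMF| from the flux rule, -dPhi/dt
emf_motional = Bz(x) * L * v            # |EMF| from the magnetic force term over the moving rod

print(np.max(np.abs(emf_flux - emf_motional)))   # ~1e-6 V: agreement up to the finite-difference step
```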
Note that the Maxwell Faraday's equation implies that the Electric Field E is non conservative when the Magnetic Field B varies in time, and is not expressible as the gradient of a scalar field, and not subject to the gradient theorem since its curl is not zero. == Lorentz force in terms of potentials == The E and B fields can be replaced by the magnetic vector potential A and (scalar) electrostatic potential ϕ by E = − ∇ ϕ − ∂ A ∂ t B = ∇ × A {\displaystyle {\begin{aligned}\mathbf {E} &=-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}\\[1ex]\mathbf {B} &=\nabla \times \mathbf {A} \end{aligned}}} where ∇ is the gradient, ∇⋅ is the divergence, and ∇× is the curl. The force becomes F = q [ − ∇ ϕ − ∂ A ∂ t + v × ( ∇ × A ) ] . {\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {v} \times (\nabla \times \mathbf {A} )\right].} Using an identity for the triple product this can be rewritten as F = q [ − ∇ ϕ − ∂ A ∂ t + ∇ ( v ⋅ A ) − ( v ⋅ ∇ ) A ] . {\displaystyle \mathbf {F} =q\left[-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\nabla \left(\mathbf {v} \cdot \mathbf {A} \right)-\left(\mathbf {v} \cdot \nabla \right)\mathbf {A} \right].} (Notice that the coordinates and the velocity components should be treated as independent variables, so the del operator acts only on A {\displaystyle \mathbf {A} } , not on v {\displaystyle \mathbf {v} } ; thus, there is no need of using Feynman's subscript notation in the equation above.) Using the chain rule, the convective derivative of A {\displaystyle \mathbf {A} } is: d A d t = ∂ A ∂ t + ( v ⋅ ∇ ) A {\displaystyle {\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}={\frac {\partial \mathbf {A} }{\partial t}}+(\mathbf {v} \cdot \nabla )\mathbf {A} } so that the above expression becomes: F = q [ − ∇ ( ϕ − v ⋅ A ) − d A d t ] . {\displaystyle \mathbf {F} =q\left[-\nabla (\phi -\mathbf {v} \cdot \mathbf {A} )-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\right].} With v = ẋ and d d t [ ∂ ∂ x ˙ ( ϕ − x ˙ ⋅ A ) ] = − d A d t , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\partial }{\partial {\dot {\mathbf {x} }}}}\left(\phi -{\dot {\mathbf {x} }}\cdot \mathbf {A} \right)\right]=-{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}},} we can put the equation into the convenient Euler–Lagrange form where ∇ x = x ^ ∂ ∂ x + y ^ ∂ ∂ y + z ^ ∂ ∂ z {\displaystyle \nabla _{\mathbf {x} }={\hat {x}}{\dfrac {\partial }{\partial x}}+{\hat {y}}{\dfrac {\partial }{\partial y}}+{\hat {z}}{\dfrac {\partial }{\partial z}}} and ∇ x ˙ = x ^ ∂ ∂ x ˙ + y ^ ∂ ∂ y ˙ + z ^ ∂ ∂ z ˙ . {\displaystyle \nabla _{\dot {\mathbf {x} }}={\hat {x}}{\dfrac {\partial }{\partial {\dot {x}}}}+{\hat {y}}{\dfrac {\partial }{\partial {\dot {y}}}}+{\hat {z}}{\dfrac {\partial }{\partial {\dot {z}}}}.} == Lorentz force and analytical mechanics == The Lagrangian for a charged particle of mass m and charge q in an electromagnetic field equivalently describes the dynamics of the particle in terms of its energy, rather than the force exerted on it. The classical expression is given by: L = m 2 r ˙ ⋅ r ˙ + q A ⋅ r ˙ − q ϕ {\displaystyle L={\frac {m}{2}}\mathbf {\dot {r}} \cdot \mathbf {\dot {r}} +q\mathbf {A} \cdot \mathbf {\dot {r}} -q\phi } where A and ϕ are the potential fields as above. 
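To see the Lagrangian just quoted reproduce the Lorentz force, here is a small symbolic sketch using SymPy's Euler–Lagrange helper. The potentials φ = −E0 x and A = (−B0 y/2, B0 x/2, 0) are an illustrative choice (not from the article) giving uniform fields E = E0 x̂ and B = B0 ẑ.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, q, E0, B0 = sp.symbols('m q E_0 B_0', real=True)
x, y, z = (sp.Function(s)(t) for s in 'xyz')
xd, yd, zd = x.diff(t), y.diff(t), z.diff(t)

# Illustrative potentials: phi = -E0*x gives E = E0 x_hat; A = (-B0*y/2, B0*x/2, 0) gives B = B0 z_hat
phi = -E0 * x
Ax, Ay, Az = -B0 * y / 2, B0 * x / 2, 0

# Classical Lagrangian from the text: L = (m/2) |rdot|^2 + q A . rdot - q phi
L = m / 2 * (xd**2 + yd**2 + zd**2) + q * (Ax * xd + Ay * yd + Az * zd) - q * phi

for eq in euler_equations(L, [x, y, z], t):
    print(sp.simplify(eq))
# The printed equations are equivalent to
#   m x'' = q (E0 + B0 y'),   m y'' = -q B0 x',   m z'' = 0,
# i.e. m a = q (E + v x B) for this choice of fields.
```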
The quantity V = q ( ϕ − A ⋅ r ˙ ) {\displaystyle V=q(\phi -\mathbf {A} \cdot \mathbf {\dot {r}} )} can be identified as a generalized, velocity-dependent potential energy and, accordingly, F {\displaystyle \mathbf {F} } as a non-conservative force. Using the Lagrangian, the equation for the Lorentz force given above can be obtained again. The relativistic Lagrangian is L = − m c 2 1 − ( r ˙ c ) 2 + q A ( r ) ⋅ r ˙ − q ϕ ( r ) {\displaystyle L=-mc^{2}{\sqrt {1-\left({\frac {\dot {\mathbf {r} }}{c}}\right)^{2}}}+q\mathbf {A} (\mathbf {r} )\cdot {\dot {\mathbf {r} }}-q\phi (\mathbf {r} )} The action is the relativistic arclength of the path of the particle in spacetime, minus the potential energy contribution, plus an extra contribution which quantum mechanically is an extra phase a charged particle gets when it is moving along a vector potential. == Relativistic form of the Lorentz force == === Covariant form of the Lorentz force === ==== Field tensor ==== Using the metric signature (1, −1, −1, −1), the Lorentz force for a charge q can be written in covariant form: where pα is the four-momentum, defined as p α = ( p 0 , p 1 , p 2 , p 3 ) = ( γ m c , p x , p y , p z ) , {\displaystyle p^{\alpha }=\left(p_{0},p_{1},p_{2},p_{3}\right)=\left(\gamma mc,p_{x},p_{y},p_{z}\right),} τ the proper time of the particle, Fαβ the contravariant electromagnetic tensor F α β = ( 0 − E x / c − E y / c − E z / c E x / c 0 − B z B y E y / c B z 0 − B x E z / c − B y B x 0 ) {\displaystyle F^{\alpha \beta }={\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}} and U is the covariant 4-velocity of the particle, defined as: U β = ( U 0 , U 1 , U 2 , U 3 ) = γ ( c , − v x , − v y , − v z ) , {\displaystyle U_{\beta }=\left(U_{0},U_{1},U_{2},U_{3}\right)=\gamma \left(c,-v_{x},-v_{y},-v_{z}\right),} in which γ ( v ) = 1 1 − v 2 c 2 = 1 1 − v x 2 + v y 2 + v z 2 c 2 {\displaystyle \gamma (v)={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}={\frac {1}{\sqrt {1-{\frac {v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}{c^{2}}}}}}} is the Lorentz factor. The fields are transformed to a frame moving with constant relative velocity by: F ′ μ ν = Λ μ α Λ ν β F α β , {\displaystyle F'^{\mu \nu }={\Lambda ^{\mu }}_{\alpha }{\Lambda ^{\nu }}_{\beta }F^{\alpha \beta }\,,} where Λμα is the Lorentz transformation tensor. ==== Translation to vector notation ==== The α = 1 component (x-component) of the force is d p 1 d τ = q U β F 1 β = q ( U 0 F 10 + U 1 F 11 + U 2 F 12 + U 3 F 13 ) . {\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=qU_{\beta }F^{1\beta }=q\left(U_{0}F^{10}+U_{1}F^{11}+U_{2}F^{12}+U_{3}F^{13}\right).} Substituting the components of the covariant electromagnetic tensor F yields d p 1 d τ = q [ U 0 ( E x c ) + U 2 ( − B z ) + U 3 ( B y ) ] . {\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\left[U_{0}\left({\frac {E_{x}}{c}}\right)+U_{2}(-B_{z})+U_{3}(B_{y})\right].} Using the components of covariant four-velocity yields d p 1 d τ = q γ [ c ( E x c ) + ( − v y ) ( − B z ) + ( − v z ) ( B y ) ] = q γ ( E x + v y B z − v z B y ) = q γ [ E x + ( v × B ) x ] . 
{\displaystyle {\frac {\mathrm {d} p^{1}}{\mathrm {d} \tau }}=q\gamma \left[c\left({\frac {E_{x}}{c}}\right)+(-v_{y})(-B_{z})+(-v_{z})(B_{y})\right]=q\gamma \left(E_{x}+v_{y}B_{z}-v_{z}B_{y}\right)=q\gamma \left[E_{x}+\left(\mathbf {v} \times \mathbf {B} \right)_{x}\right]\,.} The calculation for α = 2, 3 (force components in the y and z directions) yields similar results, so collecting the three equations into one: d p d τ = q γ ( E + v × B ) , {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} \tau }}=q\gamma \left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right),} and since differentials in coordinate time dt and proper time dτ are related by the Lorentz factor, d t = γ ( v ) d τ , {\displaystyle dt=\gamma (v)\,d\tau ,} so we arrive at d p d t = q ( E + v × B ) . {\displaystyle {\frac {\mathrm {d} \mathbf {p} }{\mathrm {d} t}}=q\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right).} This is precisely the Lorentz force law, however, it is important to note that p is the relativistic expression, p = γ ( v ) m 0 v . {\displaystyle \mathbf {p} =\gamma (v)m_{0}\mathbf {v} \,.} === Lorentz force in spacetime algebra (STA) === The electric and magnetic fields are dependent on the velocity of an observer, so the relativistic form of the Lorentz force law can best be exhibited starting from a coordinate-independent expression for the electromagnetic and magnetic fields F {\displaystyle {\mathcal {F}}} , and an arbitrary time-direction, γ 0 {\displaystyle \gamma _{0}} . This can be settled through spacetime algebra (or the geometric algebra of spacetime), a type of Clifford algebra defined on a pseudo-Euclidean space, as E = ( F ⋅ γ 0 ) γ 0 {\displaystyle \mathbf {E} =\left({\mathcal {F}}\cdot \gamma _{0}\right)\gamma _{0}} and i B = ( F ∧ γ 0 ) γ 0 {\displaystyle i\mathbf {B} =\left({\mathcal {F}}\wedge \gamma _{0}\right)\gamma _{0}} F {\displaystyle {\mathcal {F}}} is a spacetime bivector (an oriented plane segment, just like a vector is an oriented line segment), which has six degrees of freedom corresponding to boosts (rotations in spacetime planes) and rotations (rotations in space-space planes). The dot product with the vector γ 0 {\displaystyle \gamma _{0}} pulls a vector (in the space algebra) from the translational part, while the wedge-product creates a trivector (in the space algebra) who is dual to a vector which is the usual magnetic field vector. The relativistic velocity is given by the (time-like) changes in a time-position vector v = x ˙ {\displaystyle v={\dot {x}}} , where v 2 = 1 , {\displaystyle v^{2}=1,} (which shows our choice for the metric) and the velocity is v = c v ∧ γ 0 / ( v ⋅ γ 0 ) . {\displaystyle \mathbf {v} =cv\wedge \gamma _{0}/(v\cdot \gamma _{0}).} The proper form of the Lorentz force law ('invariant' is an inadequate term because no transformation has been defined) is simply Note that the order is important because between a bivector and a vector the dot product is anti-symmetric. Upon a spacetime split like one can obtain the velocity, and fields as above yielding the usual expression. 
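The index manipulation in the translation to vector notation can be verified numerically. The sketch below (NumPy; the field values, charge and velocity are illustrative) builds the contravariant field tensor from E and B, contracts it with the covariant four-velocity, and confirms dp^i/dτ = qγ[E + v × B]_i for the three spatial components.

```python
import numpy as np

c = 299792458.0
q = 1.602e-19                                  # one elementary charge (illustrative)
E = np.array([3.0e4, -1.0e4, 2.0e4])           # V/m (illustrative)
B = np.array([0.01, 0.05, -0.02])              # T   (illustrative)
v = np.array([2.0e7, 1.0e7, -5.0e6])           # m/s (illustrative)

gamma = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)

# Contravariant field tensor F^{alpha beta} as given in the text
Ex, Ey, Ez = E
Bx, By, Bz = B
F = np.array([[0.0,   -Ex/c, -Ey/c, -Ez/c],
              [Ex/c,   0.0,  -Bz,    By  ],
              [Ey/c,   Bz,    0.0,  -Bx  ],
              [Ez/c,  -By,    Bx,    0.0 ]])

# Covariant four-velocity U_beta = gamma (c, -vx, -vy, -vz)
U_cov = gamma * np.array([c, -v[0], -v[1], -v[2]])

# dp^alpha/dtau = q U_beta F^{alpha beta}; keep the spatial components alpha = 1, 2, 3
dp_dtau = q * F @ U_cov
expected = q * gamma * (E + np.cross(v, B))

print(np.allclose(dp_dtau[1:], expected))      # True
```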
=== Lorentz force in general relativity === In the general theory of relativity the equation of motion for a particle with mass m {\displaystyle m} and charge e {\displaystyle e} , moving in a space with metric tensor g a b {\displaystyle g_{ab}} and electromagnetic field F a b {\displaystyle F_{ab}} , is given as m d u c d s − m 1 2 g a b , c u a u b = e F c b u b , {\displaystyle m{\frac {du_{c}}{ds}}-m{\frac {1}{2}}g_{ab,c}u^{a}u^{b}=eF_{cb}u^{b},} where u a = d x a / d s {\displaystyle u^{a}=dx^{a}/ds} ( d x a {\displaystyle dx^{a}} is taken along the trajectory), g a b , c = ∂ g a b / ∂ x c {\displaystyle g_{ab,c}=\partial g_{ab}/\partial x^{c}} , and d s 2 = g a b d x a d x b {\displaystyle ds^{2}=g_{ab}dx^{a}dx^{b}} . The equation can also be written as m d u c d s − m Γ a b c u a u b = e F c b u b , {\displaystyle m{\frac {du_{c}}{ds}}-m\Gamma _{abc}u^{a}u^{b}=eF_{cb}u^{b},} where Γ a b c {\displaystyle \Gamma _{abc}} is the Christoffel symbol (of the torsion-free metric connection in general relativity), or as m D u c d s = e F c b u b , {\displaystyle m{\frac {Du_{c}}{ds}}=eF_{cb}u^{b},} where D {\displaystyle D} is the covariant differential in general relativity. == Applications == The Lorentz force occurs in many devices, including: Cyclotrons and other circular path particle accelerators Mass spectrometers Velocity filters Magnetrons Lorentz force velocimetry In its manifestation as the Laplace force on an electric current in a conductor, this force occurs in many devices, including: Electric motors Railguns Linear motors Loudspeakers Magnetoplasmadynamic thrusters Electrical generators Homopolar generators Linear alternators == See also == == Notes == === Remarks === === Citations === == References == Darrigol, Olivier (2000). Electrodynamics from Ampère to Einstein. Oxford ; New York: Clarendon Press. ISBN 0-19-850594-9. Feynman, Richard Phillips; Leighton, Robert B.; Sands, Matthew L. (2006). The Feynman lectures on physics. Vol. 2. Pearson / Addison-Wesley. ISBN 0-8053-9047-2. Griffiths, David J. (2023). Introduction to Electrodynamics. Cambridge University Press. doi:10.1017/9781009397735. ISBN 978-1-009-39773-5. Jackson, John David (1998). Classical Electrodynamics. New York: John Wiley & Sons. ISBN 978-0-471-30932-1. Purcell, Edward M.; Morin, David J. (2013). Electricity and Magnetism:. Cambridge University Press. doi:10.1017/cbo9781139012973. ISBN 978-1-139-01297-3. Sadiku, Matthew N. O. (2018). Elements of electromagnetics (7th ed.). New York/Oxford: Oxford University Press. ISBN 978-0-19-069861-4. Serway, Raymond A.; Jewett, John W. Jr. (2004). Physics for scientists and engineers, with modern physics. Belmont, California: Thomson Brooks/Cole. ISBN 0-534-40846-X. Srednicki, Mark A. (2007). Quantum field theory. Cambridge, England; New York City: Cambridge University Press. ISBN 978-0-521-86449-7. == External links == Lorentz force (demonstration) Interactive Java applet on the magnetic deflection of a particle beam in a homogeneous magnetic field Archived 2011-08-13 at the Wayback Machine by Wolfgang Bauer
Wikipedia/Lorenz_force
The Friedmann equations, also known as the Friedmann–Lemaître (FL) equations, are a set of equations in physical cosmology that govern cosmic expansion in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density ρ and pressure p. The equations for negative spatial curvature were given by Friedmann in 1924. The physical models built on the Friedmann equations are called FRW or FLRW models and form the Standard Model of modern cosmology, although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s. == Assumptions == The Friedmann equations build on three assumptions:: 22.1.3  the Friedmann–Lemaître–Robertson–Walker metric, Einstein's equations for general relativity, and a perfect fluid source. The metric in turn starts with the simplifying assumption that the universe is spatially homogeneous and isotropic, that is, the cosmological principle; empirically, this is justified on scales larger than the order of 100 Mpc. The metric can be written as:: 65  c 2 d τ 2 = c 2 d t 2 − R 2 ( t ) ( d r 2 + S k 2 ( r ) d ψ 2 ) {\displaystyle c^{2}d\tau ^{2}=c^{2}dt^{2}-R^{2}(t)\left(dr^{2}+S_{k}^{2}(r)d\psi ^{2}\right)} where S − 1 ( r ) = sinh ( r ) , S 0 = 1 , S 1 = sin ( r ) . {\displaystyle S_{-1}(r)=\sinh(r),S_{0}=1,S_{1}=\sin(r).} These three possibilities correspond to parameter k of (0) flat space, (+1) a sphere of constant positive curvature or (−1) a hyperbolic space with constant negative curvature. Here the radial position has been decomposed into a time-dependent scale factor, R ( t ) {\displaystyle R(t)} , and a comoving coordinate, r {\displaystyle r} . Inserting this metric into Einstein's field equations relates the evolution of this scale factor to the pressure and energy of the matter in the universe. With the stress–energy tensor for a perfect fluid, this results in the equations described below.: 73  == Equations == There are two independent Friedmann equations for modelling a homogeneous, isotropic universe. The first is: H 2 ≡ ( R ˙ R ) 2 = 8 π G ρ 3 − k R 2 + Λ 3 , {\displaystyle H^{2}\equiv {\left({\frac {\dot {R}}{R}}\right)}^{2}={\frac {8\pi G\rho }{3}}-{\frac {k}{R^{2}}}+{\frac {\Lambda }{3}},} and the second is: R ¨ R = Λ 3 − 4 π G 3 ( ρ + 3 p ) . {\displaystyle {\frac {\ddot {R}}{R}}={\frac {\Lambda }{3}}-{\frac {4\pi G}{3}}\left(\rho +3p\right).} The term Friedmann equation is sometimes used only for the first equation. In these equations, R(t) is the cosmological scale factor, G {\displaystyle G} is the Newtonian constant of gravitation, Λ is the cosmological constant with dimension length−2, ρ is the energy density and p is the isotropic pressure. k is constant throughout a particular solution, but may vary from one solution to another. The units set the speed of light in vacuum to one. In previous equations, R, ρ, and p are functions of time. If the cosmological constant, Λ, is ignored, the term − k / R 2 {\displaystyle -k/R^{2}} in the first Friedmann equation can be interpreted as a Newtonian total energy, so the evolution of the universe pits the gravitational potential-energy term, 8 π G ρ / 3 {\displaystyle 8\pi G\rho /3} , against the kinetic-energy term, ( R ˙ / R ) 2 {\displaystyle ({\dot {R}}/R)^{2}} .
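A minimal numerical sketch of this competition, in toy dimensionless units with c = 1 and the arbitrary choice 8πGρ0R0³/3 = 2 (so that the matter-only first equation reads Ṙ² = 2/R − k): integrating its second-order form R̈ = −1/R² for k = −1, 0 and +1 shows the three qualitatively different fates.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 2.0   # toy choice of 8*pi*G*rho0*R0^3/3 in units with c = 1 (illustrative only)

def rhs(t, y):
    R, Rdot = y
    return [Rdot, -A / (2.0 * R**2)]          # Rddot = -A/(2 R^2), matter-only acceleration equation

def recollapsed(t, y):                        # stop the integration if R shrinks back to a small value
    return y[0] - 0.05
recollapsed.terminal = True

for k in (-1, 0, +1):
    R0 = 1.0
    Rdot0 = np.sqrt(A / R0 - k)               # initial rate fixed by the constraint Rdot^2 = A/R - k
    sol = solve_ivp(rhs, (0.0, 40.0), [R0, Rdot0], max_step=0.01, events=recollapsed)
    fate = "expands, turns around and recollapses" if sol.status == 1 else "expands forever"
    print(f"k = {k:+d}: R(max) = {sol.y[0].max():.2f}, {fate}")
```

In these toy units the k = +1 run turns around at R = 2 and recollapses, while the flat and open runs expand without bound, which is the behaviour described next.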
The winner depends upon the value of k in this total energy: if k is +1, gravity eventually causes the universe to contract. These conclusions will be altered if Λ is not zero. Using the first equation, the second equation can be re-expressed as: ρ ˙ = − 3 H ( ρ + p c 2 ) , {\displaystyle {\dot {\rho }}=-3H\left(\rho +{\frac {p}{c^{2}}}\right),} which eliminates Λ. Alternatively, the conservation of mass–energy: T α β ; β = 0 {\displaystyle T^{\alpha \beta }{}_{;\beta }=0} leads to the same result. === Spatial curvature === The first Friedmann equation contains a discrete parameter k = +1, 0 or −1 depending on whether the shape of the universe is a closed 3-sphere, flat (Euclidean space) or an open 3-hyperboloid, respectively. If k is positive, then the universe is "closed": some paths through the universe eventually return to their starting point. Such a universe is analogous to a sphere: finite but unbounded. If k is negative, then the universe is "open": infinite, and no paths return. If k = 0, then the universe is Euclidean (flat) and infinite.: 69  == Dimensionless scale factor == A dimensionless scale factor can be defined: a ( t ) ≡ R ( t ) R 0 {\displaystyle a(t)\equiv {\frac {R(t)}{R_{0}}}} using the present day value R 0 = R ( now ) . {\displaystyle R_{0}=R({\text{now}}).} The Friedmann equations can be written in terms of this dimensionless scale factor: H 2 ( t ) = ( a ˙ a ) 2 = 8 π G 3 [ ρ ( t ) + ρ c − ρ 0 a 2 ( t ) ] {\displaystyle H^{2}(t)=\left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\left[\rho (t)+{\frac {\rho _{c}-\rho _{0}}{a^{2}(t)}}\right]} where a ˙ = d a / d t {\displaystyle {\dot {a}}=da/dt} , ρ c = 3 H 0 2 / 8 π G {\displaystyle \rho _{c}=3H_{0}^{2}/8\pi G} , and ρ 0 = ρ ( t = now ) {\displaystyle \rho _{0}=\rho (t={\text{now}})} .: 3  == Critical density == The value of the mass–energy density ρ {\displaystyle \rho } that gives k = 0 {\displaystyle k=0} when Λ = 0 {\displaystyle \Lambda =0} is called the critical density: ρ c ≡ 3 H 2 8 π G . {\displaystyle \rho _{c}\equiv {\frac {3H^{2}}{8\pi G}}.} If the universe has higher density, ρ > ρ c {\displaystyle \rho >\rho _{c}} , then it is called "spatially closed": in this simple approximation the universe would eventually contract. On the other hand, if it has lower density, ρ < ρ c {\displaystyle \rho <\rho _{c}} , then it is called "spatially open" and expands forever. Therefore, the geometry of the universe is directly connected to its density.: 73  == Density parameter == The density parameter Ω is defined as the ratio of the actual (or observed) density ρ to the critical density ρc of the Friedmann universe:: 74  Ω := ρ ρ c = 8 π G ρ 3 H 2 . {\displaystyle \Omega :={\frac {\rho }{\rho _{c}}}={\frac {8\pi G\rho }{3H^{2}}}.} Both the density ρ ( t ) {\displaystyle \rho (t)} and the Hubble parameter H ( t ) {\displaystyle H(t)} depend upon time and thus the density parameter varies with time.: 74  The critical density is equivalent to approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of ordinary matter in the Universe is believed to be 0.2–0.25 atoms per cubic metre. A much greater density comes from the unidentified dark matter, although both ordinary and dark matter contribute in favour of contraction of the universe. However, the largest part comes from so-called dark energy, which accounts for the cosmological constant term.
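A quick arithmetic check of the figure quoted above, using the round illustrative value H0 = 70 km s⁻¹ Mpc⁻¹ (the precise number depends on the adopted Hubble constant):

```python
from math import pi
from scipy.constants import G, parsec, m_p

H0 = 70e3 / (1e6 * parsec)          # 70 km/s/Mpc converted to 1/s (round illustrative value)
rho_c = 3 * H0**2 / (8 * pi * G)    # critical density, kg/m^3

print(f"H0    = {H0:.3e} 1/s")
print(f"rho_c = {rho_c:.2e} kg/m^3")                        # ~9e-27 kg/m^3
print(f"      ~ {rho_c / m_p:.1f} hydrogen atoms per m^3")  # ~5.5, matching the 'about five atoms' figure
```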
Although the total density is equal to the critical density (exactly, up to measurement error), dark energy does not lead to contraction of the universe but rather may accelerate its expansion. An expression for the critical density is found by assuming Λ to be zero (as it is for all basic Friedmann universes) and setting the normalised spatial curvature, k, equal to zero. When the substitutions are applied to the first of the Friedmann equations given the new H 0 {\displaystyle H_{0}} value we find: ρ = 3 H 0 2 8 π G ≈ 1.10 × 10 − 26 k g m − 3 ≈ 1.88 × 10 − 26 h 2 k g m − 3 ≈ 2.78 × 10 11 h 2 M ⊙ M p c − 3 {\displaystyle {\begin{aligned}\rho ={\frac {3H_{0}^{2}}{8\pi G}}&\approx 1.10\times 10^{-26}\mathrm {kg\,m^{-3}} \\&\approx 1.88\times 10^{-26}{\rm {h}}^{2}\,{\rm {kg}}\,{\rm {m}}^{-3}\\&\approx 2.78\times 10^{11}h^{2}M_{\odot }\,{\rm {Mpc}}^{-3}\end{aligned}}} where: H 0 = 76.5 ± 2.2 k m s − 1 M p c − 1 ≈ 2.48 × 10 − 18 s − 1 {\textstyle H_{0}=76.5\pm 2.2\,\mathrm {km\,s^{-1}\,Mpc^{-1}} \approx 2.48\times 10^{-18}\mathrm {s^{-1}} } h = H 0 100 ( k m / s ) / M p c {\textstyle h={\frac {H_{0}}{100\,\mathrm {(km/s)/Mpc} }}} ρ c = 8.5 × 10 − 27 k g / m 3 {\displaystyle \rho _{c}=8.5\times 10^{-27}\mathrm {kg/m^{3}} } Given the value of dark energy to be Ω Λ = 0.647 {\displaystyle \Omega _{\Lambda }=0.647} This term originally was used as a means to determine the spatial geometry of the universe, where ρc is the critical density for which the spatial geometry is flat (or Euclidean). Assuming a zero vacuum energy density, if Ω is larger than unity, the space sections of the universe are closed; the universe will eventually stop expanding, then collapse. If Ω is less than unity, they are open; and the universe expands forever. However, one can also subsume the spatial curvature and vacuum energy terms into a more general expression for Ω in which case this density parameter equals exactly unity. Then it is a matter of measuring the different components, usually designated by subscripts. According to the ΛCDM model, there are important components of Ω due to baryons, cold dark matter and dark energy. The spatial geometry of the universe has been measured by the WMAP spacecraft to be nearly flat. This means that the universe can be well approximated by a model where the spatial curvature parameter k is zero; however, this does not necessarily imply that the universe is infinite: it might merely be that the universe is much larger than the part we see. The first Friedmann equation is often seen in terms of the present values of the density parameters, that is H 2 H 0 2 = Ω 0 , R a − 4 + Ω 0 , M a − 3 + Ω 0 , k a − 2 + Ω 0 , Λ . {\displaystyle {\frac {H^{2}}{H_{0}^{2}}}=\Omega _{0,\mathrm {R} }a^{-4}+\Omega _{0,\mathrm {M} }a^{-3}+\Omega _{0,k}a^{-2}+\Omega _{0,\Lambda }.} Here Ω0,R is the radiation density today (when a = 1), Ω0,M is the matter (dark plus baryonic) density today, Ω0,k = 1 − Ω0 is the "spatial curvature density" today, and Ω0,Λ is the cosmological constant or vacuum density today. === Other forms === The Hubble parameter can change over time if other parts of the equation are time dependent (in particular the mass density, the vacuum energy, or the spatial curvature). Evaluating the Hubble parameter at the present time yields Hubble's constant which is the proportionality constant of Hubble's law. Applied to a fluid with a given equation of state, the Friedmann equations yield the time evolution and geometry of the universe as a function of the fluid density. 
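The density-parameter form of the first Friedmann equation lends itself to a short numerical sketch. The values below (a flat universe with Ω0,M = 0.3, Ω0,Λ = 0.7, a small radiation term, and H0 = 70 km s⁻¹ Mpc⁻¹) are round illustrative numbers, not the article's; the age of such a model follows from t0 = ∫ da / (a H(a)) from a ≈ 0 to 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.constants import parsec

H0 = 70e3 / (1e6 * parsec)                       # s^-1 (illustrative)
Om_R, Om_M, Om_L = 8.5e-5, 0.3, 0.7              # illustrative density parameters
Om_k = 1.0 - (Om_R + Om_M + Om_L)                # spatial-curvature "density", ~0 here

def H(a):
    """Hubble parameter from H^2/H0^2 = Om_R a^-4 + Om_M a^-3 + Om_k a^-2 + Om_L."""
    return H0 * np.sqrt(Om_R * a**-4 + Om_M * a**-3 + Om_k * a**-2 + Om_L)

age_seconds, _ = quad(lambda a: 1.0 / (a * H(a)), 1e-8, 1.0)
gyr = 1e9 * 365.25 * 24 * 3600
print(f"t0 ~ {age_seconds / gyr:.1f} Gyr")       # roughly 13-14 Gyr for these parameters
```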
== FLRW models == Relativistic cosmology models based on the FLRW metric and obeying the Friedmann equations are called FRW models.: 73  Direct observation of galaxies has shown their velocities to be dominated by radial recession, validating these assumptions for cosmological models.: 65  These models are the basis of the standard model of Big Bang cosmology, including the current ΛCDM model.: 25.1.3  Applying the metric to cosmology and predicting its time evolution via the scale factor a ( t ) {\displaystyle a(t)} requires Einstein's field equations together with a way of calculating the density, ρ ( t ) , {\displaystyle \rho (t),} such as a cosmological equation of state. This process allows an approximate analytic solution of Einstein's field equations G μ ν + Λ g μ ν = κ T μ ν {\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }} , giving the Friedmann equations when the energy–momentum tensor is similarly assumed to be isotropic and homogeneous. The resulting equations are: ( a ˙ a ) 2 + k c 2 a 2 − Λ c 2 3 = κ c 4 3 ρ 2 a ¨ a + ( a ˙ a ) 2 + k c 2 a 2 − Λ c 2 = − κ c 2 p . {\displaystyle {\begin{aligned}{\left({\frac {\dot {a}}{a}}\right)}^{2}+{\frac {kc^{2}}{a^{2}}}-{\frac {\Lambda c^{2}}{3}}&={\frac {\kappa c^{4}}{3}}\rho \\[4pt]2{\frac {\ddot {a}}{a}}+{\left({\frac {\dot {a}}{a}}\right)}^{2}+{\frac {kc^{2}}{a^{2}}}-\Lambda c^{2}&=-\kappa c^{2}p.\end{aligned}}} Because the FLRW model assumes homogeneity, some popular accounts mistakenly assert that the Big Bang model cannot account for the observed lumpiness of the universe. In a strictly FLRW model, there are no clusters of galaxies or stars, since these are objects much denser than a typical part of the universe. Nonetheless, the FLRW model is used as a first approximation for the evolution of the real, lumpy universe because it is simple to calculate, and models that calculate the lumpiness in the universe are added onto the FLRW models as extensions. Most cosmologists agree that the observable universe is well approximated by an almost FLRW model, i.e., a model that follows the FLRW metric apart from primordial density fluctuations. As of 2003, the theoretical implications of the various extensions to the FLRW model appear to be well understood, and the goal is to make these consistent with observations from COBE and WMAP. === Interpretation === The pair of equations given above is equivalent to the following pair of equations ρ ˙ = − 3 a ˙ a ( ρ + p c 2 ) a ¨ a = − κ c 4 6 ( ρ + 3 p c 2 ) + Λ c 2 3 {\displaystyle {\begin{aligned}{\dot {\rho }}&=-3{\frac {\dot {a}}{a}}\left(\rho +{\frac {p}{c^{2}}}\right)\\[1ex]{\frac {\ddot {a}}{a}}&=-{\frac {\kappa c^{4}}{6}}\left(\rho +{\frac {3p}{c^{2}}}\right)+{\frac {\Lambda c^{2}}{3}}\end{aligned}}} with k {\displaystyle k} , the spatial curvature index, serving as a constant of integration for the first equation. The first equation can also be derived from thermodynamical considerations and is equivalent to the first law of thermodynamics, assuming the expansion of the universe is an adiabatic process (which is implicitly assumed in the derivation of the Friedmann–Lemaître–Robertson–Walker metric). The second equation states that both the energy density and the pressure cause the expansion rate of the universe a ˙ {\displaystyle {\dot {a}}} to decrease, i.e., both cause a deceleration in the expansion of the universe.
This is a consequence of gravitation, with pressure playing a similar role to that of energy (or mass) density, according to the principles of general relativity. The cosmological constant, on the other hand, causes an acceleration in the expansion of the universe. === Cosmological constant === The cosmological constant term can be omitted if we make the following replacements ρ → ρ − Λ κ c 2 , p → p + Λ κ . {\displaystyle {\begin{aligned}\rho &\to \rho -{\frac {\Lambda }{\kappa c^{2}}},&p&\to p+{\frac {\Lambda }{\kappa }}.\end{aligned}}} Therefore, the cosmological constant can be interpreted as arising from a form of energy that has negative pressure, equal in magnitude to its (positive) energy density: p = − ρ c 2 , {\displaystyle p=-\rho c^{2}\,,} which is an equation of state of vacuum with dark energy. An attempt to generalize this to p = w ρ c 2 {\displaystyle p=w\rho c^{2}} would not have general invariance without further modification. In fact, in order to get a term that causes an acceleration of the universe expansion, it is enough to have a scalar field that satisfies p < − ρ c 2 3 . {\displaystyle p<-{\frac {\rho c^{2}}{3}}.} Such a field is sometimes called quintessence. === Newtonian interpretation === This is due to McCrea and Milne, although sometimes incorrectly ascribed to Friedmann. The Friedmann equations are equivalent to this pair of equations: − a 3 ρ ˙ = 3 a 2 a ˙ ρ + 3 a 2 p a ˙ c 2 a ˙ 2 2 − κ c 4 a 3 ρ 6 a = − k c 2 2 . {\displaystyle {\begin{aligned}-a^{3}{\dot {\rho }}=3a^{2}{\dot {a}}\rho +{\frac {3a^{2}p{\dot {a}}}{c^{2}}}\,\\[1ex]{\frac {{\dot {a}}^{2}}{2}}-{\frac {\kappa c^{4}a^{3}\rho }{6a}}=-{\frac {kc^{2}}{2}}\,.\end{aligned}}} The first equation says that the decrease in the mass contained in a fixed cube (whose side is momentarily a) is the amount that leaves through the sides due to the expansion of the universe plus the mass equivalent of the work done by pressure against the material being expelled. This is the conservation of mass–energy (first law of thermodynamics) contained within a part of the universe. The second equation says that the kinetic energy (seen from the origin) of a particle of unit mass moving with the expansion plus its (negative) gravitational potential energy (relative to the mass contained in the sphere of matter closer to the origin) is equal to a constant related to the curvature of the universe. In other words, the energy (relative to the origin) of a co-moving particle in free-fall is conserved. General relativity merely adds a connection between the spatial curvature of the universe and the energy of such a particle: positive total energy implies negative curvature and negative total energy implies positive curvature. The cosmological constant term is assumed to be treated as dark energy and thus merged into the density and pressure terms. During the Planck epoch, one cannot neglect quantum effects. So they may cause a deviation from the Friedmann equations. == Useful solutions == The Friedmann equations can be solved exactly in presence of a perfect fluid with equation of state p = w ρ c 2 , {\displaystyle p=w\rho c^{2},} where p is the pressure, ρ is the mass density of the fluid in the comoving frame and w is some constant. In spatially flat case (k = 0), the solution for the scale factor is a ( t ) = a 0 t 2 3 ( w + 1 ) {\displaystyle a(t)=a_{0}\,t^{\frac {2}{3(w+1)}}} where a0 is some integration constant to be fixed by the choice of initial conditions. 
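A short SymPy check (illustrative, using the matter and radiation cases w = 0 and w = 1/3 discussed next) that the quoted power law really solves the spatially flat equation: with a ∝ t^(2/(3(w+1))) and ρ ∝ a^(−3(1+w)) (the scaling derived in the Mixtures subsection below), the ratio of (ȧ/a)² to 8πGρ/3 is independent of time.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a0, rho0, G = sp.symbols('a_0 rho_0 G', positive=True)

for w in (0, sp.Rational(1, 3)):               # matter- and radiation-dominated cases from the text
    a = a0 * t**(sp.Rational(2, 3) / (w + 1))  # candidate power-law solution a ~ t^(2/(3(w+1)))
    rho = rho0 * (a / a0)**(-3 * (w + 1))      # rho ~ a^(-3(1+w)), from the Mixtures subsection below
    lhs = (a.diff(t) / a)**2                   # (adot/a)^2
    rhs = sp.Rational(8, 3) * sp.pi * G * rho  # 8*pi*G*rho/3 (flat case, k = 0, Lambda = 0)
    print(w, sp.simplify(lhs / rhs))           # a t-independent constant, so the power law solves the equation
```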
This family of solutions labelled by w is extremely important for cosmology. For example, w = 0 describes a matter-dominated universe, where the pressure is negligible with respect to the mass density. From the generic solution one easily sees that in a matter-dominated universe the scale factor goes as a ( t ) ∝ t 2 / 3 matter-dominated {\displaystyle a(t)\propto t^{2/3}\qquad {\text{matter-dominated}}} Another important example is the case of a radiation-dominated universe, namely when w = 1/3. This leads to a ( t ) ∝ t 1 / 2 radiation-dominated {\displaystyle a(t)\propto t^{1/2}\qquad {\text{radiation-dominated}}} Note that this solution is not valid for domination of the cosmological constant, which corresponds to w = −1. In this case the energy density is constant and the scale factor grows exponentially. Solutions for other values of k can be found at Tersic, Balsa. "Lecture Notes on Astrophysics". Retrieved 24 February 2022. === Mixtures === If the matter is a mixture of two or more non-interacting fluids each with such an equation of state, then ρ ˙ f = − 3 H ( ρ f + p f c 2 ) {\displaystyle {\dot {\rho }}_{f}=-3H\left(\rho _{f}+{\frac {p_{f}}{c^{2}}}\right)} holds separately for each such fluid f. In each case, ρ ˙ f = − 3 H ( ρ f + w f ρ f ) {\displaystyle {\dot {\rho }}_{f}=-3H\left(\rho _{f}+w_{f}\rho _{f}\right)\,} from which we get ρ f ∝ a − 3 ( 1 + w f ) . {\displaystyle {\rho }_{f}\propto a^{-3\left(1+w_{f}\right)}\,.} For example, one can form a linear combination of such terms ρ = A a − 3 + B a − 4 + C a 0 {\displaystyle \rho =Aa^{-3}+Ba^{-4}+Ca^{0}\,} where A is the density of "dust" (ordinary matter, w = 0) when a = 1; B is the density of radiation (w = 1/3) when a = 1; and C is the density of "dark energy" (w = −1). One then substitutes this into ( a ˙ a ) 2 = 8 π G 3 ρ − k c 2 a 2 {\displaystyle \left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\rho -{\frac {kc^{2}}{a^{2}}}} and solves for a as a function of time. == History == Friedmann published two cosmology papers in the 1922–1924 time frame. He adopted the same homogeneity and isotropy assumptions used by Albert Einstein and by Willem de Sitter in their papers, both published in 1917. Both of the earlier works also assumed the universe was static, eternally unchanging. Einstein postulated an additional term to his equations of general relativity to ensure this stability. In his paper, de Sitter showed that spacetime had curvature even in the absence of matter: the new equations of general relativity implied that a vacuum had properties that altered spacetime.: 152  The idea of a static universe was a fundamental assumption of philosophy and science. However, Friedmann abandoned the idea in his first paper "On the curvature of space". Starting with Einstein's 10 equations of relativity, Friedmann applies the symmetry of an isotropic universe and a simple model for mass-energy density to derive a relationship between that density and the curvature of spacetime. He demonstrates that, in addition to one static solution, many time-dependent solutions also exist.: 157  Friedmann's second paper, "On the possibility of a world with constant negative curvature," published in 1924, explored more complex geometrical ideas. This paper established the idea that the finiteness of spacetime was not a property that could be established based on the equations of general relativity alone: both finite and infinite geometries could be used to give solutions.
Friedmann used two conceptions of a three-dimensional sphere as an analogy: a trip at constant latitude could return to its starting point, or the sphere might have an infinite number of sheets so that the trip never repeats.: 167  Friedmann's papers were largely ignored except – initially – by Einstein, who actively dismissed them. However, once Edwin Hubble published astronomical evidence that the universe was expanding, Einstein became convinced. Unfortunately for Friedmann, Georges Lemaître independently discovered some aspects of the same solutions and wrote persuasively about the concept of a universe born from a "primordial atom". Thus, historians give these two scientists equal billing for the discovery. == In popular culture == Several students at Tsinghua University (CCP leader Xi Jinping's alma mater) participating in the 2022 COVID-19 protests in China carried placards with Friedmann equations scrawled on them, interpreted by some as a play on the words "Free man". Others have interpreted the use of the equations as a call to “open up” China and stop its Zero Covid policy, as the Friedmann equations relate to the expansion, or “opening”, of the universe. == See also == Mathematics of general relativity Solutions of the Einstein field equations == Sources == == Further reading == Liebscher, Dierck-Ekkehard (2005). "Expansion". Cosmology. Berlin: Springer. pp. 53–77. ISBN 3-540-23261-3.
Wikipedia/Friedmann_equations
The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by French engineer and physicist Augustin-Jean Fresnel () who was the first to understand that light is a transverse wave, when no one realized that the waves were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface. == Overview == When light strikes the interface between a medium with refractive index n1 and a second medium with refractive index n2, both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the reflected wave's electric field to the incident wave's electric field, and the ratio of the transmitted wave's electric field to the incident wave's electric field, for each of two components of polarization. (The magnetic fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface. The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations. === S and P polarizations === There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or "randomly polarized") light has an equal amount of power in each of two linear polarizations. The s polarization refers to polarization of a wave's electric field normal to the plane of incidence (the z direction in the derivation below); then the magnetic field is in the plane of incidence. The p polarization refers to polarization of the electric field in the plane of incidence (the xy plane in the derivation below); then the magnetic field is normal to the plane of incidence. The names "s" and "p" for the polarization components refer to German "senkrecht" (perpendicular or normal) and "parallel" (parallel to the plane of incidence). Although the reflection and transmission are dependent on polarization, at normal incidence (θ = 0) there is no distinction between them so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true). == Configuration == In the diagram on the right, an incident plane wave in the direction of the ray IO strikes the interface between two media of refractive indices n1 and n2 at point O. Part of the wave is reflected in the direction OR, and part refracted in the direction OT. The angles that the incident, reflected and refracted rays make to the normal of the interface are given as θi, θr and θt, respectively. The relationship between these angles is given by the law of reflection: θ i = θ r , {\displaystyle \theta _{\mathrm {i} }=\theta _{\mathrm {r} },} and Snell's law: n 1 sin ⁡ θ i = n 2 sin ⁡ θ t . 
{\displaystyle n_{1}\sin \theta _{\mathrm {i} }=n_{2}\sin \theta _{\mathrm {t} }.} The behavior of light striking the interface is explained by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. The ratio of waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine power coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude. == Power (intensity) reflection and transmission coefficients == We call the fraction of the incident power that is reflected from the interface the reflectance (or reflectivity, or power reflection coefficient) R, and the fraction that is refracted into the second medium is called the transmittance (or transmissivity, or power transmission coefficient) T. Note that these are what would be measured right at each side of an interface and do not account for attenuation of a wave in an absorbing medium following transmission or reflection. The reflectance for s-polarized light is R s = | Z 2 cos ⁡ θ i − Z 1 cos ⁡ θ t Z 2 cos ⁡ θ i + Z 1 cos ⁡ θ t | 2 , {\displaystyle R_{\mathrm {s} }=\left|{\frac {Z_{2}\cos \theta _{\mathrm {i} }-Z_{1}\cos \theta _{\mathrm {t} }}{Z_{2}\cos \theta _{\mathrm {i} }+Z_{1}\cos \theta _{\mathrm {t} }}}\right|^{2},} while the reflectance for p-polarized light is R p = | Z 2 cos ⁡ θ t − Z 1 cos ⁡ θ i Z 2 cos ⁡ θ t + Z 1 cos ⁡ θ i | 2 , {\displaystyle R_{\mathrm {p} }=\left|{\frac {Z_{2}\cos \theta _{\mathrm {t} }-Z_{1}\cos \theta _{\mathrm {i} }}{Z_{2}\cos \theta _{\mathrm {t} }+Z_{1}\cos \theta _{\mathrm {i} }}}\right|^{2},} where Z1 and Z2 are the wave impedances of media 1 and 2, respectively. We assume that the media are non-magnetic (i.e., μ1 = μ2 = μ0), which is typically a good approximation at optical frequencies (and for transparent media at other frequencies). Then the wave impedances are determined solely by the refractive indices n1 and n2: Z i = Z 0 n i , {\displaystyle Z_{i}={\frac {Z_{0}}{n_{i}}}\,,} where Z0 is the impedance of free space and i = 1, 2. Making this substitution, we obtain equations using the refractive indices: R s = | n 1 cos ⁡ θ i − n 2 cos ⁡ θ t n 1 cos ⁡ θ i + n 2 cos ⁡ θ t | 2 = | n 1 cos ⁡ θ i − n 2 1 − ( n 1 n 2 sin ⁡ θ i ) 2 n 1 cos ⁡ θ i + n 2 1 − ( n 1 n 2 sin ⁡ θ i ) 2 | 2 , {\displaystyle R_{\mathrm {s} }=\left|{\frac {n_{1}\cos \theta _{\mathrm {i} }-n_{2}\cos \theta _{\mathrm {t} }}{n_{1}\cos \theta _{\mathrm {i} }+n_{2}\cos \theta _{\mathrm {t} }}}\right|^{2}=\left|{\frac {n_{1}\cos \theta _{\mathrm {i} }-n_{2}{\sqrt {1-\left({\frac {n_{1}}{n_{2}}}\sin \theta _{\mathrm {i} }\right)^{2}}}}{n_{1}\cos \theta _{\mathrm {i} }+n_{2}{\sqrt {1-\left({\frac {n_{1}}{n_{2}}}\sin \theta _{\mathrm {i} }\right)^{2}}}}}\right|^{2}\!,} R p = | n 1 cos ⁡ θ t − n 2 cos ⁡ θ i n 1 cos ⁡ θ t + n 2 cos ⁡ θ i | 2 = | n 1 1 − ( n 1 n 2 sin ⁡ θ i ) 2 − n 2 cos ⁡ θ i n 1 1 − ( n 1 n 2 sin ⁡ θ i ) 2 + n 2 cos ⁡ θ i | 2 . 
{\displaystyle R_{\mathrm {p} }=\left|{\frac {n_{1}\cos \theta _{\mathrm {t} }-n_{2}\cos \theta _{\mathrm {i} }}{n_{1}\cos \theta _{\mathrm {t} }+n_{2}\cos \theta _{\mathrm {i} }}}\right|^{2}=\left|{\frac {n_{1}{\sqrt {1-\left({\frac {n_{1}}{n_{2}}}\sin \theta _{\mathrm {i} }\right)^{2}}}-n_{2}\cos \theta _{\mathrm {i} }}{n_{1}{\sqrt {1-\left({\frac {n_{1}}{n_{2}}}\sin \theta _{\mathrm {i} }\right)^{2}}}+n_{2}\cos \theta _{\mathrm {i} }}}\right|^{2}\!.} The second form of each equation is derived from the first by eliminating θt using Snell's law and trigonometric identities. As a consequence of conservation of energy, one can find the transmitted power (or more correctly, irradiance: power per unit area) simply as the portion of the incident power that isn't reflected:  T s = 1 − R s {\displaystyle T_{\mathrm {s} }=1-R_{\mathrm {s} }} and T p = 1 − R p {\displaystyle T_{\mathrm {p} }=1-R_{\mathrm {p} }} Note that all such intensities are measured in terms of a wave's irradiance in the direction normal to the interface; this is also what is measured in typical experiments. That number could be obtained from irradiances in the direction of an incident or reflected wave (given by the magnitude of a wave's Poynting vector) multiplied by cos θ for a wave at an angle θ to the normal direction (or equivalently, taking the dot product of the Poynting vector with the unit vector normal to the interface). This complication can be ignored in the case of the reflection coefficient, since cos θi = cos θr, so that the ratio of reflected to incident irradiance in the wave's direction is the same as in the direction normal to the interface. Although these relationships describe the basic physics, in many practical applications one is concerned with "natural light" that can be described as unpolarized. That means that there is an equal amount of power in the s and p polarizations, so that the effective reflectivity of the material is just the average of the two reflectivities: R e f f = 1 2 ( R s + R p ) . {\displaystyle R_{\mathrm {eff} }={\frac {1}{2}}\left(R_{\mathrm {s} }+R_{\mathrm {p} }\right).} For low-precision applications involving unpolarized light, such as computer graphics, rather than rigorously computing the effective reflection coefficient for each angle, Schlick's approximation is often used. === Special cases === ==== Normal incidence ==== For the case of normal incidence, θi = θt = 0, and there is no distinction between s and p polarization. Thus, the reflectance simplifies to R 0 = | n 1 − n 2 n 1 + n 2 | 2 . {\displaystyle R_{0}=\left|{\frac {n_{1}-n_{2}}{n_{1}+n_{2}}}\right|^{2}\,.} For common glass (n2 ≈ 1.5) surrounded by air (n1 = 1), the power reflectance at normal incidence can be seen to be about 4%, or 8% accounting for both sides of a glass pane. ==== Brewster's angle ==== At a dielectric interface from n1 to n2, there is a particular angle of incidence at which Rp goes to zero and a p-polarised incident wave is purely refracted, thus all reflected light is s-polarised. This angle is known as Brewster's angle, and is around 56° for n1 = 1 and n2 = 1.5 (typical glass). ==== Total internal reflection ==== When light travelling in a denser medium strikes the surface of a less dense medium (i.e., n1 > n2), beyond a particular incidence angle known as the critical angle, all light is reflected and Rs = Rp = 1. 
This phenomenon, known as total internal reflection, occurs at incidence angles for which Snell's law predicts that the sine of the angle of refraction would exceed unity (whereas in fact sin θ ≤ 1 for all real θ). For glass with n = 1.5 surrounded by air, the critical angle is approximately 42°. ==== 45° incidence ==== Reflection at 45° incidence is very commonly used for making 90° turns. For the case of light traversing from a less dense medium into a denser one at 45° incidence (θ = 45°), it follows algebraically from the above equations that Rp equals the square of Rs: R p = R s 2 {\displaystyle R_{\text{p}}=R_{\text{s}}^{2}} This can be used to either verify the consistency of the measurements of Rs and Rp, or to derive one of them when the other is known. This relationship is only valid for the simple case of a single plane interface between two homogeneous materials, not for films on substrates, where a more complex analysis is required. Measurements of Rs and Rp at 45° can be used to estimate the reflectivity at normal incidence. The "average of averages" obtained by calculating first the arithmetic as well as the geometric average of Rs and Rp, and then averaging these two averages again arithmetically, gives a value for R0 with an error of less than about 3% for most common optical materials. This is useful because measurements at normal incidence can be difficult to achieve in an experimental setup since the incoming beam and the detector will obstruct each other. However, since the dependence of Rs and Rp on the angle of incidence for angles below 10° is very small, a measurement at about 5° will usually be a good approximation for normal incidence, while allowing for a separation of the incoming and reflected beam. == Complex amplitude reflection and transmission coefficients == The above equations relating powers (which could be measured with a photometer for instance) are derived from the Fresnel equations which solve the physical problem in terms of electromagnetic field complex amplitudes, i.e., considering phase shifts in addition to their amplitudes. Those underlying equations supply generally complex-valued ratios of those EM fields and may take several different forms, depending on the formalism used. The complex amplitude coefficients for reflection and transmission are usually represented by lower case r and t (whereas the power coefficients are capitalized). As before, we are assuming the magnetic permeability, µ of both media to be equal to the permeability of free space µ0 as is essentially true of all dielectrics at optical frequencies. In the following equations and graphs, we adopt the following conventions. For s polarization, the reflection coefficient r is defined as the ratio of the reflected wave's complex electric field amplitude to that of the incident wave, whereas for p polarization r is the ratio of the waves complex magnetic field amplitudes (or equivalently, the negative of the ratio of their electric field amplitudes). The transmission coefficient t is the ratio of the transmitted wave's complex electric field amplitude to that of the incident wave, for either polarization. The coefficients r and t are generally different between the s and p polarizations, and even at normal incidence (where the designations s and p do not even apply!) the sign of r is reversed depending on whether the wave is considered to be s or p polarized, an artifact of the adopted sign convention (see graph for an air-glass interface at 0° incidence). 
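Before turning to the amplitude coefficients themselves, the power-coefficient special cases above lend themselves to a quick numerical check. The following minimal Python sketch is only an illustration, assuming an air–glass interface with example values n1 = 1 and n2 = 1.5; it reproduces the roughly 4% normal-incidence reflectance, Brewster's angle near 56°, the glass-to-air critical angle near 42°, and the 45° relation Rp = Rs².

import numpy as np

n1, n2 = 1.0, 1.5  # assumed example values: air and common glass

def R_s(theta_i, n1, n2):
    # s-polarized power reflectance, refractive-index form given above
    cos_i = np.cos(theta_i)
    cos_t = np.sqrt(1 - (n1 / n2 * np.sin(theta_i)) ** 2 + 0j)
    return abs((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2

def R_p(theta_i, n1, n2):
    # p-polarized power reflectance, refractive-index form given above
    cos_i = np.cos(theta_i)
    cos_t = np.sqrt(1 - (n1 / n2 * np.sin(theta_i)) ** 2 + 0j)
    return abs((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2

print(R_s(0.0, n1, n2))                    # ~0.04: about 4% at normal incidence
print(np.degrees(np.arctan(n2 / n1)))      # ~56.3 degrees: Brewster's angle
print(np.degrees(np.arcsin(n1 / n2)))      # ~41.8 degrees: critical angle for the reversed, glass-to-air case
theta_45 = np.radians(45.0)
print(np.isclose(R_p(theta_45, n1, n2), R_s(theta_45, n1, n2) ** 2))  # True: Rp = Rs^2 at 45 degrees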
The equations consider a plane wave incident on a plane interface at angle of incidence θ i {\displaystyle \theta _{\mathrm {i} }} , a wave reflected at angle θ r = θ i {\displaystyle \theta _{\mathrm {r} }=\theta _{\mathrm {i} }} , and a wave transmitted at angle θ t {\displaystyle \theta _{\mathrm {t} }} . In the case of an interface into an absorbing material (where n is complex) or total internal reflection, the angle of transmission does not generally evaluate to a real number. In that case, however, meaningful results can be obtained using formulations of these relationships in which trigonometric functions and geometric angles are avoided; the inhomogeneous waves launched into the second medium cannot be described using a single propagation angle. Using this convention, r s = n 1 cos ⁡ θ i − n 2 cos ⁡ θ t n 1 cos ⁡ θ i + n 2 cos ⁡ θ t , t s = 2 n 1 cos ⁡ θ i n 1 cos ⁡ θ i + n 2 cos ⁡ θ t , r p = n 2 cos ⁡ θ i − n 1 cos ⁡ θ t n 2 cos ⁡ θ i + n 1 cos ⁡ θ t , t p = 2 n 1 cos ⁡ θ i n 2 cos ⁡ θ i + n 1 cos ⁡ θ t . {\displaystyle {\begin{aligned}r_{\text{s}}&={\frac {n_{1}\cos \theta _{\text{i}}-n_{2}\cos \theta _{\text{t}}}{n_{1}\cos \theta _{\text{i}}+n_{2}\cos \theta _{\text{t}}}},\\[3pt]t_{\text{s}}&={\frac {2n_{1}\cos \theta _{\text{i}}}{n_{1}\cos \theta _{\text{i}}+n_{2}\cos \theta _{\text{t}}}},\\[3pt]r_{\text{p}}&={\frac {n_{2}\cos \theta _{\text{i}}-n_{1}\cos \theta _{\text{t}}}{n_{2}\cos \theta _{\text{i}}+n_{1}\cos \theta _{\text{t}}}},\\[3pt]t_{\text{p}}&={\frac {2n_{1}\cos \theta _{\text{i}}}{n_{2}\cos \theta _{\text{i}}+n_{1}\cos \theta _{\text{t}}}}.\end{aligned}}} For the case where the magnetic permeabilities are non-negligible, the equations change such that every appearance of n i {\displaystyle n_{i}} is replaced by n i / μ i {\displaystyle n_{i}/\mu _{i}} (for both i = 1 , 2 {\displaystyle i=1,2} ). One can see that ts = rs + 1 and ⁠n2/n1⁠tp = rp + 1. One can write very similar equations applying to the ratio of the waves' magnetic fields, but comparison of the electric fields is more conventional. Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the power reflection coefficient R is just the squared magnitude of r:  R = | r | 2 . {\displaystyle R=|r|^{2}.} On the other hand, calculation of the power transmission coefficient T is less straightforward, since the light travels in different directions in the two media. What's more, the wave impedances in the two media differ; power (irradiance) is given by the square of the electric field amplitude divided by the characteristic impedance of the medium (or by the square of the magnetic field multiplied by the characteristic impedance). This results in: T = n 2 cos ⁡ θ t n 1 cos ⁡ θ i | t | 2 {\displaystyle T={\frac {n_{2}\cos \theta _{\text{t}}}{n_{1}\cos \theta _{\text{i}}}}|t|^{2}} using the above definition of t. The introduced factor of ⁠n2/n1⁠ is the reciprocal of the ratio of the media's wave impedances. The cos(θ) factors adjust the waves' powers so they are reckoned in the direction normal to the interface, for both the incident and transmitted waves, so that full power transmission corresponds to T = 1. In the case of total internal reflection where the power transmission T is zero, t nevertheless describes the electric field (including its phase) just beyond the interface. This is an evanescent field which does not propagate as a wave (thus T = 0) but has nonzero values very close to the interface. 
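As a rough numerical illustration of this evanescent-wave regime (a sketch only, not part of the formal treatment), the coefficients above can be evaluated with a complex cos θt obtained from Snell's law, assuming a glass-to-air interface with example values n1 = 1.5 and n2 = 1. Beyond the critical angle the magnitudes of rs and rp are unity while their phases differ; the sign of those phases depends on the chosen square-root branch and time convention.

import numpy as np

n1, n2 = 1.5, 1.0            # assumed example: glass to air
theta_i = np.radians(60.0)   # beyond the ~41.8 degree critical angle

cos_i = np.cos(theta_i)
# cos(theta_t) from Snell's law; the principal complex square root handles total internal reflection
cos_t = np.sqrt(1 - (n1 / n2 * np.sin(theta_i)) ** 2 + 0j)

r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
t_s = 2 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)

print(abs(r_s), abs(r_p))                                      # both 1.0: all power is reflected
print(np.degrees(np.angle(r_s)), np.degrees(np.angle(r_p)))    # differing s and p phase shifts
print(abs(t_s))                                                # nonzero: the evanescent field just beyond the interface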
The phase shift of the reflected wave on total internal reflection can similarly be obtained from the phase angles of rp and rs (whose magnitudes are unity in this case). These phase shifts are different for s and p waves, which is the well-known principle by which total internal reflection is used to effect polarization transformations. === Alternative forms === In the above formula for rs, if we put n 2 = n 1 sin ⁡ θ i / sin ⁡ θ t {\displaystyle n_{2}=n_{1}\sin \theta _{\text{i}}/\sin \theta _{\text{t}}} (Snell's law) and multiply the numerator and denominator by ⁠1/n1⁠ sin θt, we obtain  r s = − sin ⁡ ( θ i − θ t ) sin ⁡ ( θ i + θ t ) . {\displaystyle r_{\text{s}}=-{\frac {\sin(\theta _{\text{i}}-\theta _{\text{t}})}{\sin(\theta _{\text{i}}+\theta _{\text{t}})}}.} If we do likewise with the formula for rp, the result is easily shown to be equivalent to  r p = tan ⁡ ( θ i − θ t ) tan ⁡ ( θ i + θ t ) . {\displaystyle r_{\text{p}}={\frac {\tan(\theta _{\text{i}}-\theta _{\text{t}})}{\tan(\theta _{\text{i}}+\theta _{\text{t}})}}.} These formulas  are known respectively as Fresnel's sine law and Fresnel's tangent law. Although at normal incidence these expressions reduce to 0/0, one can see that they yield the correct results in the limit as θi → 0. == Multiple surfaces == When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is few micrometers; it can be much larger for light from a laser. An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry–Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference. The transfer-matrix method, or the recursive Rouard method  can be used to solve multiple-surface problems. == History == In 1808, Étienne-Louis Malus discovered that when a ray of light was reflected off a non-metallic surface at the appropriate angle, it behaved like one of the two rays emerging from a doubly-refractive calcite crystal. He later coined the term polarization to describe this behavior. In 1815, the dependence of the polarizing angle on the refractive index was determined experimentally by David Brewster. But the reason for that dependence was such a deep mystery that in late 1817, Thomas Young was moved to write: [T]he great difficulty of all, which is to assign a sufficient reason for the reflection or nonreflection of a polarised ray, will probably long remain, to mortify the vanity of an ambitious philosophy, completely unresolved by any theory. In 1821, however, Augustin-Jean Fresnel derived results equivalent to his sine and tangent laws (above), by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. 
Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle. The experimental confirmation was reported in a "postscript" to the work in which Fresnel first revealed his theory that light waves, including "unpolarized" waves, were purely transverse. Details of Fresnel's derivation, including the modern forms of the sine law and tangent law, were given later, in a memoir read to the French Academy of Sciences in January 1823. That derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration. The first derivation from electromagnetic principles was given by Hendrik Lorentz in 1875. In the same memoir of January 1823, Fresnel found that for angles of incidence greater than the critical angle, his formulas for the reflection coefficients (rs and rp) gave complex values with unit magnitudes. Noting that the magnitude, as usual, represented the ratio of peak amplitudes, he guessed that the argument represented the phase shift, and verified the hypothesis experimentally. The verification involved calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions), subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and checking that the final polarization was circular. Thus he finally had a quantitative theory for what we now call the Fresnel rhomb — a device that he had been using in experiments, in one form or another, since 1817 (see Fresnel rhomb § History). The success of the complex reflection coefficient inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. Four weeks before he presented his completed theory of total internal reflection and the rhomb, Fresnel submitted a memoir  in which he introduced the needed terms linear polarization, circular polarization, and elliptical polarization, and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them — hence the orientation of their linearly-polarized resultant — will vary continuously with distance. Thus Fresnel's interpretation of the complex values of his reflection coefficients marked the confluence of several streams of his research and, arguably, the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see Augustin-Jean Fresnel). == Derivation == Here we systematically derive the above relations from electromagnetic premises. === Material parameters === In order to compute meaningful Fresnel coefficients, we must assume that the medium is (approximately) linear and homogeneous. 
If the medium is also isotropic, the four field vectors E, B, D, H  are related by D = ϵ E B = μ H , {\displaystyle {\begin{aligned}\mathbf {D} &=\epsilon \mathbf {E} \\\mathbf {B} &=\mu \mathbf {H} \,,\end{aligned}}} where ϵ and μ are scalars, known respectively as the (electric) permittivity and the (magnetic) permeability of the medium. For vacuum, these have the values ϵ0 and μ0, respectively. Hence we define the relative permittivity (or dielectric constant) ϵrel = ϵ/ϵ0, and the relative permeability μrel = μ/μ0. In optics it is common to assume that the medium is non-magnetic, so that μrel = 1. For ferromagnetic materials at radio/microwave frequencies, larger values of μrel must be taken into account. But, for optically transparent media, and for all other materials at optical frequencies (except possible metamaterials), μrel is indeed very close to 1; that is, μ ≈ μ0. In optics, one usually knows the refractive index n of the medium, which is the ratio of the speed of light in vacuum (c) to the speed of light in the medium. In the analysis of partial reflection and transmission, one is also interested in the electromagnetic wave impedance Z, which is the ratio of the amplitude of E to the amplitude of H. It is therefore desirable to express n and Z in terms of ϵ and μ, and thence to relate Z to n. The last-mentioned relation, however, will make it convenient to derive the reflection coefficients in terms of the wave admittance Y, which is the reciprocal of the wave impedance Z. In the case of uniform plane sinusoidal waves, the wave impedance or admittance is known as the intrinsic impedance or admittance of the medium. This case is the one for which the Fresnel coefficients are to be derived. === Electromagnetic plane waves === In a uniform plane sinusoidal electromagnetic wave, the electric field E has the form where Ek is the (constant) complex amplitude vector, i is the imaginary unit, k is the wave vector (whose magnitude k is the angular wavenumber), r is the position vector, ω is the angular frequency, t is time, and it is understood that the real part of the expression is the physical field. The value of the expression is unchanged if the position r varies in a direction normal to k; hence k is normal to the wavefronts. To advance the phase by the angle ϕ, we replace ωt by ωt + ϕ (that is, we replace −ωt by −ωt − ϕ), with the result that the (complex) field is multiplied by e−iϕ. So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when the field (1) is factored as Ek eik⋅re−iωt, where the last factor contains the time-dependence. That factor also implies that differentiation w.r.t. time corresponds to multiplication by −iω.  If ℓ is the component of r in the direction of k, the field (1) can be written Ek ei(kℓ−ωt). If the argument of ei(⋯) is to be constant, ℓ must increase at the velocity ω / k , {\displaystyle \omega /k\,,\,} known as the phase velocity (vp). This in turn is equal to c / n {\displaystyle c/n} . Solving for k gives As usual, we drop the time-dependent factor e−iωt, which is understood to multiply every complex field quantity. The electric field for a uniform plane sine wave will then be represented by the location-dependent phasor For fields of that form, Faraday's law and the Maxwell-Ampère law respectively reduce to  ω B = k × E ω D = − k × H . 
{\displaystyle {\begin{aligned}\omega \mathbf {B} &=\mathbf {k} \times \mathbf {E} \\\omega \mathbf {D} &=-\mathbf {k} \times \mathbf {H} \,.\end{aligned}}} Putting B = μH and D = ϵE, as above, we can eliminate B and D to obtain equations in only E and H: ω μ H = k × E ω ϵ E = − k × H . {\displaystyle {\begin{aligned}\omega \mu \mathbf {H} &=\mathbf {k} \times \mathbf {E} \\\omega \epsilon \mathbf {E} &=-\mathbf {k} \times \mathbf {H} \,.\end{aligned}}} If the material parameters ϵ and μ are real (as in a lossless dielectric), these equations show that k, E, H form a right-handed orthogonal triad, so that the same equations apply to the magnitudes of the respective vectors. Taking the magnitude equations and substituting from (2), we obtain μ c H = n E ϵ c E = n H , {\displaystyle {\begin{aligned}\mu cH&=nE\\\epsilon cE&=nH\,,\end{aligned}}} where H and E are the magnitudes of H and E. Multiplying the last two equations gives Dividing (or cross-multiplying) the same two equations gives H = YE, where This is the intrinsic admittance. From (4) we obtain the phase velocity c / n = 1 / μ ϵ {\displaystyle c/n=1{\big /}\!{\sqrt {\mu \epsilon \,}}} . For vacuum this reduces to c = 1 / μ 0 ϵ 0 {\displaystyle c=1{\big /}\!{\sqrt {\mu _{0}\epsilon _{0}}}} . Dividing the second result by the first gives n = μ rel ϵ rel . {\displaystyle n={\sqrt {\mu _{\text{rel}}\epsilon _{\text{rel}}}}\,.} For a non-magnetic medium (the usual case), this becomes ⁠ n = ϵ rel {\displaystyle n={\sqrt {\epsilon _{\text{rel}}}}} ⁠. (Taking the reciprocal of (5), we find that the intrinsic impedance is Z = μ / ϵ {\textstyle Z={\sqrt {\mu /\epsilon }}} . In vacuum this takes the value Z 0 = μ 0 / ϵ 0 ≈ 377 Ω , {\textstyle Z_{0}={\sqrt {\mu _{0}/\epsilon _{0}}}\,\approx 377\,\Omega \,,} known as the impedance of free space. By division, Z / Z 0 = μ rel / ϵ rel {\textstyle Z/Z_{0}={\sqrt {\mu _{\text{rel}}/\epsilon _{\text{rel}}}}} . For a non-magnetic medium, this becomes Z = Z 0 / ϵ rel = Z 0 / n . {\displaystyle Z=Z_{0}{\big /}\!{\sqrt {\epsilon _{\text{rel}}}}=Z_{0}/n.} ) === Wave vectors === In Cartesian coordinates (x, y, z), let the region y < 0 have refractive index n1, intrinsic admittance Y1, etc., and let the region y > 0 have refractive index n2, intrinsic admittance Y2, etc. Then the xz plane is the interface, and the y axis is normal to the interface (see diagram). Let i and j (in bold roman type) be the unit vectors in the x and y directions, respectively. Let the plane of incidence be the xy plane (the plane of the page), with the angle of incidence θi measured from j towards i. Let the angle of refraction, measured in the same sense, be θt, where the subscript t stands for transmitted (reserving r for reflected). In the absence of Doppler shifts, ω does not change on reflection or refraction. Hence, by (2), the magnitude of the wave vector is proportional to the refractive index. So, for a given ω, if we redefine k as the magnitude of the wave vector in the reference medium (for which n = 1), then the wave vector has magnitude n1k in the first medium (region y < 0 in the diagram) and magnitude n2k in the second medium. 
From the magnitudes and the geometry, we find that the wave vectors are k i = n 1 k ( i sin ⁡ θ i + j cos ⁡ θ i ) k r = n 1 k ( i sin ⁡ θ i − j cos ⁡ θ i ) k t = n 2 k ( i sin ⁡ θ t + j cos ⁡ θ t ) = k ( i n 1 sin ⁡ θ i + j n 2 cos ⁡ θ t ) , {\displaystyle {\begin{aligned}\mathbf {k} _{\text{i}}&=n_{1}k(\mathbf {i} \sin \theta _{\text{i}}+\mathbf {j} \cos \theta _{\text{i}})\\[.5ex]\mathbf {k} _{\text{r}}&=n_{1}k(\mathbf {i} \sin \theta _{\text{i}}-\mathbf {j} \cos \theta _{\text{i}})\\[.5ex]\mathbf {k} _{\text{t}}&=n_{2}k(\mathbf {i} \sin \theta _{\text{t}}+\mathbf {j} \cos \theta _{\text{t}})\\&=k(\mathbf {i} \,n_{1}\sin \theta _{\text{i}}+\mathbf {j} \,n_{2}\cos \theta _{\text{t}})\,,\end{aligned}}} where the last step uses Snell's law. The corresponding dot products in the phasor form (3) are Hence: === s components === For the s polarization, the E field is parallel to the z axis and may therefore be described by its component in the z direction. Let the reflection and transmission coefficients be rs and ts, respectively. Then, if the incident E field is taken to have unit amplitude, the phasor form (3) of its z-component is and the reflected and transmitted fields, in the same form, are Under the sign convention used in this article, a positive reflection or transmission coefficient is one that preserves the direction of the transverse field, meaning (in this context) the field normal to the plane of incidence. For the s polarization, that means the E field. If the incident, reflected, and transmitted E fields (in the above equations) are in the z-direction ("out of the page"), then the respective H fields are in the directions of the red arrows, since k, E, H form a right-handed orthogonal triad. The H fields may therefore be described by their components in the directions of those arrows, denoted by Hi, Hr, Ht. Then, since H = YE, At the interface, by the usual interface conditions for electromagnetic fields, the tangential components of the E and H fields must be continuous; that is, When we substitute from equations (8) to (10) and then from (7), the exponential factors cancel out, so that the interface conditions reduce to the simultaneous equations which are easily solved for rs and ts, yielding and At normal incidence (θi = θt = 0), indicated by an additional subscript 0, these results become and At grazing incidence (θi → 90°), we have cos θi → 0, hence rs → −1 and ts → 0. === p components === For the p polarization, the incident, reflected, and transmitted E fields are parallel to the red arrows and may therefore be described by their components in the directions of those arrows. Let those components be Ei, Er, Et  (redefining the symbols for the new context). Let the reflection and transmission coefficients be rp and tp. Then, if the incident E field is taken to have unit amplitude, we have If the E fields are in the directions of the red arrows, then, in order for k, E, H to form a right-handed orthogonal triad, the respective H fields must be in the −z-direction ("into the page") and may therefore be described by their components in that direction. This is consistent with the adopted sign convention, namely that a positive reflection or transmission coefficient is one that preserves the direction of the transverse field (the H field in the case of the p polarization). 
The agreement of the other field with the red arrows reveals an alternative definition of the sign convention: that a positive reflection or transmission coefficient is one for which the field vector in the plane of incidence points towards the same medium before and after reflection or transmission. So, for the incident, reflected, and transmitted H fields, let the respective components in the −z-direction be Hi, Hr, Ht. Then, since H = YE, At the interface, the tangential components of the E and H fields must be continuous; that is, When we substitute from equations (17) and (18) and then from (7), the exponential factors again cancel out, so that the interface conditions reduce to Solving for rp and tp, we find and At normal incidence (θi = θt = 0) indicated by an additional subscript 0, these results become and At grazing incidence (θi → 90°), we again have cos θi → 0, hence rp → −1 and tp → 0. Comparing (23) and (24) with (15) and (16), we see that at normal incidence, under the adopted sign convention, the transmission coefficients for the two polarizations are equal, whereas the reflection coefficients have equal magnitudes but opposite signs. While this clash of signs is a disadvantage of the convention, the attendant advantage is that the signs agree at grazing incidence. === Power ratios (reflectivity and transmissivity) === The Poynting vector for a wave is a vector whose component in any direction is the irradiance (power per unit area) of that wave on a surface perpendicular to that direction. For a plane sinusoidal wave the Poynting vector is ⁠1/2⁠‍Re{E × H∗}, where E and H are due only to the wave in question, and the asterisk denotes complex conjugation. Inside a lossless dielectric (the usual case), E and H are in phase, and at right angles to each other and to the wave vector k; so, for s polarization, using the z and xy components of E and H respectively (or for p polarization, using the xy and −z components of E and H), the irradiance in the direction of k is given simply by EH/2, which is E2/2Z in a medium of intrinsic impedance Z = 1/Y. To compute the irradiance in the direction normal to the interface, as we shall require in the definition of the power transmission coefficient, we could use only the x component (rather than the full xy component) of H or E or, equivalently, simply multiply EH/2 by the proper geometric factor, obtaining (E2/2Z)cos θ. From equations (13) and (21), taking squared magnitudes, we find that the reflectivity (ratio of reflected power to incident power) is for the s polarization, and for the p polarization. Note that when comparing the powers of two such waves in the same medium and with the same cos θ, the impedance and geometric factors mentioned above are identical and cancel out. But in computing the power transmission (below), these factors must be taken into account. The simplest way to obtain the power transmission coefficient (transmissivity, the ratio of transmitted power to incident power in the direction normal to the interface, i.e. the y direction) is to use R + T = 1 (conservation of energy). In this way we find for the s polarization, and for the p polarization. In the case of an interface between two lossless media (for which ϵ and μ are real and positive), one can obtain these results directly using the squared magnitudes of the amplitude transmission coefficients that we found earlier in equations (14) and (22). 
But, for given amplitude (as noted above), the component of the Poynting vector in the y direction is proportional to the geometric factor cos θ and inversely proportional to the wave impedance Z. Applying these corrections to each wave, we obtain two ratios multiplying the square of the amplitude transmission coefficient: for the s polarization, and for the p polarization. The last two equations apply only to lossless dielectrics, and only at incidence angles smaller than the critical angle (beyond which, of course, T = 0). For unpolarized light: T = 1 2 ( T s + T p ) {\displaystyle T={1 \over 2}(T_{s}+T_{p})} R = 1 2 ( R s + R p ) {\displaystyle R={1 \over 2}(R_{s}+R_{p})} where R + T = 1 {\displaystyle R+T=1} . === Equal refractive indices === From equations (4) and (5), we see that two dissimilar media will have the same refractive index, but different admittances, if the ratio of their permeabilities is the inverse of the ratio of their permittivities. In that unusual situation we have θt = θi (that is, the transmitted ray is undeviated), so that the cosines in equations (13), (14), (21), (22), and (25) to (28) cancel out, and all the reflection and transmission ratios become independent of the angle of incidence; in other words, the ratios for normal incidence become applicable to all angles of incidence. When extended to spherical reflection or scattering, this results in the Kerker effect for Mie scattering. === Non-magnetic media === Since the Fresnel equations were developed for optics, they are usually given for non-magnetic materials. Dividing (4) by (5)) yields Y = n c μ . {\displaystyle Y={\frac {n}{\,c\mu \,}}\,.} For non-magnetic media we can substitute the vacuum permeability μ0 for μ, so that Y 1 = n 1 c μ 0 ; Y 2 = n 2 c μ 0 ; {\displaystyle Y_{1}={\frac {n_{1}}{\,c\mu _{0}}}~~;~~~Y_{2}={\frac {n_{2}}{\,c\mu _{0}}}\,;} that is, the admittances are simply proportional to the corresponding refractive indices. When we make these substitutions in equations (13) to (16) and equations (21) to (26), the factor cμ0 cancels out. For the amplitude coefficients we obtain: For the case of normal incidence these reduce to: The power reflection coefficients become: The power transmissions can then be found from T = 1 − R. === Brewster's angle === For equal permeabilities (e.g., non-magnetic media), if θi and θt are complementary, we can substitute sin θt for cos θi, and sin θi for cos θt, so that the numerator in equation (31) becomes n2‍sin θt − n1‍sin θi, which is zero (by Snell's law). Hence rp = 0  and only the s-polarized component is reflected. This is what happens at the Brewster angle. Substituting cos θi for sin θt in Snell's law, we readily obtain for Brewster's angle. === Equal permittivities === Although it is not encountered in practice, the equations can also apply to the case of two media with a common permittivity but different refractive indices due to different permeabilities. From equations (4) and (5), if ϵ is fixed instead of μ, then Y becomes inversely proportional to n, with the result that the subscripts 1 and 2 in equations (29) to (38) are interchanged (due to the additional step of multiplying the numerator and denominator by n1n2). Hence, in (29) and (31), the expressions for rs and rp in terms of refractive indices will be interchanged, so that Brewster's angle (39) will give rs = 0 instead of rp = 0, and any beam reflected at that angle will be p-polarized instead of s-polarized. 
Similarly, Fresnel's sine law will apply to the p polarization instead of the s polarization, and his tangent law to the s polarization instead of the p polarization. This switch of polarizations has an analog in the old mechanical theory of light waves (see § History, above). One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different densities and that the vibrations were normal to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different elasticities and that the vibrations were parallel to that plane. Thus the condition of equal permittivities and unequal permeabilities, although not realistic, is of some historical interest. == See also == Jones calculus Polarization mixing Index-matching material Field and power quantities Fresnel rhomb, Fresnel's apparatus to produce circularly polarised light Reflection loss Specular reflection Schlick's approximation Snell's window X-ray reflectivity Plane of incidence Reflections of signals on conducting lines == Notes == == References == == Sources == M. Born and E. Wolf, 1970, Principles of Optics, 4th Ed., Oxford: Pergamon Press. J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press, ISBN 0-226-07886-8. R.E. Collin, 1966, Foundations for Microwave Engineering, Tokyo: McGraw-Hill. O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford, ISBN 978-0-19-964437-7. A. Fresnel, 1866 (ed. H. de Senarmont, E. Verdet, and L. Fresnel), Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol. 1 (1866). Griffiths, David J. (2017). "Chapter 9.3: Electromagnetic Waves in Matter". Introduction to Electrodynamics (4th ed.). Cambridge University Press. ISBN 978-1-108-42041-9. E. Hecht, 1987, Optics, 2nd Ed., Addison Wesley, ISBN 0-201-11609-X. E. Hecht, 2002, Optics, 4th Ed., Addison Wesley, ISBN 0-321-18878-0. F.A. Jenkins and H.E. White, 1976, Fundamentals of Optics, 4th Ed., New York: McGraw-Hill, ISBN 0-07-032330-5. H. Lloyd, 1834, "Report on the progress and present state of physical optics", Report of the Fourth Meeting of the British Association for the Advancement of Science (held at Edinburgh in 1834), London: J. Murray, 1835, pp. 295–413. W. Whewell, 1857, History of the Inductive Sciences: From the Earliest to the Present Time, 3rd Ed., London: J.W. Parker & Son, vol. 2. E. T. Whittaker, 1910, A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century, London: Longmans, Green, & Co. == External links == Fresnel Equations – Wolfram. Fresnel equations calculator FreeSnell – Free software computes the optical properties of multilayer materials. Thinfilm – Web interface for calculating optical properties of thin films and multilayer materials (reflection & transmission coefficients, ellipsometric parameters Psi & Delta). Simple web interface for calculating single-interface reflection and refraction angles and strengths. Reflection and transmittance for two dielectrics – Mathematica interactive webpage that shows the relations between index of refraction and reflection. A self-contained first-principles derivation of the transmission and reflection probabilities from a multilayer with complex indices of refraction.
Wikipedia/Fresnel_equations
The Milne model was a special-relativistic cosmological model of the universe proposed by Edward Arthur Milne in 1935. It is mathematically equivalent to a special case of the FLRW model in the limit of zero energy density and it obeys the cosmological principle. The Milne model is also similar to Rindler space in that both are simple re-parameterizations of flat Minkowski space. Since it features both zero energy density and maximally negative spatial curvature, the Milne model is inconsistent with cosmological observations. Cosmologists actually observe the universe's density parameter to be consistent with unity and its curvature to be consistent with flatness. == Milne metric == The Milne universe is a special case of a more general Friedmann–Lemaître–Robertson–Walker model (FLRW). The Milne solution can be obtained from the more generic FLRW model by demanding that the energy density, pressure and cosmological constant all equal zero and the spatial curvature is negative. From these assumptions and the Friedmann equations it follows that the scale factor must depend linearly on the time coordinate. Setting the spatial curvature and the speed of light to unity, the metric for a Milne universe can be expressed with hyperspherical coordinates as: d s 2 = d t 2 − t 2 ( d χ 2 + sinh 2 χ d Ω 2 ) {\displaystyle ds^{2}=dt^{2}-t^{2}(d\chi ^{2}+\sinh ^{2}{\chi }d\Omega ^{2})\ } where d Ω 2 = d θ 2 + sin 2 θ d ϕ 2 {\displaystyle d\Omega ^{2}=d\theta ^{2}+\sin ^{2}\theta d\phi ^{2}\ } is the metric for a two-sphere and χ = sinh − 1 r {\displaystyle \chi =\sinh ^{-1}{r}} is the curvature-corrected radial component for negatively curved space that varies between 0 and + ∞ {\displaystyle +\infty } . The empty space that the Milne model describes can be identified with the inside of a light cone of an event in Minkowski space by a change of coordinates. Milne developed this model independently of general relativity but with awareness of special relativity. As he initially described it, the model has no expansion of space, so all of the redshift (except that caused by peculiar velocities) is explained by a recessional velocity associated with the hypothetical "explosion". However, the mathematical equivalence of the zero energy density ( ρ = 0 {\displaystyle \rho =0} ) version of the FLRW metric to Milne's model implies that a full general relativistic treatment using Milne's assumptions would result in a linearly increasing scale factor for all time, since the deceleration parameter is uniquely zero for such a model. == Milne's density function == Milne proposed that the universe's density changes in time because of an initial outward explosion of matter. Milne's model assumes an inhomogeneous density function which is Lorentz invariant (around the event t=x=y=z=0). When rendered graphically, Milne's density distribution shows a three-dimensional spherical Lobachevskian pattern with outer edges moving outward at the speed of light. Every inertial body perceives itself to be at the center of the explosion of matter (see observable universe), and sees the local universe as homogeneous and isotropic in the sense of the cosmological principle. In order to be consistent with general relativity, the universe's density must be negligible in comparison to the critical density at all times for which the Milne model is taken to apply. 
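The identification of the Milne universe with the interior of a light cone in Minkowski space, mentioned above, can be checked symbolically in the 1+1-dimensional case (suppressing the angular part). A minimal SymPy sketch, using the standard substitution T = t cosh χ, X = t sinh χ (symbol names chosen here only for illustration), is:

import sympy as sp

t, chi, dt, dchi = sp.symbols('t chi dt dchi', positive=True)

# Minkowski coordinates of a point inside the future light cone
T = t * sp.cosh(chi)
X = t * sp.sinh(chi)

# Differentials of T and X via the chain rule
dT = sp.diff(T, t) * dt + sp.diff(T, chi) * dchi
dX = sp.diff(X, t) * dt + sp.diff(X, chi) * dchi

minkowski_interval = sp.expand(dT**2 - dX**2)
milne_interval = dt**2 - t**2 * dchi**2

print(sp.simplify(minkowski_interval - milne_interval))  # expected output: 0

The vanishing difference shows that the Milne line element dt² − t² dχ² is the flat Minkowski interval written in different coordinates, consistent with the zero-energy-density (empty) character of the model.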
== Notes == == References == Milne Cosmology: Why I Keep Talking About It Archived 12 September 2006 at the Wayback Machine - a detailed non-technical introduction to the Milne model True Wegener, Mogens (2021). Non-Standard Relativity: a Philosopher's Handbook of Heresies in Physics. Books on Demand. ISBN 978-8743031420. A thorough historical and theoretical study of the British Tradition in Cosmology, and one long celebration of Milne.
Wikipedia/Milne_model
In mathematics, the Ernst equation is an integrable non-linear partial differential equation, named after the American physicist Frederick J. Ernst. == The Ernst equation == The equation reads: ℜ ( u ) ( u r r + u r / r + u z z ) = ( u r ) 2 + ( u z ) 2 . {\displaystyle \Re (u)(u_{rr}+u_{r}/r+u_{zz})=(u_{r})^{2}+(u_{z})^{2}.} where ℜ ( u ) {\textstyle \Re (u)} is the real part of u {\textstyle u} . For its Lax pair and other features, see the references below. === Usage === The Ernst equation is employed to produce exact solutions of Einstein's equations in the general theory of relativity. == References ==
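For the static case, in which u is real, any axisymmetric harmonic function ψ yields a solution u = e^(2ψ) (the Weyl class). A minimal SymPy sketch, using an illustrative monopole-type potential chosen here only as an example, verifies this for one such ψ:

import sympy as sp

r, z = sp.symbols('r z', positive=True)

# Illustrative axisymmetric harmonic function (a monopole-type potential)
psi = -1 / sp.sqrt(r**2 + z**2)

u = sp.exp(2 * psi)  # real ("static") Ernst potential built from psi

laplacian = sp.diff(u, r, 2) + sp.diff(u, r) / r + sp.diff(u, z, 2)
lhs = u * laplacian                      # Re(u) = u here, since u is real
rhs = sp.diff(u, r)**2 + sp.diff(u, z)**2

print(sp.simplify(lhs - rhs))            # expected output: 0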
Wikipedia/Ernst_equation
In physics, gravity (from Latin gravitas 'weight'), also known as gravitation or a gravitational interaction, is a fundamental interaction, a mutual attraction between all massive particles. On Earth, gravity takes a slightly different meaning: the observed force between objects and the Earth. This force is dominated by the combined gravitational interactions of particles but also includes the effect of the Earth's rotation. Gravity gives weight to physical objects and is essential to understanding the mechanisms responsible for surface water waves and lunar tides. Gravity also has many important biological functions, helping to guide the growth of plants through the process of gravitropism and influencing the circulation of fluids in multicellular organisms. The gravitational attraction between primordial hydrogen and clumps of dark matter in the early universe caused the hydrogen gas to coalesce, eventually condensing and fusing to form stars. At larger scales this results in galaxies and clusters, so gravity is a primary driver for the large-scale structures in the universe. Gravity has an infinite range, although its effects become weaker as objects get farther away. Gravity is accurately described by the general theory of relativity, proposed by Albert Einstein in 1915, which describes gravity in terms of the curvature of spacetime, caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them. Scientists are currently working to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics. Experiments are now being conducted to determine whether gravity is quantized, but this is not yet known with certainty. == Definitions == Gravity is the word used to describe both a fundamental physical interaction and the observed consequences of that interaction on macroscopic objects on Earth. Gravity is, by far, the weakest of the four fundamental interactions, approximately 1038 times weaker than the strong interaction, 1036 times weaker than the electromagnetic force, and 1029 times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light. Gravity, as the gravitational attraction at the surface of a planet or other celestial body, may also include the centrifugal force resulting from the planet's rotation (see § Earth's gravity). == History == === Ancient world === The nature and mechanism of gravity were explored by a wide range of ancient scholars. In Greece, Aristotle believed that objects fell towards the Earth because the Earth was the center of the Universe and attracted all of the mass in the Universe towards it. 
He also thought that the speed of a falling object should increase with its weight, a conclusion that was later shown to be false. While Aristotle's view was widely accepted throughout Ancient Greece, there were other thinkers such as Plutarch who correctly predicted that the attraction of gravity was not unique to the Earth. Although he did not understand gravity as a force, the ancient Greek philosopher Archimedes discovered the center of gravity of a triangle. He postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity. Two centuries later, the Roman engineer and architect Vitruvius contended in his De architectura that gravity is not dependent on a substance's weight but rather on its "nature". In the 6th century CE, the Byzantine Alexandrian scholar John Philoponus proposed the theory of impetus, which modifies Aristotle's theory that "continuation of motion depends on continued action of a force" by incorporating a causative force that diminishes over time. In 628 CE, the Indian mathematician and astronomer Brahmagupta proposed the idea that gravity is an attractive force that draws objects to the Earth and used the term gurutvākarṣaṇ to describe it.: 105  In the ancient Middle East, gravity was a topic of fierce debate. The Persian intellectual Al-Biruni believed that the force of gravity was not unique to the Earth, and he correctly assumed that other heavenly bodies should exert a gravitational attraction as well. In contrast, Al-Khazini held the same position as Aristotle that all matter in the Universe is attracted to the center of the Earth. === Scientific revolution === In the mid-16th century, various European scientists experimentally disproved the Aristotelian notion that heavier objects fall at a faster rate. In particular, the Spanish Dominican priest Domingo de Soto wrote in 1551 that bodies in free fall uniformly accelerate. De Soto may have been influenced by earlier experiments conducted by other Dominican priests in Italy, including those by Benedetto Varchi, Francesco Beato, Luca Ghini, and Giovan Bellaso, which contradicted Aristotle's teachings on the fall of bodies. The mid-16th century Italian physicist Giambattista Benedetti published papers claiming that, due to specific gravity, objects made of the same material but with different masses would fall at the same speed. With the 1586 Delft tower experiment, the Flemish physicist Simon Stevin observed that two cannonballs of differing sizes and weights fell at the same rate when dropped from a tower. In the late 16th century, Galileo Galilei's careful measurements of balls rolling down inclines allowed him to firmly establish that gravitational acceleration is the same for all objects.: 334  Galileo postulated that air resistance is the reason that objects with a low density and high surface area fall more slowly in an atmosphere. In his 1638 work Two New Sciences, Galileo proved that the distance traveled by a falling object is proportional to the square of the time elapsed. His method was a form of graphical numerical integration since concepts of algebra and calculus were unknown at the time.: 4  This was later confirmed by the Italian Jesuit scientists Grimaldi and Riccioli between 1640 and 1650. They also calculated the magnitude of the Earth's gravity by measuring the oscillations of a pendulum. 
Galileo also broke with incorrect ideas of Aristotelian philosophy by regarding inertia as persistence of motion, not a tendency to come to rest. By considering that the laws of physics appear identical on a moving ship to those on land, Galileo developed the concepts of reference frame and the principle of relativity.: 5  These concepts would become central to Newton's mechanics, only to be transformed in Einstein's theory of gravity, the general theory of relativity.: 17  Johannes Kepler, in his 1609 book Astronomia nova, described gravity as a mutual attraction, claiming that if the Earth and Moon were not held apart by some force they would come together. He recognized that mechanical forces cause action, creating a kind of celestial machine. On the other hand, Kepler viewed the force of the Sun on the planets as magnetic and acting tangential to their orbits, and he assumed with Aristotle that inertia meant objects tend to come to rest.: 846  In 1666, Giovanni Alfonso Borelli avoided the key problems that limited Kepler. By Borelli's time the concept of inertia had its modern meaning as the tendency of objects to remain in uniform motion, and he viewed the Sun as just another heavenly body. Borelli developed the idea of mechanical equilibrium, a balance between inertia and gravity. Newton cited Borelli's influence on his theory.: 848  In 1665, Robert Hooke published his Micrographia, in which he hypothesized that the Moon must have its own gravity.: 57  In a communication to the Royal Society in 1666 and his 1674 Gresham lecture, An Attempt to prove the Annual Motion of the Earth, Hooke took the important step of combining related hypotheses and then forming predictions based on them. He wrote: I will explain a system of the world very different from any yet received. It is founded on the following positions. 1. That all the heavenly bodies have not only a gravitation of their parts to their own proper centre, but that they also mutually attract each other within their spheres of action. 2. That all bodies having a simple motion, will continue to move in a straight line, unless continually deflected from it by some extraneous force, causing them to describe a circle, an ellipse, or some other curve. 3. That this attraction is so much the greater as the bodies are nearer. As to the proportion in which those forces diminish by an increase of distance, I own I have not discovered it.... Hooke was an important communicator who helped reformulate the scientific enterprise. He was one of the first professional scientists and worked as the then-new Royal Society's curator of experiments for 40 years. However, his valuable insights remained hypotheses since he was unable to convert them into a mathematical theory of gravity and work out the consequences.: 853  For this he turned to Newton, writing him a letter in 1679, outlining a model of planetary motion in a void or vacuum due to attractive action at a distance. This letter likely turned Newton's thinking in a new direction leading to his revolutionary work on gravity. When Newton reported his results in 1686, Hooke claimed the inverse square law portion was his "notion". === Newton's theory of gravitation === Before 1684, scientists including Christopher Wren, Robert Hooke and Edmond Halley determined that Kepler's third law, relating to planetary orbital periods, would prove the inverse square law if the orbits were circles. However, the orbits were known to be ellipses. 
At Halley's suggestion, Newton tackled the problem and was able to prove that ellipses also proved the inverse square relation from Kepler's observations.: 13  In 1684, Isaac Newton sent a manuscript to Edmond Halley titled De motu corporum in gyrum ('On the motion of bodies in an orbit'), which provided a physical justification for Kepler's laws of planetary motion. Halley was impressed by the manuscript and urged Newton to expand on it, and a few years later Newton published a groundbreaking book called Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). The revolutionary aspect of Newton's theory of gravity was the unification of Earth-bound observations of acceleration with celestial mechanics.: 4  In his book, Newton described gravitation as a universal force, and claimed that it operated on objects "according to the quantity of solid matter which they contain and propagates on all sides to immense distances always at the inverse square of the distances".: 546  This formulation had two important parts. First was equating inertial mass and gravitational mass. Newton's 2nd law defines force via F = m a {\displaystyle F=ma} for inertial mass, his law of gravitational force uses the same mass. Newton did experiments with pendulums to verify this concept as best he could.: 11  The second aspect of Newton's formulation was the inverse square of distance. This aspect was not new: the astronomer Ismaël Bullialdus proposed it around 1640. Seeking proof, Newton made quantitative analysis around 1665, considering the period and distance of the Moon's orbit and considering the timing of objects falling on Earth. Newton did not publish these results at the time because he could not prove that the Earth's gravity acts as if all its mass were concentrated at its center. That proof took him twenty years.: 13  Newton's Principia was well received by the scientific community, and his law of gravitation quickly spread across the European world. More than a century later, in 1821, his theory of gravitation rose to even greater prominence when it was used to predict the existence of Neptune. In that year, the French astronomer Alexis Bouvard used this theory to create a table modeling the orbit of Uranus, which was shown to differ significantly from the planet's actual trajectory. In order to explain this discrepancy, many astronomers speculated that there might be a large object beyond the orbit of Uranus which was disrupting its orbit. In 1846, the astronomers John Couch Adams and Urbain Le Verrier independently used Newton's law to predict Neptune's location in the night sky, and the planet was discovered there within a day. Newton's formulation was later condensed into the inverse-square law: F = G m 1 m 2 r 2 , {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},} where F is the force, m1 and m2 are the masses of the objects interacting, r is the distance between the centers of the masses and G is the gravitational constant 6.674×10−11 m3⋅kg−1⋅s−2. While G is also called Newton's constant, Newton did not use this constant or formula, he only discussed proportionality. 
But this allowed him to come to an astounding conclusion we take for granted today: the gravity of the Earth on the Moon is the same as the gravity of the Earth on an apple: M earth ∝ a apple R radius of earth 2 = a moon R lunar orbit 2 {\displaystyle M_{\text{earth}}\propto a_{\text{apple}}R_{\text{radius of earth}}^{2}=a_{\text{moon}}R_{\text{lunar orbit}}^{2}} Using the values known at the time, Newton was able to verify this form of his law. The value of G was eventually measured by Henry Cavendish in 1797.: 31  === Einstein's general relativity === Eventually, astronomers noticed an anomaly in the orbit of the planet Mercury which could not be explained by Newton's theory: the perihelion of the orbit was advancing by about 42.98 arcseconds per century more than Newtonian theory predicted. The most obvious explanation for this discrepancy was an as-yet-undiscovered celestial body, such as a planet orbiting the Sun even closer than Mercury, but all efforts to find such a body turned out to be fruitless. In 1915, Albert Einstein developed a theory of general relativity which was able to accurately model Mercury's orbit. Einstein's theory brought two other ideas with independent histories into the physical theories of gravity: the principle of relativity and non-Euclidean geometry. The principle of relativity, introduced by Galileo and used as a foundational principle by Newton, led to a long and fruitless search for a luminiferous aether after Maxwell's equations demonstrated that light propagated at a fixed speed independent of reference frame. In Newton's mechanics, velocities add: a cannon ball shot from a moving ship would travel with a trajectory which included the motion of the ship. Since light speed was fixed, it was assumed to travel in a fixed, absolute medium. Many experiments sought to reveal this medium but failed, and in 1905 Einstein's special relativity theory showed the aether was not needed. Special relativity proposed that mechanics be reformulated to use the Lorentz transformation already applicable to light rather than the Galilean transformation adopted by Newton. Special relativity, as its name suggests, treated only a special case and did not cover gravity.: 4  While relativity was associated with mechanics and thus gravity, the idea of altering geometry only joined the story of gravity once mechanics required the Lorentz transformations. Geometry was an ancient science that gradually broke free of Euclidean limitations when Carl Gauss discovered in the 1800s that surfaces in any number of dimensions could be characterized by a metric, a distance measurement along the shortest path between two points that reduces to Euclidean distance at infinitesimal separation. Gauss' student Bernhard Riemann developed this into a complete geometry by 1854. These geometries are locally flat but have global curvature.: 4  In 1907, Einstein took his first step by using special relativity to create a new form of the equivalence principle. The equivalence of inertial mass and gravitational mass was a known empirical law. The m in Newton's second law, F = m a {\displaystyle F=ma} , has the same value as the m in Newton's law of gravity on Earth, F = G M m / r 2 {\displaystyle F=GMm/r^{2}} . In what he later described as "the happiest thought of my life", Einstein realized this meant that in free-fall, an accelerated coordinate system exists with no local gravitational field. 
Every description of gravity in any other coordinate system must transform to give no field in the free-fall case, a powerful invariance constraint on all theories of gravity.: 20  Einstein's description of gravity was accepted by the majority of physicists for two reasons. First, by 1910 his special relativity was accepted in German physics and was spreading to other countries. Second, his theory explained experimental results like the perihelion of Mercury and the bending of light around the Sun better than Newton's theory. In 1919, the British astrophysicist Arthur Eddington was able to confirm the predicted deflection of light during that year's solar eclipse. Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. Although Eddington's analysis was later disputed, this experiment made Einstein famous almost overnight and caused general relativity to become widely accepted in the scientific community. In 1959, American physicists Robert Pound and Glen Rebka performed an experiment in which they used gamma rays to confirm the prediction of gravitational time dilation. By sending the rays down a 74-foot tower and measuring their frequency at the bottom, the scientists confirmed that light is Doppler shifted as it moves towards a source of gravity. The observed shift also supports the idea that time runs more slowly in the presence of a gravitational field (many more wave crests pass in a given interval). If light moves outward from a strong source of gravity, it will be observed with a redshift. The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals. In 1971, scientists discovered the first-ever black hole in the constellation Cygnus. The black hole was detected because it was emitting bursts of x-rays as it consumed a smaller star, and it came to be known as Cygnus X-1. This discovery confirmed yet another prediction of general relativity, because Einstein's equations implied that light could not escape from a sufficiently large and compact object. Frame dragging, the idea that a rotating massive object should twist spacetime around it, was confirmed by Gravity Probe B results in 2011. In 2015, the LIGO observatory detected faint gravitational waves, the existence of which had been predicted by general relativity. Scientists believe that the waves emanated from a black hole merger that occurred about 1.3 billion light-years away. == On Earth == Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body. The strength of the gravitational field is numerically equal to the acceleration of objects under its influence. The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities. For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI). 
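As a quick numerical illustration of the field strength just described, the short sketch below computes the Newtonian surface value g = GM/R2 for the Earth and compares it with the defined standard gravity; the constants used are standard reference values assumed here for illustration, not figures taken from this article.

# Minimal sketch: Newtonian surface gravity g = G*M/R^2 for the Earth.
# The constants below are standard reference values, assumed for illustration.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # mean radius of the Earth, m

g = G * M_earth / R_earth**2
print(f"g at the mean radius: {g:.3f} m/s^2")   # roughly 9.82 m/s^2
print("standard gravity     : 9.80665 m/s^2")   # defined value

The small difference between the computed value and the standard value reflects the latitude, rotation, and shape effects discussed next.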
The force of gravity experienced by objects on Earth's surface is the vector sum of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are farthest from the center of the Earth. The force of gravity varies with latitude, and the resultant acceleration increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles. === Gravity wave === Waves on oceans, lakes, and other bodies of water occur when the gravitational equilibrium at the surface of the water is disturbed, for example by wind. Similar effects occur in the atmosphere where equilibrium is disturbed by thermal weather fronts or mountain ranges. == Astrophysics == === Stars and black holes === During star formation, gravitational attraction in a cloud of hydrogen gas competes with thermal gas pressure. As the gas density increases, the temperature rises, then the gas radiates energy, allowing additional gravitational condensation. If the mass of gas in the region is low, the process continues until a brown dwarf or gas-giant planet is produced. If more mass is available, the additional gravitational energy allows the central region to reach pressures sufficient for nuclear fusion, forming a star. In a star, gravitational attraction again competes with thermal and radiation pressure in hydrostatic equilibrium until the star's nuclear fuel runs out. The next phase depends upon the total mass of the star. Very low mass stars slowly cool as white dwarf stars with a small core balancing gravitational attraction with electron degeneracy pressure. Stars with masses similar to the Sun go through a red giant phase before becoming white dwarf stars. Higher mass stars have complex core structures that burn helium and higher atomic number elements, ultimately producing an iron core. As their fuel runs out, these stars become unstable, producing a supernova. The result can be a neutron star where gravitational attraction balances neutron degeneracy pressure or, for even higher masses, a black hole where gravity operates alone with such intensity that even light cannot escape.: 121  === Gravitational radiation === General relativity predicts that energy can be transported out of a system through gravitational radiation, also known as gravitational waves. The first indirect evidence for gravitational radiation came through measurements of the Hulse–Taylor binary, discovered in 1974. This system consists of a pulsar and a neutron star in orbit around one another. Its orbital period has decreased since its initial discovery due to a loss of energy, which is consistent with the amount of energy expected to be lost to gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993. The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors. The gravitational waves emitted during the collision of two black holes 1.3 billion light years from Earth were measured. This observation confirms the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe, including the Big Bang. Neutron star and black hole formation also create detectable amounts of gravitational radiation. 
This research was awarded the Nobel Prize in Physics in 2017. === Dark matter === At the cosmological scale, gravity is a dominant player. About 5/6 of the total mass in the universe consists of dark matter, which interacts through gravity but not through electromagnetic interactions. The gravitation of clumps of dark matter, known as dark matter halos, attracts hydrogen gas, leading to stars and galaxies. === Gravitational lensing === Gravity acts on light and matter equally, meaning that a sufficiently massive object could warp light around it and create a gravitational lens. This phenomenon was first confirmed by observation in 1979 using the 2.1 meter telescope at Kitt Peak National Observatory in Arizona, which saw two mirror images of the same quasar whose light had been bent around the galaxy YGKOW G1. Many subsequent observations of gravitational lensing provide additional evidence for substantial amounts of dark matter around galaxies. Gravitational lenses do not focus like eyeglass lenses, but rather lead to annular shapes called Einstein rings.: 370  === Speed of gravity === In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which appear to show that the speed of gravity is equal to the speed of light. This means that if the Sun suddenly disappeared, the Earth would keep orbiting the vacant point normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in Science Bulletin in February 2013. In August 2017, the LIGO and Virgo interferometer detectors received gravitational wave signals within 2 seconds of gamma ray satellites detecting a burst from the same direction, with optical telescopes later identifying a source there. This confirmed that the speed of gravitational waves was the same as the speed of light. === Anomalies and discrepancies === There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways. Galaxy rotation curves: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of luminous matter. Galaxies within galaxy clusters show a similar pattern. The pattern is considered strong evidence for dark matter, which would interact through gravitation but not electromagnetically; various modifications to Newtonian dynamics have also been proposed. Accelerated expansion: The expansion of the universe seems to be accelerating. Dark energy has been proposed to explain this. Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers. The Pioneer anomaly has been shown to be explained by thermal recoil from heat radiated unevenly by the spacecraft. == General relativity == In modern physics, general relativity is considered the most successful theory of gravitation. Physicists continue to work to find solutions to the Einstein field equations that form the basis of general relativity and continue to test the theory, finding excellent agreement in all cases.: p.9  === Constraints === Any theory of gravity must conform to the requirements of special relativity and experimental observations. Newton's theory of gravity assumes action at a distance and therefore cannot be reconciled with special relativity. 
The simplest generalization of Newton's approach would be a scalar theory with the gravitational potential represented by a single number in a 4 dimensional spacetime. However, this type of theory fails to predict gravitational redshift or the deviation of light by matter and gives values for the precession of Mercury which are incorrect. A vector field theory predicts negative energy gravitational waves, so it also fails. Furthermore, no theory without curvature in spacetime can be consistent with special relativity. The simplest theory consistent with special relativity and the well-studied observations is general relativity. === General characteristics === Unlike Newton's formula with one parameter, G, gravity in general relativity is described in terms of 10 numbers formed into a metric tensor.: 70  In general relativity the effects of gravitation are described in different ways in different frames of reference. In a free-falling or co-moving coordinate system, an object travels in a straight line. In other coordinate systems, the object accelerates and thus is seen to move under a force. The path in spacetime (not 3D space) taken by a free-falling object is called a geodesic, and the elapsed time along that path, as measured in the object's frame, is the longest (or, rarely, only an extremum) among nearby paths. Consequently, the effect of gravity can be described as curving spacetime. In a weak stationary gravitational field, general relativity reduces to Newton's equations. The corrections introduced by general relativity on Earth are on the order of 1 part in a billion.: 77  === Einstein field equations === The Einstein field equations are a system of 10 partial differential equations which describe how matter affects the curvature of spacetime. The system may be expressed in the form G μ ν + Λ g μ ν = κ T μ ν , {\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },} where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant, G {\displaystyle G} is the Newtonian constant of gravitation and c {\displaystyle c} is the speed of light. The constant κ = 8 π G c 4 {\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}} is referred to as the Einstein gravitational constant. === Solutions === The non-linear second-order Einstein field equations are extremely complex and have been solved in only a few special cases. These cases, however, have been transformational in our understanding of the cosmos. Several solutions are the basis for understanding black holes and for our modern model of the evolution of the universe since the Big Bang.: 227  === Tests of general relativity === Testing the predictions of general relativity has historically been difficult, because they are almost identical to the predictions of Newtonian gravity for small energies and masses. A wide range of experiments has provided support for general relativity.: p.1–9  Today, Einstein's theory of relativity is used for all gravitational calculations where absolute precision is desired, although Newton's inverse-square law is accurate enough for virtually all ordinary calculations.: 79  === Gravity and quantum mechanics === Despite its success in predicting the effects of gravity at large scales, general relativity is ultimately incompatible with quantum mechanics. This is because general relativity describes gravity as a smooth, continuous distortion of spacetime, while quantum mechanics holds that all forces arise from the exchange of discrete particles known as quanta. 
This contradiction is especially vexing to physicists because the other three fundamental forces (strong force, weak force and electromagnetism) were reconciled with a quantum framework decades ago. As a result, researchers have begun to search for a theory that could unite both gravity and quantum mechanics under a more general framework. One path is to describe gravity in the framework of quantum field theory (QFT), which has been successful in accurately describing the other fundamental interactions. The electromagnetic force arises from an exchange of virtual photons, whereas the QFT description of gravity involves an exchange of virtual gravitons. This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required. === Alternative theories === General relativity has withstood many tests over a large range of mass and size scales. When applied to interpret astronomical observations, cosmological models based on general relativity introduce two components to the universe, dark matter and dark energy, the nature of which is currently an unsolved problem in physics. The many successful, high-precision predictions of the standard model of cosmology have led astrophysicists to conclude that it, and thus general relativity, will be the basis for future progress. However, dark matter is not supported by the standard model of particle physics, physical models for dark energy do not match cosmological data, and some cosmological observations are inconsistent. These issues have led to the study of alternative theories of gravity. == See also == == References == == Further reading == I. Bernard Cohen (1999) [1687]. "A Guide to Newton's Principia". The Principia : mathematical principles of natural philosophy. By Newton, Isaac. Translated by Cohen, I. Bernard. University of California Press. ISBN 9780520088160. OCLC 313895715. Halliday, David; Resnick, Robert; Krane, Kenneth S. (2001). Physics v. 1. New York: John Wiley & Sons. ISBN 978-0-471-32057-9. Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8. Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W.H. Freeman. ISBN 978-0-7167-0809-4. Thorne, Kip S.; Misner, Charles W.; Wheeler, John Archibald (1973). Gravitation. W.H. Freeman. ISBN 978-0-7167-0344-0. Panek, Richard (2 August 2019). "Everything you thought you knew about gravity is wrong". The Washington Post. == External links == The Feynman Lectures on Physics Vol. I Ch. 7: The Theory of Gravitation "Gravitation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Gravitation, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Gravitational_physics
The electromagnetic wave equation is a second-order partial differential equation that describes the propagation of electromagnetic waves through a medium or in a vacuum. It is a three-dimensional form of the wave equation. The homogeneous form of the equation, written in terms of either the electric field E or the magnetic field B, takes the form: ( v p h 2 ∇ 2 − ∂ 2 ∂ t 2 ) E = 0 ( v p h 2 ∇ 2 − ∂ 2 ∂ t 2 ) B = 0 {\displaystyle {\begin{aligned}\left(v_{\mathrm {ph} }^{2}\nabla ^{2}-{\frac {\partial ^{2}}{\partial t^{2}}}\right)\mathbf {E} &=\mathbf {0} \\\left(v_{\mathrm {ph} }^{2}\nabla ^{2}-{\frac {\partial ^{2}}{\partial t^{2}}}\right)\mathbf {B} &=\mathbf {0} \end{aligned}}} where v p h = 1 μ ε {\displaystyle v_{\mathrm {ph} }={\frac {1}{\sqrt {\mu \varepsilon }}}} is the speed of light (i.e. phase velocity) in a medium with permeability μ, and permittivity ε, and ∇2 is the Laplace operator. In a vacuum, vph = c0 = 299792458 m/s, a fundamental physical constant. The electromagnetic wave equation derives from Maxwell's equations. In most older literature, B is called the magnetic flux density or magnetic induction. The following equations ∇ ⋅ E = 0 ∇ ⋅ B = 0 {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0\\\nabla \cdot \mathbf {B} &=0\end{aligned}}} predicate that any electromagnetic wave must be a transverse wave, where the electric field E and the magnetic field B are both perpendicular to the direction of wave propagation. == The origin of the electromagnetic wave equation == In his 1865 paper titled A Dynamical Theory of the Electromagnetic Field, James Clerk Maxwell utilized the correction to Ampère's circuital law that he had made in part III of his 1861 paper On Physical Lines of Force. In Part VI of his 1864 paper titled Electromagnetic Theory of Light, Maxwell combined displacement current with some of the other equations of electromagnetism and he obtained a wave equation with a speed equal to the speed of light. He commented: The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws. Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics education by a much less cumbersome method involving combining the corrected version of Ampère's circuital law with Faraday's law of induction. To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. In a vacuum- and charge-free space, these equations are: ∇ ⋅ E = 0 ∇ × E = − ∂ B ∂ t ∇ ⋅ B = 0 ∇ × B = μ 0 ε 0 ∂ E ∂ t {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0\\\nabla \times \mathbf {E} &=-{\frac {\partial \mathbf {B} }{\partial t}}\\\nabla \cdot \mathbf {B} &=0\\\nabla \times \mathbf {B} &=\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\\\end{aligned}}} These are the general Maxwell's equations specialized to the case with charge and current both set to zero. 
Taking the curl of the curl equations gives: ∇ × ( ∇ × E ) = ∇ × ( − ∂ B ∂ t ) = − ∂ ∂ t ( ∇ × B ) = − μ 0 ε 0 ∂ 2 E ∂ t 2 ∇ × ( ∇ × B ) = ∇ × ( μ 0 ε 0 ∂ E ∂ t ) = μ 0 ε 0 ∂ ∂ t ( ∇ × E ) = − μ 0 ε 0 ∂ 2 B ∂ t 2 {\displaystyle {\begin{aligned}\nabla \times \left(\nabla \times \mathbf {E} \right)&=\nabla \times \left(-{\frac {\partial \mathbf {B} }{\partial t}}\right)=-{\frac {\partial }{\partial t}}\left(\nabla \times \mathbf {B} \right)=-\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}\\\nabla \times \left(\nabla \times \mathbf {B} \right)&=\nabla \times \left(\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)=\mu _{0}\varepsilon _{0}{\frac {\partial }{\partial t}}\left(\nabla \times \mathbf {E} \right)=-\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}\end{aligned}}} We can use the vector identity ∇ × ( ∇ × V ) = ∇ ( ∇ ⋅ V ) − ∇ 2 V {\displaystyle \nabla \times \left(\nabla \times \mathbf {V} \right)=\nabla \left(\nabla \cdot \mathbf {V} \right)-\nabla ^{2}\mathbf {V} } where V is any vector function of space. And ∇ 2 V = ∇ ⋅ ( ∇ V ) {\displaystyle \nabla ^{2}\mathbf {V} =\nabla \cdot \left(\nabla \mathbf {V} \right)} where ∇V is a dyadic which when operated on by the divergence operator ∇ ⋅ yields a vector. Since ∇ ⋅ E = 0 ∇ ⋅ B = 0 {\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0\\\nabla \cdot \mathbf {B} &=0\end{aligned}}} then the first term on the right in the identity vanishes and we obtain the wave equations: 1 c 0 2 ∂ 2 E ∂ t 2 − ∇ 2 E = 0 1 c 0 2 ∂ 2 B ∂ t 2 − ∇ 2 B = 0 {\displaystyle {\begin{aligned}{\frac {1}{c_{0}^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} &=\mathbf {0} \\{\frac {1}{c_{0}^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} &=\mathbf {0} \end{aligned}}} where c 0 = 1 μ 0 ε 0 = 2.99792458 × 10 8 m/s {\displaystyle c_{0}={\frac {1}{\sqrt {\mu _{0}\varepsilon _{0}}}}=2.99792458\times 10^{8}\;{\textrm {m/s}}} is the speed of light in free space. == Covariant form of the homogeneous wave equation == These relativistic equations can be written in contravariant form as ◻ A μ = 0 {\displaystyle \Box A^{\mu }=0} where the electromagnetic four-potential is A μ = ( ϕ c , A ) {\displaystyle A^{\mu }=\left({\frac {\phi }{c}},\mathbf {A} \right)} with the Lorenz gauge condition: ∂ μ A μ = 0 , {\displaystyle \partial _{\mu }A^{\mu }=0,} and where ◻ = ∇ 2 − 1 c 2 ∂ 2 ∂ t 2 {\displaystyle \Box =\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}} is the d'Alembert operator. == Homogeneous wave equation in curved spacetime == The electromagnetic wave equation is modified in two ways, the derivative is replaced with the covariant derivative and a new term that depends on the curvature appears. − A α ; β ; β + R α β A β = 0 {\displaystyle -{A^{\alpha ;\beta }}_{;\beta }+{R^{\alpha }}_{\beta }A^{\beta }=0} where R α β {\displaystyle {R^{\alpha }}_{\beta }} is the Ricci curvature tensor and the semicolon indicates covariant differentiation. The generalization of the Lorenz gauge condition in curved spacetime is assumed: A μ ; μ = 0. {\displaystyle {A^{\mu }}_{;\mu }=0.} == Inhomogeneous electromagnetic wave equation == Localized time-varying charge and current densities can act as sources of electromagnetic waves in a vacuum. Maxwell's equations can be written in the form of a wave equation with sources. The addition of sources to the wave equations makes the partial differential equations inhomogeneous. 
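Before turning to solutions, the derivation above identified the wave speed as c0 = 1/√(μ0 ε0). The small sketch below simply recovers that number from the vacuum constants; the SI values used are standard reference figures assumed for illustration rather than quantities stated in this article.

import math

# Minimal sketch: recover c0 = 1/sqrt(mu0*eps0) from the vacuum constants.
mu0 = 4e-7 * math.pi        # vacuum permeability, H/m
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

c0 = 1.0 / math.sqrt(mu0 * eps0)
print(f"c0 from the wave equation: {c0:.0f} m/s")  # about 299792458 m/s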
== Solutions to the homogeneous electromagnetic wave equation == The general solution to the electromagnetic wave equation is a linear superposition of waves of the form E ( r , t ) = g ( ϕ ( r , t ) ) = g ( ω t − k ⋅ r ) B ( r , t ) = g ( ϕ ( r , t ) ) = g ( ω t − k ⋅ r ) {\displaystyle {\begin{aligned}\mathbf {E} (\mathbf {r} ,t)&=g(\phi (\mathbf {r} ,t))=g(\omega t-\mathbf {k} \cdot \mathbf {r} )\\\mathbf {B} (\mathbf {r} ,t)&=g(\phi (\mathbf {r} ,t))=g(\omega t-\mathbf {k} \cdot \mathbf {r} )\end{aligned}}} for virtually any well-behaved function g of dimensionless argument φ, where ω is the angular frequency (in radians per second), and k = (kx, ky, kz) is the wave vector (in radians per meter). Although the function g can be and often is a monochromatic sine wave, it does not have to be sinusoidal, or even periodic. In practice, g cannot have infinite periodicity because any real electromagnetic wave must always have a finite extent in time and space. As a result, and based on the theory of Fourier decomposition, a real wave must consist of the superposition of an infinite set of sinusoidal frequencies. In addition, for a valid solution, the wave vector and the angular frequency are not independent; they must adhere to the dispersion relation: k = | k | = ω c = 2 π λ {\displaystyle k=|\mathbf {k} |={\omega \over c}={2\pi \over \lambda }} where k is the wavenumber and λ is the wavelength. The variable c can only be used in this equation when the electromagnetic wave is in a vacuum. === Monochromatic, sinusoidal steady-state === The simplest set of solutions to the wave equation result from assuming sinusoidal waveforms of a single frequency in separable form: E ( r , t ) = ℜ { E ( r ) e i ω t } {\displaystyle \mathbf {E} (\mathbf {r} ,t)=\Re \left\{\mathbf {E} (\mathbf {r} )e^{i\omega t}\right\}} where i is the imaginary unit, ω = 2π f  is the angular frequency in radians per second,  f  is the frequency in hertz, and e i ω t = cos ⁡ ( ω t ) + i sin ⁡ ( ω t ) {\displaystyle e^{i\omega t}=\cos(\omega t)+i\sin(\omega t)} is Euler's formula. === Plane wave solutions === Consider a plane defined by a unit normal vector n = k k . {\displaystyle \mathbf {n} ={\mathbf {k} \over k}.} Then planar traveling wave solutions of the wave equations are E ( r ) = E 0 e − i k ⋅ r B ( r ) = B 0 e − i k ⋅ r {\displaystyle {\begin{aligned}\mathbf {E} (\mathbf {r} )&=\mathbf {E} _{0}e^{-i\mathbf {k} \cdot \mathbf {r} }\\\mathbf {B} (\mathbf {r} )&=\mathbf {B} _{0}e^{-i\mathbf {k} \cdot \mathbf {r} }\end{aligned}}} where r = (x, y, z) is the position vector (in meters). These solutions represent planar waves traveling in the direction of the normal vector n. If we define the z direction as the direction of n, and the x direction as the direction of E, then by Faraday's Law the magnetic field lies in the y direction and is related to the electric field by the relation c 2 ∂ B ∂ z = ∂ E ∂ t . {\displaystyle c^{2}{\partial B \over \partial z}={\partial E \over \partial t}.} Because the divergence of the electric and magnetic fields are zero, there are no fields in the direction of propagation. This solution is the linearly polarized solution of the wave equations. There are also circularly polarized solutions in which the fields rotate about the normal vector. === Spectral decomposition === Because of the linearity of Maxwell's equations in a vacuum, solutions can be decomposed into a superposition of sinusoids. This is the basis for the Fourier transform method for the solution of differential equations. 
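To make the plane-wave relations above concrete, the following sketch (with purely illustrative values) converts a free-space wavelength into wavenumber and angular frequency via the dispersion relation k = ω/c = 2π/λ, and builds the transverse field amplitudes of a linearly polarized plane wave using |B0| = |E0|/c, as implied by Faraday's law; the sinusoidal form this corresponds to is written out next.

import numpy as np

# Minimal sketch: dispersion relation and transverse fields of a linearly
# polarized plane wave in vacuum. All numerical values are illustrative.
c = 299792458.0          # speed of light in vacuum, m/s
lam = 500e-9             # wavelength, m (green light)

k = 2 * np.pi / lam      # wavenumber, rad/m
omega = c * k            # angular frequency, rad/s
E0 = np.array([1.0, 0.0, 0.0])   # E along x, V/m
n = np.array([0.0, 0.0, 1.0])    # propagation direction along z
B0 = np.cross(n, E0) / c         # B along y, magnitude |E0|/c

print(f"k = {k:.3e} rad/m, omega = {omega:.3e} rad/s")
print("B0 =", B0)                # perpendicular to both E0 and n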
The sinusoidal solution to the electromagnetic wave equation takes the form E ( r , t ) = E 0 cos ⁡ ( ω t − k ⋅ r + ϕ 0 ) B ( r , t ) = B 0 cos ⁡ ( ω t − k ⋅ r + ϕ 0 ) {\displaystyle {\begin{aligned}\mathbf {E} (\mathbf {r} ,t)&=\mathbf {E} _{0}\cos(\omega t-\mathbf {k} \cdot \mathbf {r} +\phi _{0})\\\mathbf {B} (\mathbf {r} ,t)&=\mathbf {B} _{0}\cos(\omega t-\mathbf {k} \cdot \mathbf {r} +\phi _{0})\end{aligned}}} where t is time (in seconds), ω is the angular frequency (in radians per second), k = (kx, ky, kz) is the wave vector (in radians per meter), and ϕ 0 {\displaystyle \phi _{0}} is the phase angle (in radians). The wave vector is related to the angular frequency by k = | k | = ω c = 2 π λ {\displaystyle k=|\mathbf {k} |={\omega \over c}={2\pi \over \lambda }} where k is the wavenumber and λ is the wavelength. The electromagnetic spectrum is a plot of the field magnitudes (or energies) as a function of wavelength. === Multipole expansion === Assuming monochromatic fields varying in time as e − i ω t {\displaystyle e^{-i\omega t}} , if one uses Maxwell's Equations to eliminate B, the electromagnetic wave equation reduces to the Helmholtz equation for E: ( ∇ 2 + k 2 ) E = 0 , B = − i k ∇ × E , {\displaystyle (\nabla ^{2}+k^{2})\mathbf {E} =0,\,\mathbf {B} =-{\frac {i}{k}}\nabla \times \mathbf {E} ,} with k = ω/c as given above. Alternatively, one can eliminate E in favor of B to obtain: ( ∇ 2 + k 2 ) B = 0 , E = − i k ∇ × B . {\displaystyle (\nabla ^{2}+k^{2})\mathbf {B} =0,\,\mathbf {E} =-{\frac {i}{k}}\nabla \times \mathbf {B} .} A generic electromagnetic field with frequency ω can be written as a sum of solutions to these two equations. The three-dimensional solutions of the Helmholtz Equation can be expressed as expansions in spherical harmonics with coefficients proportional to the spherical Bessel functions. However, applying this expansion to each vector component of E or B will give solutions that are not generically divergence-free (∇ ⋅ E = ∇ ⋅ B = 0), and therefore require additional restrictions on the coefficients. The multipole expansion circumvents this difficulty by expanding not E or B, but r ⋅ E or r ⋅ B into spherical harmonics. These expansions still solve the original Helmholtz equations for E and B because for a divergence-free field F, ∇2 (r ⋅ F) = r ⋅ (∇2 F). The resulting expressions for a generic electromagnetic field are: E = e − i ω t ∑ l , m l ( l + 1 ) [ a E ( l , m ) E l , m ( E ) + a M ( l , m ) E l , m ( M ) ] B = e − i ω t ∑ l , m l ( l + 1 ) [ a E ( l , m ) B l , m ( E ) + a M ( l , m ) B l , m ( M ) ] , {\displaystyle {\begin{aligned}\mathbf {E} &=e^{-i\omega t}\sum _{l,m}{\sqrt {l(l+1)}}\left[a_{E}(l,m)\mathbf {E} _{l,m}^{(E)}+a_{M}(l,m)\mathbf {E} _{l,m}^{(M)}\right]\\\mathbf {B} &=e^{-i\omega t}\sum _{l,m}{\sqrt {l(l+1)}}\left[a_{E}(l,m)\mathbf {B} _{l,m}^{(E)}+a_{M}(l,m)\mathbf {B} _{l,m}^{(M)}\right]\,,\end{aligned}}} where E l , m ( E ) {\displaystyle \mathbf {E} _{l,m}^{(E)}} and B l , m ( E ) {\displaystyle \mathbf {B} _{l,m}^{(E)}} are the electric multipole fields of order (l, m), and E l , m ( M ) {\displaystyle \mathbf {E} _{l,m}^{(M)}} and B l , m ( M ) {\displaystyle \mathbf {B} _{l,m}^{(M)}} are the corresponding magnetic multipole fields, and aE(l, m) and aM(l, m) are the coefficients of the expansion. 
The multipole fields are given by B l , m ( E ) = l ( l + 1 ) [ B l ( 1 ) h l ( 1 ) ( k r ) + B l ( 2 ) h l ( 2 ) ( k r ) ] Φ l , m E l , m ( E ) = i k ∇ × B l , m ( E ) E l , m ( M ) = l ( l + 1 ) [ E l ( 1 ) h l ( 1 ) ( k r ) + E l ( 2 ) h l ( 2 ) ( k r ) ] Φ l , m B l , m ( M ) = − i k ∇ × E l , m ( M ) , {\displaystyle {\begin{aligned}\mathbf {B} _{l,m}^{(E)}&={\sqrt {l(l+1)}}\left[B_{l}^{(1)}h_{l}^{(1)}(kr)+B_{l}^{(2)}h_{l}^{(2)}(kr)\right]\mathbf {\Phi } _{l,m}\\\mathbf {E} _{l,m}^{(E)}&={\frac {i}{k}}\nabla \times \mathbf {B} _{l,m}^{(E)}\\\mathbf {E} _{l,m}^{(M)}&={\sqrt {l(l+1)}}\left[E_{l}^{(1)}h_{l}^{(1)}(kr)+E_{l}^{(2)}h_{l}^{(2)}(kr)\right]\mathbf {\Phi } _{l,m}\\\mathbf {B} _{l,m}^{(M)}&=-{\frac {i}{k}}\nabla \times \mathbf {E} _{l,m}^{(M)}\,,\end{aligned}}} where hl(1,2)(x) are the spherical Hankel functions, El(1,2) and Bl(1,2) are determined by boundary conditions, and Φ l , m = 1 l ( l + 1 ) ( r × ∇ ) Y l , m {\displaystyle \mathbf {\Phi } _{l,m}={\frac {1}{\sqrt {l(l+1)}}}(\mathbf {r} \times \nabla )Y_{l,m}} are vector spherical harmonics normalized so that ∫ Φ l , m ∗ ⋅ Φ l ′ , m ′ d Ω = δ l , l ′ δ m , m ′ . {\displaystyle \int \mathbf {\Phi } _{l,m}^{*}\cdot \mathbf {\Phi } _{l',m'}d\Omega =\delta _{l,l'}\delta _{m,m'}.} The multipole expansion of the electromagnetic field finds application in a number of problems involving spherical symmetry, for example antennae radiation patterns, or nuclear gamma decay. In these applications, one is often interested in the power radiated in the far-field. In this regions, the E and B fields asymptotically approach B ≈ e i ( k r − ω t ) k r ∑ l , m ( − i ) l + 1 [ a E ( l , m ) Φ l , m + a M ( l , m ) r ^ × Φ l , m ] E ≈ B × r ^ . {\displaystyle {\begin{aligned}\mathbf {B} &\approx {\frac {e^{i(kr-\omega t)}}{kr}}\sum _{l,m}(-i)^{l+1}\left[a_{E}(l,m)\mathbf {\Phi } _{l,m}+a_{M}(l,m)\mathbf {\hat {r}} \times \mathbf {\Phi } _{l,m}\right]\\\mathbf {E} &\approx \mathbf {B} \times \mathbf {\hat {r}} .\end{aligned}}} The angular distribution of the time-averaged radiated power is then given by d P d Ω ≈ 1 2 k 2 | ∑ l , m ( − i ) l + 1 [ a E ( l , m ) Φ l , m × r ^ + a M ( l , m ) Φ l , m ] | 2 . {\displaystyle {\frac {dP}{d\Omega }}\approx {\frac {1}{2k^{2}}}\left|\sum _{l,m}(-i)^{l+1}\left[a_{E}(l,m)\mathbf {\Phi } _{l,m}\times \mathbf {\hat {r}} +a_{M}(l,m)\mathbf {\Phi } _{l,m}\right]\right|^{2}.} == See also == === Theory and experiment === === Phenomena and applications === === Biographies === == Notes == == Further reading == === Electromagnetism === ==== Journal articles ==== Maxwell, James Clerk, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459-512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.) ==== Undergraduate-level textbooks ==== Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 0-13-805326-X. Tipler, Paul (2004). Physics for Scientists and Engineers: Electricity, Magnetism, Light, and Elementary Modern Physics (5th ed.). W. H. Freeman. ISBN 0-7167-0810-8. Edward M. Purcell, Electricity and Magnetism (McGraw-Hill, New York, 1985). ISBN 0-07-004908-4. Hermann A. Haus and James R. Melcher, Electromagnetic Fields and Energy (Prentice-Hall, 1989) ISBN 0-13-249020-X. Banesh Hoffmann, Relativity and Its Roots (Freeman, New York, 1983). ISBN 0-7167-1478-7. David H. Staelin, Ann W. 
Morgenthaler, and Jin Au Kong, Electromagnetic Waves (Prentice-Hall, 1994) ISBN 0-13-225871-4. Charles F. Stevens, The Six Core Theories of Modern Physics, (MIT Press, 1995) ISBN 0-262-69188-4. Markus Zahn, Electromagnetic Field Theory: a problem solving approach, (John Wiley & Sons, 1979) ISBN 0-471-02198-9 ==== Graduate-level textbooks ==== Jackson, John D. (1998). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X. Landau, L. D., The Classical Theory of Fields (Course of Theoretical Physics: Volume 2), (Butterworth-Heinemann: Oxford, 1987). ISBN 0-08-018176-7. Maxwell, James C. (1954). A Treatise on Electricity and Magnetism. Dover. ISBN 0-486-60637-6. {{cite book}}: ISBN / Date incompatibility (help) Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation, (1970) W.H. Freeman, New York; ISBN 0-7167-0344-0. (Provides a treatment of Maxwell's equations in terms of differential forms.) === Vector calculus === P. C. Matthews Vector Calculus, Springer 1998, ISBN 3-540-76180-2 H. M. Schey, Div Grad Curl and all that: An informal text on vector calculus, 4th edition (W. W. Norton & Company, 2005) ISBN 0-393-92516-1.
Wikipedia/Electromagnetic_wave_equation
In astrophysics, the Tolman–Oppenheimer–Volkoff (TOV) equation constrains the structure of a spherically symmetric body of isotropic material which is in static gravitational equilibrium, as modeled by general relativity. The equation is d P d r = − G m r 2 ρ ( 1 + P ρ c 2 ) ( 1 + 4 π r 3 P m c 2 ) ( 1 − 2 G m r c 2 ) − 1 {\displaystyle {\frac {dP}{dr}}=-{\frac {Gm}{r^{2}}}\rho \left(1+{\frac {P}{\rho c^{2}}}\right)\left(1+{\frac {4\pi r^{3}P}{mc^{2}}}\right)\left(1-{\frac {2Gm}{rc^{2}}}\right)^{-1}} Here, r {\textstyle r} is a radial coordinate, and ρ ( r ) {\textstyle \rho (r)} and P ( r ) {\textstyle P(r)} are the density and pressure, respectively, of the material at radius r {\textstyle r} . The quantity m ( r ) {\textstyle m(r)} , the total mass within r {\textstyle r} , is discussed below. The equation is derived by solving the Einstein equations for a general time-invariant, spherically symmetric metric. For a solution to the Tolman–Oppenheimer–Volkoff equation, this metric will take the form d s 2 = e ν c 2 d t 2 − ( 1 − 2 G m r c 2 ) − 1 d r 2 − r 2 ( d θ 2 + sin 2 ⁡ θ d ϕ 2 ) {\displaystyle ds^{2}=e^{\nu }c^{2}\,dt^{2}-\left(1-{\frac {2Gm}{rc^{2}}}\right)^{-1}\,dr^{2}-r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)} where ν ( r ) {\textstyle \nu (r)} is determined by the constraint d ν d r = − ( 2 P + ρ c 2 ) d P d r {\displaystyle {\frac {d\nu }{dr}}=-\left({\frac {2}{P+\rho c^{2}}}\right){\frac {dP}{dr}}} When supplemented with an equation of state, F ( ρ , P ) = 0 {\textstyle F(\rho ,P)=0} , which relates density to pressure, the Tolman–Oppenheimer–Volkoff equation completely determines the structure of a spherically symmetric body of isotropic material in equilibrium. If terms of order 1 / c 2 {\textstyle 1/c^{2}} are neglected, the Tolman–Oppenheimer–Volkoff equation becomes the Newtonian hydrostatic equation, used to find the equilibrium structure of a spherically symmetric body of isotropic material when general-relativistic corrections are not important. If the equation is used to model a bounded sphere of material in a vacuum, the zero-pressure condition P ( r ) = 0 {\textstyle P(r)=0} and the condition e ν = 1 − 2 G m / c 2 r {\textstyle e^{\nu }=1-2Gm/c^{2}r} should be imposed at the boundary. The second boundary condition is imposed so that the metric at the boundary is continuous with the unique static spherically symmetric solution to the vacuum field equations, the Schwarzschild metric: d s 2 = ( 1 − 2 G M r c 2 ) c 2 d t 2 − ( 1 − 2 G M r c 2 ) − 1 d r 2 − r 2 ( d θ 2 + sin 2 ⁡ θ d ϕ 2 ) {\displaystyle ds^{2}=\left(1-{\frac {2GM}{rc^{2}}}\right)c^{2}\,dt^{2}-\left(1-{\frac {2GM}{rc^{2}}}\right)^{-1}\,dr^{2}-r^{2}(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2})} == Total mass == m ( r ) {\textstyle m(r)} is the total mass contained inside radius r {\textstyle r} , as measured by the gravitational field felt by a distant observer. It satisfies m ( 0 ) = 0 {\textstyle m(0)=0} . d m d r = 4 π r 2 ρ {\displaystyle {\frac {dm}{dr}}=4\pi r^{2}\rho } Here, M {\textstyle M} is the total mass of the object, again, as measured by the gravitational field felt by a distant observer. 
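The structure equations quoted above lend themselves to direct numerical integration: starting from a chosen central pressure, dP/dr and dm/dr are stepped outward until the pressure falls to zero, which locates the surface R and the mass M = m(R) whose boundary conditions are discussed below. The following is only a schematic sketch, not material from the article: it assumes a toy polytropic equation of state P = K ρ2 with arbitrarily chosen constants, so the printed numbers are purely indicative.

import math

# Schematic Euler integration of the TOV equation for a toy polytrope P = K*rho^2.
# K, rho_c and the step size are illustrative assumptions, not article values.
G, c = 6.674e-11, 2.998e8          # SI units
K = 0.01                           # toy polytropic constant (assumed)
rho_c = 2.0e17                     # central density, kg/m^3 (assumed)
P_c = K * rho_c**2                 # central pressure

r, dr = 1.0, 1.0                   # start slightly off r = 0; 1 m steps
m = (4.0 / 3.0) * math.pi * r**3 * rho_c
P = P_c
while P > 1e-6 * P_c:              # stop when the pressure has essentially vanished
    rho = math.sqrt(P / K)         # invert the equation of state
    dPdr = (-G * m * rho / r**2
            * (1 + P / (rho * c**2))
            * (1 + 4 * math.pi * r**3 * P / (m * c**2))
            / (1 - 2 * G * m / (r * c**2)))
    dmdr = 4 * math.pi * r**2 * rho
    P += dPdr * dr                 # simple Euler step; adequate for a sketch
    m += dmdr * dr
    r += dr

print(f"surface radius R ~ {r/1e3:.1f} km, mass M = m(R) ~ {m:.3e} kg")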
If the boundary is at r = R {\textstyle r=R} , continuity of the metric and the definition of m ( r ) {\textstyle m(r)} require that M = m ( R ) = ∫ 0 R 4 π r 2 ρ d r {\displaystyle M=m(R)=\int _{0}^{R}4\pi r^{2}\rho \,dr} Computing the mass by integrating the density of the object over its volume, on the other hand, will yield the larger value M 1 = ∫ 0 R 4 π r 2 ρ 1 − 2 G m r c 2 d r {\displaystyle M_{1}=\int _{0}^{R}{\frac {4\pi r^{2}\rho }{\sqrt {1-{\frac {2Gm}{rc^{2}}}}}}\,dr} The difference between these two quantities, δ M = ∫ 0 R 4 π r 2 ρ ( 1 − 1 1 − 2 G m r c 2 ) d r {\displaystyle \delta M=\int _{0}^{R}4\pi r^{2}\rho \left(1-{\frac {1}{\sqrt {1-{\frac {2Gm}{rc^{2}}}}}}\right)\,dr} will be the gravitational binding energy of the object divided by c 2 {\textstyle c^{2}} and it is negative. == Derivation from general relativity == Let us assume a static, spherically symmetric perfect fluid. The metric components are similar to those for the Schwarzschild metric: c 2 d τ 2 = g μ ν d x μ d x ν = e ν c 2 d t 2 − e λ d r 2 − r 2 d θ 2 − r 2 sin 2 ⁡ θ d ϕ 2 {\displaystyle c^{2}\,d\tau ^{2}=g_{\mu \nu }\,dx^{\mu }\,dx^{\nu }=e^{\nu }c^{2}\,dt^{2}-e^{\lambda }\,dr^{2}-r^{2}\,d\theta ^{2}-r^{2}\sin ^{2}\theta \,d\phi ^{2}} By the perfect fluid assumption, the stress-energy tensor is diagonal (in the central spherical coordinate system), with eigenvalues of energy density and pressure: T 0 0 = ρ c 2 {\displaystyle T_{0}^{0}=\rho c^{2}} and T i j = − P δ i j {\displaystyle T_{i}^{j}=-P\delta _{i}^{j}} Where ρ ( r ) {\textstyle \rho (r)} is the fluid density and P ( r ) {\textstyle P(r)} is the fluid pressure. To proceed further, we solve Einstein's field equations: 8 π G c 4 T μ ν = G μ ν {\displaystyle {\frac {8\pi G}{c^{4}}}T_{\mu \nu }=G_{\mu \nu }} Let us first consider the G 00 {\textstyle G_{00}} component: 8 π G c 4 ρ c 2 e ν = e ν r 2 ( 1 − d d r [ r e − λ ] ) {\displaystyle {\frac {8\pi G}{c^{4}}}\rho c^{2}e^{\nu }={\frac {e^{\nu }}{r^{2}}}\left(1-{\frac {d}{dr}}[re^{-\lambda }]\right)} Integrating this expression from 0 to r {\textstyle r} , we obtain e − λ = 1 − 2 G m r c 2 {\displaystyle e^{-\lambda }=1-{\frac {2Gm}{rc^{2}}}} where m ( r ) {\textstyle m(r)} is as defined in the previous section. Next, consider the G 11 {\textstyle G_{11}} component. Explicitly, we have − 8 π G c 4 P e λ = − r ν ′ + e λ − 1 r 2 {\displaystyle -{\frac {8\pi G}{c^{4}}}Pe^{\lambda }={\frac {-r\nu '+e^{\lambda }-1}{r^{2}}}} which we can simplify (using our expression for e λ {\textstyle e^{\lambda }} ) to d ν d r = 1 r ( 1 − 2 G m c 2 r ) − 1 ( 2 G m c 2 r + 8 π G c 4 r 2 P ) {\displaystyle {\frac {d\nu }{dr}}={\frac {1}{r}}\left(1-{\frac {2Gm}{c^{2}r}}\right)^{-1}\left({\frac {2Gm}{c^{2}r}}+{\frac {8\pi G}{c^{4}}}r^{2}P\right)} We obtain a second equation by demanding continuity of the stress-energy tensor: ∇ μ T ν μ = 0 {\textstyle \nabla _{\mu }T_{\,\nu }^{\mu }=0} . 
Observing that ∂ t ρ = ∂ t P = 0 {\textstyle \partial _{t}\rho =\partial _{t}P=0} (since the configuration is assumed to be static) and that ∂ ϕ P = ∂ θ P = 0 {\textstyle \partial _{\phi }P=\partial _{\theta }P=0} (since the configuration is also isotropic), we obtain in particular 0 = ∇ μ T 1 μ = − d P d r − 1 2 ( P + ρ c 2 ) d ν d r {\displaystyle 0=\nabla _{\mu }T_{1}^{\mu }=-{\frac {dP}{dr}}-{\frac {1}{2}}\left(P+\rho c^{2}\right){\frac {d\nu }{dr}}\;} Rearranging terms yields: d P d r = − ( ρ c 2 + P 2 ) d ν d r {\displaystyle {\frac {dP}{dr}}=-\left({\frac {\rho c^{2}+P}{2}}\right){\frac {d\nu }{dr}}\;} This gives us two expressions, both containing d ν / d r {\textstyle d\nu /dr} . Eliminating d ν / d r {\textstyle d\nu /dr} , we obtain: d P d r = − 1 r ( ρ c 2 + P 2 ) ( 2 G m c 2 r + 8 π G c 4 r 2 P ) ( 1 − 2 G m c 2 r ) − 1 {\displaystyle {\frac {dP}{dr}}=-{\frac {1}{r}}\left({\frac {\rho c^{2}+P}{2}}\right)\left({\frac {2Gm}{c^{2}r}}+{\frac {8\pi G}{c^{4}}}r^{2}P\right)\left(1-{\frac {2Gm}{c^{2}r}}\right)^{-1}} Pulling out a factor of G / r {\textstyle G/r} and rearranging factors of 2 and c 2 {\textstyle c^{2}} results in the Tolman–Oppenheimer–Volkoff equation: == History == Richard C. Tolman analyzed spherically symmetric metrics in 1934 and 1939. The form of the equation given here was derived by J. Robert Oppenheimer and George Volkoff in their 1939 paper, "On Massive Neutron Cores". In this paper, the equation of state for a degenerate Fermi gas of neutrons was used to calculate an upper limit of ~0.7 solar masses for the gravitational mass of a neutron star. Since this equation of state is not realistic for a neutron star, this limiting mass is likewise incorrect. Using gravitational wave observations from binary neutron star mergers (like GW170817) and the subsequent information from electromagnetic radiation (kilonova), the data suggest that the maximum mass limit is close to 2.17 solar masses. Earlier estimates for this limit range from 1.5 to 3.0 solar masses. == Post-Newtonian approximation == In the post-Newtonian approximation, i.e., gravitational fields that slightly deviates from Newtonian field, the equation can be expanded in powers of 1 / c 2 {\textstyle 1/c^{2}} . In other words, we have d P d r = − G m r 2 ρ ( 1 + P ρ c 2 + 4 π r 3 P m c 2 + 2 G m r c 2 ) + O ( c − 4 ) . {\displaystyle {\frac {dP}{dr}}=-{\frac {Gm}{r^{2}}}\rho \left(1+{\frac {P}{\rho c^{2}}}+{\frac {4\pi r^{3}P}{mc^{2}}}+{\frac {2Gm}{rc^{2}}}\right)+O(c^{-4}).} == See also == Chandrasekhar's white dwarf equation Hydrostatic equation Tolman–Oppenheimer–Volkoff limit Solutions of the Einstein field equations Static spherically symmetric perfect fluid == References ==
Wikipedia/Tolman–Oppenheimer–Volkoff_equation
Finite-difference time-domain (FDTD) or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics. == History == Finite difference schemes for time-dependent partial differential equations (PDEs) have been employed for many years in computational fluid dynamics problems, including the idea of using centered finite difference operators on staggered grids in space and time to achieve second-order accuracy. The novelty of Yee's FDTD scheme, presented in his seminal 1966 paper, was to apply centered finite difference operators on staggered grids in space and time for each electric and magnetic vector field component in Maxwell's curl equations. The descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym were originated by Allen Taflove in 1980. Since about 1990, FDTD techniques have emerged as primary means to computationally model many scientific and engineering problems dealing with electromagnetic wave interactions with material structures. Current FDTD modeling applications range from near-DC (ultralow-frequency geophysics involving the entire Earth-ionosphere waveguide) through microwaves (radar signature technology, antennas, wireless communications devices, digital interconnects, biomedical imaging/treatment) to visible light (photonic crystals, nanoplasmonics, solitons, and biophotonics). In 2006, an estimated 2,000 FDTD-related publications appeared in the science and engineering literature (see Popularity). As of 2013, there are at least 25 commercial/proprietary FDTD software vendors; 13 free-software/open-source-software FDTD projects; and 2 freeware/closed-source FDTD projects, some not for commercial use (see External links). === Development of FDTD and Maxwell's equations === An appreciation of the basis, technical development, and possible future of FDTD numerical techniques for Maxwell's equations can be developed by first considering their history. The following lists some of the key publications in this area. == FDTD models and methods == When Maxwell's differential equations are examined, it can be seen that the change in the E-field in time (the time derivative) is dependent on the change in the H-field across space (the curl). This results in the basic FDTD time-stepping relation that, at any point in space, the updated value of the E-field in time is dependent on the stored value of the E-field and the numerical curl of the local distribution of the H-field in space. The H-field is time-stepped in a similar manner. At any point in space, the updated value of the H-field in time is dependent on the stored value of the H-field and the numerical curl of the local distribution of the E-field in space. Iterating the E-field and H-field updates results in a marching-in-time process wherein sampled-data analogs of the continuous electromagnetic waves under consideration propagate in a numerical grid stored in the computer memory. This description holds true for 1-D, 2-D, and 3-D FDTD techniques. When multiple dimensions are considered, calculating the numerical curl can become complicated. Kane Yee's seminal 1966 paper proposed spatially staggering the vector components of the E-field and H-field about rectangular unit cells of a Cartesian computational grid so that each E-field vector component is located midway between a pair of H-field vector components, and conversely. 
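A minimal one-dimensional sketch of this staggering, combined with the leapfrog time-marching described in the next paragraph, is given below. The grid size, Courant number, and Gaussian source are illustrative choices rather than anything from Yee's paper, and the fields are scaled so that both vacuum update coefficients reduce to the Courant number.

import numpy as np

# Minimal 1-D Yee/FDTD sketch in normalized units (E scaled by the free-space
# impedance so both update coefficients are the Courant number S).
N = 200                 # number of E-field nodes (illustrative)
S = 0.5                 # Courant number c*dt/dx; must be <= 1 for stability
steps = 400

Ez = np.zeros(N)        # E nodes sit at integer grid points
Hy = np.zeros(N - 1)    # H nodes sit halfway between E nodes (staggered grid)

for n in range(steps):
    # leapfrog: H is advanced half a time-step offset from E, using the spatial curl
    Hy += S * (Ez[1:] - Ez[:-1])
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])
    # soft Gaussian source injected at the centre of the grid
    Ez[N // 2] += np.exp(-((n - 30) / 10.0) ** 2)

print("peak |Ez| after propagation:", np.abs(Ez).max())

The endpoints Ez[0] and Ez[-1] are simply left at zero here, which acts as a perfectly conducting boundary; absorbing boundaries such as PML, discussed later in this article, would replace that choice in a practical solver.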
This scheme, now known as a Yee lattice, has proven to be very robust, and remains at the core of many current FDTD software constructs. Furthermore, Yee proposed a leapfrog scheme for marching in time wherein the E-field and H-field updates are staggered so that E-field updates are conducted midway during each time-step between successive H-field updates, and conversely. On the plus side, this explicit time-stepping scheme avoids the need to solve simultaneous equations, and furthermore yields dissipation-free numerical wave propagation. On the minus side, this scheme mandates an upper bound on the time-step to ensure numerical stability. As a result, certain classes of simulations can require many thousands of time-steps for completion. === Using the FDTD method === To implement an FDTD solution of Maxwell's equations, a computational domain must first be established. The computational domain is simply the physical region over which the simulation will be performed. The E and H fields are determined at every point in space within that computational domain. The material of each cell within the computational domain must be specified. Typically, the material is either free-space (air), metal, or dielectric. Any material can be used as long as the permeability, permittivity, and conductivity are specified. The permittivity of dispersive materials in tabular form cannot be directly substituted into the FDTD scheme. Instead, it can be approximated using multiple Debye, Drude, Lorentz or critical point terms. This approximation can be obtained using open fitting programs and does not necessarily have physical meaning. Once the computational domain and the grid materials are established, a source is specified. The source can be current on a wire, applied electric field or impinging plane wave. In the last case FDTD can be used to simulate light scattering from arbitrary shaped objects, planar periodic structures at various incident angles, and photonic band structure of infinite periodic structures. Since the E and H fields are determined directly, the output of the simulation is usually the E or H field at a point or a series of points within the computational domain. The simulation evolves the E and H fields forward in time. Processing may be done on the E and H fields returned by the simulation. Data processing may also occur while the simulation is ongoing. While the FDTD technique computes electromagnetic fields within a compact spatial region, scattered and/or radiated far fields can be obtained via near-to-far-field transformations. === Strengths of FDTD modeling === Every modeling technique has strengths and weaknesses, and the FDTD method is no different. FDTD is a versatile modeling technique used to solve Maxwell's equations. It is intuitive, so users can easily understand how to use it and know what to expect from a given model. FDTD is a time-domain technique, and when a broadband pulse (such as a Gaussian pulse) is used as the source, then the response of the system over a wide range of frequencies can be obtained with a single simulation. This is useful in applications where resonant frequencies are not exactly known, or anytime that a broadband result is desired. Since FDTD calculates the E and H fields everywhere in the computational domain as they evolve in time, it lends itself to providing animated displays of the electromagnetic field movement through the model. 
This type of display is useful in understanding what is going on in the model, and to help ensure that the model is working correctly. The FDTD technique allows the user to specify the material at all points within the computational domain. A wide variety of linear and nonlinear dielectric and magnetic materials can be naturally and easily modeled. FDTD allows the effects of apertures to be determined directly. Shielding effects can be found, and the fields both inside and outside a structure can be found directly or indirectly. FDTD uses the E and H fields directly. Since most EMI/EMC modeling applications are interested in the E and H fields, it is convenient that no conversions must be made after the simulation has run to get these values. === Weaknesses of FDTD modeling === Since FDTD requires that the entire computational domain be gridded, and the grid spatial discretization must be sufficiently fine to resolve both the smallest electromagnetic wavelength and the smallest geometrical feature in the model, very large computational domains can be developed, which results in very long solution times. Models with long, thin features, (like wires) are difficult to model in FDTD because of the excessively large computational domain required. Methods such as eigenmode expansion can offer a more efficient alternative as they do not require a fine grid along the z-direction. There is no way to determine unique values for permittivity and permeability at a material interface. Space and time steps must satisfy the CFL condition, or the leapfrog integration used to solve the partial differential equation is likely to become unstable. FDTD finds the E/H fields directly everywhere in the computational domain. If the field values at some distance are desired, it is likely that this distance will force the computational domain to be excessively large. Far-field extensions are available for FDTD, but require some amount of postprocessing. Since FDTD simulations calculate the E and H fields at all points within the computational domain, the computational domain must be finite to permit its residence in the computer memory. In many cases this is achieved by inserting artificial boundaries into the simulation space. Care must be taken to minimize errors introduced by such boundaries. There are a number of available highly effective absorbing boundary conditions (ABCs) to simulate an infinite unbounded computational domain. Most modern FDTD implementations instead use a special absorbing "material", called a perfectly matched layer (PML) to implement absorbing boundaries. Because FDTD is solved by propagating the fields forward in the time domain, the electromagnetic time response of the medium must be modeled explicitly. For an arbitrary response, this involves a computationally expensive time convolution, although in most cases the time response of the medium (or Dispersion (optics)) can be adequately and simply modeled using either the recursive convolution (RC) technique, the auxiliary differential equation (ADE) technique, or the Z-transform technique. An alternative way of solving Maxwell's equations that can treat arbitrary dispersion easily is the pseudo-spectral spatial domain (PSSD), which instead propagates the fields forward in space. === Grid truncation techniques === The most commonly used grid truncation techniques for open-region FDTD modeling problems are the Mur absorbing boundary condition (ABC), the Liao ABC, and various perfectly matched layer (PML) formulations. 
The Mur and Liao techniques are simpler than PML. However, PML (which is technically an absorbing region rather than a boundary condition per se) can provide orders-of-magnitude lower reflections. The PML concept was introduced by J.-P. Berenger in a seminal 1994 paper in the Journal of Computational Physics. Since 1994, Berenger's original split-field implementation has been modified and extended to the uniaxial PML (UPML), the convolutional PML (CPML), and the higher-order PML. The latter two PML formulations have increased ability to absorb evanescent waves, and therefore can in principle be placed closer to a simulated scattering or radiating structure than Berenger's original formulation. To reduce undesired numerical reflection from the PML additional back absorbing layers technique can be used. == Popularity == Notwithstanding both the general increase in academic publication throughput during the same period and the overall expansion of interest in all Computational electromagnetics (CEM) techniques, there are seven primary reasons for the tremendous expansion of interest in FDTD computational solution approaches for Maxwell's equations: FDTD does not require a matrix inversion. Being a fully explicit computation, FDTD avoids the difficulties with matrix inversions that limit the size of frequency-domain integral-equation and finite-element electromagnetics models to generally fewer than 109 electromagnetic field unknowns. FDTD models with as many as 109 field unknowns have been run; there is no intrinsic upper bound to this number. FDTD is accurate and robust. The sources of error in FDTD calculations are well understood, and can be bounded to permit accurate models for a very large variety of electromagnetic wave interaction problems. FDTD treats impulsive behavior naturally. Being a time-domain technique, FDTD directly calculates the impulse response of an electromagnetic system. Therefore, a single FDTD simulation can provide either ultrawideband temporal waveforms or the sinusoidal steady-state response at any frequency within the excitation spectrum. FDTD treats nonlinear behavior naturally. Being a time-domain technique, FDTD directly calculates the nonlinear response of an electromagnetic system. This allows natural hybriding of FDTD with sets of auxiliary differential equations that describe nonlinearities from either the classical or semi-classical standpoint. One research frontier is the development of hybrid algorithms which join FDTD classical electrodynamics models with phenomena arising from quantum electrodynamics, especially vacuum fluctuations, such as the Casimir effect. FDTD is a systematic approach. With FDTD, specifying a new structure to be modeled is reduced to a problem of mesh generation rather than the potentially complex reformulation of an integral equation. For example, FDTD requires no calculation of structure-dependent Green functions. Parallel-processing computer architectures have come to dominate supercomputing. FDTD scales with high efficiency on parallel-processing CPU-based computers, and extremely well on recently developed GPU-based accelerator technology. Computer visualization capabilities are increasing rapidly. While this trend positively influences all numerical techniques, it is of particular advantage to FDTD methods, which generate time-marched arrays of field quantities suitable for use in color videos to illustrate the field dynamics. 
Taflove has argued that these factors combine to suggest that FDTD will remain one of the dominant computational electrodynamics techniques (as well as, potentially, for other multiphysics problems). == See also == Computational electromagnetics Eigenmode expansion Beam propagation method Finite-difference frequency-domain Finite element method Scattering-matrix method Discrete dipole approximation == References == == Further reading == == External links == Free software/Open-source software FDTD projects: FDTD++: advanced, fully featured FDTD software, along with sophisticated material models and predefined fits as well as discussion/support forums and email support openEMS (Fully 3D Cartesian & Cylindrical graded mesh EC-FDTD Solver, written in C++, using a Matlab/Octave-Interface) pFDTD (3D C++ FDTD codes developed by Se-Heon Kim) JFDTD (2D/3D C++ FDTD codes developed for nanophotonics by Jeffrey M. McMahon) WOLFSIM Archived 2008-07-02 at the Wayback Machine (NCSU) (2-D) Meep (MIT, 2D/3D/cylindrical parallel FDTD) (Geo-) Radar FDTD bigboy (unmaintained, no release files. must get source from cvs) Parallel (MPI&OpenMP) FDTD codes in C++ (developed by Zs. Szabó) FDTD code in Fortran 90 FDTD code in C for 2D EM Wave simulation Angora (3D parallel FDTD software package, maintained by Ilker R. Capoglu) GSvit (3D FDTD solver with graphics card computing support, written in C, graphical user interface XSvit available) gprMax (Open Source (GPLv3), 3D/2D FDTD modelling code in Python/Cython developed for GPR but can be used for general EM modelling.) Freeware/Closed source FDTD projects (some not for commercial use): EMTL (Electromagnetic Template Library) (Free C++ library for electromagnetic simulations. The current version mainly implements FDTD).
Wikipedia/Finite-difference_time-domain_method
In electromagnetism, a branch of fundamental physics, the matrix representations of the Maxwell's equations are a formulation of Maxwell's equations using matrices, complex numbers, and vector calculus. These representations are for a homogeneous medium, an approximation in an inhomogeneous medium. A matrix representation for an inhomogeneous medium was presented using a pair of matrix equations. A single equation using 4 × 4 matrices is necessary and sufficient for any homogeneous medium. For an inhomogeneous medium it necessarily requires 8 × 8 matrices. == Introduction == Maxwell's equations in the standard vector calculus formalism, in an inhomogeneous medium with sources, are: ∇ ⋅ D ( r , t ) = ρ ∇ × H ( r , t ) − ∂ ∂ t D ( r , t ) = J ∇ × E ( r , t ) + ∂ ∂ t B ( r , t ) = 0 ∇ ⋅ B ( r , t ) = 0 . {\displaystyle {\begin{aligned}&{\mathbf {\nabla } }\cdot {\mathbf {D} }\left({\mathbf {r} },t\right)=\rho \,\\&{\mathbf {\nabla } }\times {\mathbf {H} }\left({\mathbf {r} },t\right)-{\frac {\partial }{\partial t}}{\mathbf {D} }\left({\mathbf {r} },t\right)={\mathbf {J} }\,\\&{\mathbf {\nabla } }\times {\mathbf {E} }\left({\mathbf {r} },t\right)+{\frac {\partial }{\partial t}}{\mathbf {B} }\left({\mathbf {r} },t\right)=0\,\\&{\mathbf {\nabla } }\cdot {\mathbf {B} }\left({\mathbf {r} },t\right)=0\,.\end{aligned}}} The media is assumed to be linear, that is D = ε E , B = μ H {\displaystyle {\mathbf {D} }=\varepsilon \mathbf {E} \,,\quad \mathbf {B} =\mu \mathbf {H} } , where scalar ε = ε ( r , t ) {\displaystyle \varepsilon =\varepsilon (\mathbf {r} ,t)} is the permittivity of the medium and scalar μ = μ ( r , t ) {\displaystyle \mu =\mu (\mathbf {r} ,t)} the permeability of the medium (see constitutive equation). For a homogeneous medium ε {\displaystyle \varepsilon } and μ {\displaystyle \mu } are constants. The speed of light in the medium is given by v ( r , t ) = 1 ε ( r , t ) μ ( r , t ) {\displaystyle v({\mathbf {r} },t)={\frac {1}{\sqrt {\varepsilon ({\mathbf {r} },t)\,\mu ({\mathbf {r} },t)\,}}}} . In vacuum, ε 0 = {\displaystyle \varepsilon _{0}=\,} 8.85 × 10−12 C2·N−1·m−2 and μ 0 = 4 π {\displaystyle \mu _{0}=4\pi \,} × 10−7 H·m−1 One possible way to obtain the required matrix representation is to use the Riemann–Silberstein vector given by F + ( r , t ) = 1 2 [ ε ( r , t ) E ( r , t ) + i 1 μ ( r , t ) B ( r , t ) ] F − ( r , t ) = 1 2 [ ε ( r , t ) E ( r , t ) − i 1 μ ( r , t ) B ( r , t ) ] . {\displaystyle {\begin{aligned}{\mathbf {F} }^{+}\left({\mathbf {r} },t\right)&={\frac {1}{\sqrt {2\,}}}\left[{\sqrt {\varepsilon ({\mathbf {r} },t)\,}}\,{\mathbf {E} }\left({\mathbf {r} },t\right)+{\rm {i}}\,{\frac {1}{\sqrt {\mu ({\mathbf {r} },t)\,}}}{\mathbf {B} }\left({\mathbf {r} },t\right)\right]\\{\mathbf {F} }^{-}\left({\mathbf {r} },t\right)&={\frac {1}{\sqrt {2\,}}}\left[{\sqrt {\varepsilon ({\mathbf {r} },t)\,}}\,{\mathbf {E} }\left({\mathbf {r} },t\right)-{\rm {i}}\,{\frac {1}{\sqrt {\mu ({\mathbf {r} },t)\,}}}{\mathbf {B} }\left({\mathbf {r} },t\right)\right]\,.\end{aligned}}} If for a certain medium ε = ε ( r , t ) {\displaystyle \varepsilon =\varepsilon (\mathbf {r} ,t)} and μ = μ ( r , t ) {\displaystyle \mu =\mu (\mathbf {r} ,t)} are scalar constants (or can be treated as local scalar constants under certain approximations), then the vectors F ± ( r , t ) {\displaystyle {\mathbf {F} }^{\pm }(\mathbf {r} ,t)} satisfy i ∂ ∂ t F ± ( r , t ) = ± v ∇ × F ± ( r , t ) − 1 2 ϵ ( i J ) ∇ ⋅ F ± ( r , t ) = 1 2 ε ( ρ ) . 
{\displaystyle {\begin{aligned}{\rm {i}}\,{\frac {\partial }{\partial t}}{\mathbf {F} }^{\pm }\left({\mathbf {r} },t\right)&=\pm v\,{\mathbf {\nabla } }\times {\mathbf {F} }^{\pm }\left({\mathbf {r} },t\right)-{\frac {1}{\sqrt {2\epsilon \,}}}({\rm {i}}\,{\mathbf {J} })\\{\mathbf {\nabla } }\cdot {\mathbf {F} }^{\pm }\left({\mathbf {r} },t\right)&={\frac {1}{\sqrt {2\varepsilon \,}}}(\rho )\,.\end{aligned}}} Thus by using the Riemann–Silberstein vector, it is possible to reexpress the Maxwell's equations for a medium with constant ε = ε ( r , t ) {\displaystyle \varepsilon =\varepsilon (\mathbf {r} ,t)} and μ = μ ( r , t ) {\displaystyle \mu =\mu (\mathbf {r} ,t)} as a pair of constitutive equations. == Homogeneous medium == In order to obtain a single matrix equation instead of a pair, the following new functions are constructed using the components of the Riemann–Silberstein vector Ψ + ( r , t ) = [ − F x + + i F y + F z + F z + F x + + i F y + ] Ψ − ( r , t ) = [ − F x − − i F y − F z − F z − F x − − i F y − ] . {\displaystyle {\begin{aligned}\Psi ^{+}({\mathbf {r} },t)&=\left[{\begin{array}{c}-F_{x}^{+}+{\rm {i}}F_{y}^{+}\\F_{z}^{+}\\F_{z}^{+}\\F_{x}^{+}+{\rm {i}}F_{y}^{+}\end{array}}\right]\,\quad \Psi ^{-}({\mathbf {r} },t)=\left[{\begin{array}{c}-F_{x}^{-}-{\rm {i}}F_{y}^{-}\\F_{z}^{-}\\F_{z}^{-}\\F_{x}^{-}-{\rm {i}}F_{y}^{-}\end{array}}\right]\,.\end{aligned}}} The vectors for the sources are W + = ( 1 2 ϵ ) [ − J x + i J y J z − v ρ J z + v ρ J x + i J y ] W − = ( 1 2 ϵ ) [ − J x − i J y J z − v ρ J z + v ρ J x − i J y ] . {\displaystyle {\begin{aligned}W^{+}&=\left({\frac {1}{\sqrt {2\epsilon }}}\right)\left[{\begin{array}{c}-J_{x}+{\rm {i}}J_{y}\\J_{z}-v\rho \\J_{z}+v\rho \\J_{x}+{\rm {i}}J_{y}\end{array}}\right]\,\quad W^{-}=\left({\frac {1}{\sqrt {2\epsilon }}}\right)\left[{\begin{array}{c}-J_{x}-{\rm {i}}J_{y}\\J_{z}-v\rho \\J_{z}+v\rho \\J_{x}-{\rm {i}}J_{y}\end{array}}\right]\,.\end{aligned}}} Then, ∂ ∂ t Ψ + = − v { M ⋅ ∇ } Ψ + − W + ∂ ∂ t Ψ − = − v { M ∗ ⋅ ∇ } Ψ − − W − {\displaystyle {\begin{aligned}{\frac {\partial }{\partial t}}\Psi ^{+}&=-v\left\{{\mathbf {M} }\cdot {\mathbf {\nabla } }\right\}\Psi ^{+}-W^{+}\,\\{\frac {\partial }{\partial t}}\Psi ^{-}&=-v\left\{{\mathbf {M} }^{*}\cdot {\mathbf {\nabla } }\right\}\Psi ^{-}-W^{-}\,\end{aligned}}} where * denotes complex conjugation and the triplet, M = [Mx, My, Mz] is a vector whose component elements are abstract 4×4 matricies given by M x = [ 0 0 1 0 0 0 0 1 1 0 0 0 0 1 0 0 ] , {\displaystyle M_{x}={\begin{bmatrix}0&0&1&0\\0&0&0&1\\1&0&0&0\\0&1&0&0\end{bmatrix}},} M y = i [ 0 0 − 1 0 0 0 0 − 1 + 1 0 0 0 0 + 1 0 0 ] , {\displaystyle M_{y}={\rm {i}}{\begin{bmatrix}0&0&-1&0\\0&0&0&-1\\+1&0&0&0\\0&+1&0&0\end{bmatrix}},} M z = [ + 1 0 0 0 0 + 1 0 0 0 0 − 1 0 0 0 0 − 1 ] . {\displaystyle M_{z}={\begin{bmatrix}+1&0&0&0\\0&+1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}}\,.} The component M-matrices may be formed using: Ω = [ 0 − I 2 I 2 0 ] β = [ I 2 0 0 − I 2 ] , {\displaystyle \Omega ={\begin{bmatrix}{\mathbf {0} }&-{\mathbf {I} _{2}}\\{\mathbf {I} _{2}}&{\mathbf {0} }\end{bmatrix}}\,\qquad \beta ={\begin{bmatrix}{\mathbf {I} _{2}}&{\mathbf {0} }\\{\mathbf {0} }&-{\mathbf {I} _{2}}\end{bmatrix}}\,,} where I 2 = [ 1 0 0 1 ] , {\displaystyle \mathbf {I} _{2}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}\,,} from which, get: M x = − β Ω , M y = i Ω , M z = β . {\displaystyle M_{x}=-\beta \Omega ,\qquad M_{y}={\rm {i}}\Omega ,\qquad M_{z}=\beta \,.} Alternately, one may use the matrix J = − Ω . 
{\displaystyle J=-\Omega \,.} This differs from Ω only by a sign. For our purpose it is fine to use either Ω or J. However, they have a different meaning: J is contravariant and Ω is covariant. The matrix Ω corresponds to the Lagrange brackets of classical mechanics and J corresponds to the Poisson brackets. Note the important relation Ω = J − 1 . {\displaystyle \Omega =J^{-1}\,.} Each of the four Maxwell's equations is obtained from the matrix representation. This is done by taking the sums and differences of row-I with row-IV and row-II with row-III respectively. The first three give the y, x, and z components of the curl and the last one gives the divergence conditions. The matrices M are all non-singular and all are Hermitian. Moreover, they satisfy the usual (quaternion-like) algebra of the Dirac matrices, including M x M z = − M z M x M y M z = − M z M y M x 2 = M y 2 = M z 2 = I M x M y = − M y M x = i M z M y M z = − M z M y = i M x M z M x = − M x M z = i M y . {\displaystyle {\begin{aligned}M_{x}M_{z}=-M_{z}M_{x}\,\\M_{y}M_{z}=-M_{z}M_{y}\,\\\\M_{x}^{2}=M_{y}^{2}=M_{z}^{2}=I\,\\\\M_{x}M_{y}=-M_{y}M_{x}={\rm {i}}M_{z}\,\\M_{y}M_{z}=-M_{z}M_{y}={\rm {i}}M_{x}\,\\M_{z}M_{x}=-M_{x}M_{z}={\rm {i}}M_{y}\,.\end{aligned}}} (A numerical check of these relations is sketched below.) The (Ψ±, M) are not unique. Different choices of Ψ± would give rise to different M, such that the triplet M continues to satisfy the algebra of the Dirac matrices. The Ψ± constructed via the Riemann–Silberstein vector have certain advantages over the other possible choices. The Riemann–Silberstein vector is well known in classical electrodynamics and has certain interesting properties and uses. In deriving the above 4×4 matrix representation of Maxwell's equations, the spatial and temporal derivatives of ε(r, t) and μ(r, t) in the first two of Maxwell's equations have been ignored. The ε and μ have been treated as local constants. == Inhomogeneous medium == In an inhomogeneous medium, the spatial and temporal variations of ε = ε(r, t) and μ = μ(r, t) are not zero. That is, they are no longer local constants. Instead of using ε = ε(r, t) and μ = μ(r, t), it is advantageous to use two derived laboratory functions, namely the resistance function and the velocity function Velocity function: v ( r , t ) = 1 ϵ ( r , t ) μ ( r , t ) Resistance function: h ( r , t ) = μ ( r , t ) ϵ ( r , t ) . {\displaystyle {\begin{aligned}{\text{ Velocity function:}}\,v({\mathbf {r} },t)&={\frac {1}{\sqrt {\epsilon ({\mathbf {r} },t)\mu ({\mathbf {r} },t)}}}\\{\text{Resistance function:}}\,h({\mathbf {r} },t)&={\sqrt {\frac {\mu ({\mathbf {r} },t)}{\epsilon ({\mathbf {r} },t)}}}\,.\end{aligned}}} In terms of these functions: ε = 1 v h , μ = h v {\displaystyle \varepsilon ={\frac {1}{vh}}\,,\quad \mu ={\frac {h}{v}}} .
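Before continuing with the inhomogeneous case, the 4 × 4 algebra quoted above for the homogeneous medium can be checked numerically. The following sketch, which assumes NumPy is available, builds Mx, My, Mz from the Ω and β blocks defined earlier and verifies the squares, anticommutators, and cyclic products; it is an illustrative sanity check, not part of the original formulation.

```python
import numpy as np

I2 = np.eye(2)
O2 = np.zeros((2, 2))
Omega = np.block([[O2, -I2], [I2, O2]]).astype(complex)   # Omega
beta = np.block([[I2, O2], [O2, -I2]]).astype(complex)    # beta

Mx = -beta @ Omega
My = 1j * Omega
Mz = beta
I4 = np.eye(4, dtype=complex)

# Squares equal the identity: Mx^2 = My^2 = Mz^2 = I.
for M in (Mx, My, Mz):
    assert np.allclose(M @ M, I4)

# Pairwise anticommutation and the cyclic products i*M (quaternion-like algebra).
assert np.allclose(Mx @ My, -My @ Mx) and np.allclose(Mx @ My, 1j * Mz)
assert np.allclose(My @ Mz, -Mz @ My) and np.allclose(My @ Mz, 1j * Mx)
assert np.allclose(Mz @ Mx, -Mx @ Mz) and np.allclose(Mz @ Mx, 1j * My)

# The matrices are Hermitian, and Omega = J^{-1} with J = -Omega.
assert all(np.allclose(M, M.conj().T) for M in (Mx, My, Mz))
assert np.allclose(Omega, np.linalg.inv(-Omega))
```

The explicit matrices listed above could equally well be entered directly; constructing them from Ω and β simply mirrors the definitions in the text.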
These functions occur in the matrix representation through their logarithmic derivatives; u ( r , t ) = 1 2 v ( r , t ) ∇ v ( r , t ) = 1 2 ∇ { ln ⁡ v ( r , t ) } = − 1 2 ∇ { ln ⁡ n ( r , t ) } w ( r , t ) = 1 2 h ( r , t ) ∇ h ( r , t ) = 1 2 ∇ { ln ⁡ h ( r , t ) } {\displaystyle {\begin{aligned}{\mathbf {u} }({\mathbf {r} },t)&={\frac {1}{2v({\mathbf {r} },t)}}{\mathbf {\nabla } }v({\mathbf {r} },t)={\frac {1}{2}}{\mathbf {\nabla } }\left\{\ln v({\mathbf {r} },t)\right\}=-{\frac {1}{2}}{\mathbf {\nabla } }\left\{\ln n({\mathbf {r} },t)\right\}\\{\mathbf {w} }({\mathbf {r} },t)&={\frac {1}{2h({\mathbf {r} },t)}}{\mathbf {\nabla } }h({\mathbf {r} },t)={\frac {1}{2}}{\mathbf {\nabla } }\left\{\ln h({\mathbf {r} },t)\right\}\,\end{aligned}}} where n ( r , t ) = c v ( r , t ) {\displaystyle n({\mathbf {r} },t)={\frac {c}{v({\mathbf {r} },t)}}} is the refractive index of the medium. The following matrices naturally arise in the exact matrix representation of the Maxwell's equation in a medium Σ = [ σ 0 0 σ ] α = [ 0 σ σ 0 ] I = [ 1 0 0 1 ] {\displaystyle {\begin{aligned}{\mathbf {\Sigma } }=\left[{\begin{array}{cc}{\mathbf {\sigma } }&{\mathbf {0} }\\{\mathbf {0} }&{\mathbf {\sigma } }\end{array}}\right]\,\qquad {\mathbf {\alpha } }=\left[{\begin{array}{cc}{\mathbf {0} }&{\mathbf {\sigma } }\\{\mathbf {\sigma } }&{\mathbf {0} }\end{array}}\right]\,\qquad {\mathbf {I} }=\left[{\begin{array}{cc}{\mathbf {1} }&{\mathbf {0} }\\{\mathbf {0} }&{\mathbf {1} }\end{array}}\right]\,\end{aligned}}} where Σ are the Dirac spin matrices and α are the matrices used in the Dirac equation, and σ is the triplet of the Pauli matrices σ = ( σ x , σ y , σ z ) = [ ( 0 1 1 0 ) , ( 0 − i i 0 ) , ( 1 0 0 − 1 ) ] {\displaystyle {\mathbf {\sigma } }=(\sigma _{x},\sigma _{y},\sigma _{z})=\left[{\begin{pmatrix}0&1\\1&0\end{pmatrix}},{\begin{pmatrix}0&-{\rm {i}}\\{\rm {i}}&0\end{pmatrix}},{\begin{pmatrix}1&0\\0&-1\end{pmatrix}}\right]} Finally, the matrix representation is ∂ ∂ t [ I 0 0 I ] [ Ψ + Ψ − ] − v ˙ ( r , t ) 2 v ( r , t ) [ I 0 0 I ] [ Ψ + Ψ − ] + h ˙ ( r , t ) 2 h ( r , t ) [ 0 i β α y i β α y 0 ] [ Ψ + Ψ − ] = − v ( r , t ) [ { M ⋅ ∇ + Σ ⋅ u } − i β ( Σ ⋅ w ) α y − i β ( Σ ∗ ⋅ w ) α y { M ∗ ⋅ ∇ + Σ ∗ ⋅ u } ] [ Ψ + Ψ − ] − [ I 0 0 I ] [ W + W − ] {\displaystyle {\begin{aligned}&{\frac {\partial }{\partial t}}\left[{\begin{array}{cc}{\mathbf {I} }&{\mathbf {0} }\\{\mathbf {0} }&{\mathbf {I} }\end{array}}\right]\left[{\begin{array}{cc}\Psi ^{+}\\\Psi ^{-}\end{array}}\right]-{\frac {{\dot {v}}({\mathbf {r} },t)}{2v({\mathbf {r} },t)}}\left[{\begin{array}{cc}{\mathbf {I} }&{\mathbf {0} }\\{\mathbf {0} }&{\mathbf {I} }\end{array}}\right]\left[{\begin{array}{cc}\Psi ^{+}\\\Psi ^{-}\end{array}}\right]+{\frac {{\dot {h}}({\mathbf {r} },t)}{2h({\mathbf {r} },t)}}\left[{\begin{array}{cc}{\mathbf {0} }&{\rm {i}}\beta \alpha _{y}\\{\rm {i}}\beta \alpha _{y}&{\mathbf {0} }\end{array}}\right]\left[{\begin{array}{cc}\Psi ^{+}\\\Psi ^{-}\end{array}}\right]\\&=-v({\mathbf {r} },t)\left[{\begin{array}{ccc}\left\{{\mathbf {M} }\cdot {\mathbf {\nabla } }+{\mathbf {\Sigma } }\cdot {\mathbf {u} }\right\}&&-{\rm {i}}\beta \left({\mathbf {\Sigma } }\cdot {\mathbf {w} }\right)\alpha _{y}\\-{\rm {i}}\beta \left({\mathbf {\Sigma } }^{*}\cdot {\mathbf {w} }\right)\alpha _{y}&\left\{{\mathbf {M} }^{*}\cdot {\mathbf {\nabla } }+{\mathbf {\Sigma } }^{*}\cdot {\mathbf {u} }\right\}\end{array}}\right]\left[{\begin{array}{cc}\Psi ^{+}\\\Psi ^{-}\end{array}}\right]-\left[{\begin{array}{cc}{\mathbf {I} }&{\mathbf {0} }\\{\mathbf {0} }&{\mathbf {I} 
}\end{array}}\right]\left[{\begin{array}{c}W^{+}\\W^{-}\end{array}}\right]\,\end{aligned}}} The above representation contains thirteen 8 × 8 matrices. Ten of these are Hermitian. The exceptions are the three matrices that contain the components of w(r, t), the logarithmic gradient of the resistance function; these three matrices are antihermitian. Maxwell's equations have thus been expressed in matrix form for a medium with varying permittivity ε = ε(r, t) and permeability μ = μ(r, t), in the presence of sources. This representation uses a single matrix equation, instead of a pair of matrix equations. In this representation, using 8 × 8 matrices, it has been possible to express the coupling between the upper components (Ψ+) and the lower components (Ψ−) through the two laboratory functions. Moreover, the exact matrix representation has an algebraic structure very similar to that of the Dirac equation. Maxwell's equations can be derived from Fermat's principle of geometrical optics by the process of "wavization", analogous to the quantization of classical mechanics. == Applications == One of the early uses of the matrix forms of Maxwell's equations was to study certain symmetries and the similarities with the Dirac equation. The matrix form of Maxwell's equations is used as a candidate for the photon wavefunction. Historically, geometrical optics is based on Fermat's principle of least time. Geometrical optics can be completely derived from Maxwell's equations; this is traditionally done using the Helmholtz equation. The derivation of the Helmholtz equation from Maxwell's equations is an approximation, as one neglects the spatial and temporal derivatives of the permittivity and permeability of the medium. A new formalism of light beam optics has been developed, starting with Maxwell's equations in matrix form: a single entity containing all four Maxwell's equations. Such a prescription is expected to provide a deeper understanding of beam optics and polarization in a unified manner. The beam-optical Hamiltonian derived from this matrix representation has an algebraic structure very similar to that of the Dirac equation, making it amenable to the Foldy–Wouthuysen technique. This approach is very similar to one developed for the quantum theory of charged-particle beam optics. == References == === Notes === === Others ===
Wikipedia/Matrix_representation_of_Maxwell's_equations
In electromagnetism and applications, an inhomogeneous electromagnetic wave equation, or nonhomogeneous electromagnetic wave equation, is one of a set of wave equations describing the propagation of electromagnetic waves generated by nonzero source charges and currents. The source terms in the wave equations make the partial differential equations inhomogeneous; if the source terms are zero, the equations reduce to the homogeneous electromagnetic wave equations, which follow from Maxwell's equations. == Maxwell's equations == For reference, Maxwell's equations are summarized below in SI units and Gaussian units. They govern the electric field E and magnetic field B due to a source charge density ρ and current density J: where ε0 is the vacuum permittivity and μ0 is the vacuum permeability. Throughout, the relation ε 0 μ 0 = 1 c 2 {\displaystyle \varepsilon _{0}\mu _{0}={\dfrac {1}{c^{2}}}} is also used. == SI units == === E and B fields === Maxwell's equations can directly give inhomogeneous wave equations for the electric field E and magnetic field B. Substituting Gauss's law for electricity and Ampère's law into the curl of Faraday's law of induction, and using the curl of the curl identity ∇ × (∇ × X) = ∇(∇ ⋅ X) − ∇²X (the last term on the right side is the vector Laplacian, not the Laplacian applied to scalar functions) gives the wave equation for the electric field E: 1 c 2 ∂ 2 E ∂ t 2 − ∇ 2 E = − ( 1 ε 0 ∇ ρ + μ 0 ∂ J ∂ t ) . {\displaystyle {\dfrac {1}{c^{2}}}{\dfrac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =-\left({\dfrac {1}{\varepsilon _{0}}}\nabla \rho +\mu _{0}{\dfrac {\partial \mathbf {J} }{\partial t}}\right)\,.} Similarly substituting Gauss's law for magnetism into the curl of Ampère's circuital law (with Maxwell's additional time-dependent term), and using the curl of the curl identity, gives the wave equation for the magnetic field B: 1 c 2 ∂ 2 B ∂ t 2 − ∇ 2 B = μ 0 ∇ × J . {\displaystyle {\dfrac {1}{c^{2}}}{\dfrac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =\mu _{0}\nabla \times \mathbf {J} \,.} The left-hand side of each equation corresponds to wave motion (the d'Alembert operator acting on the fields), while the right-hand sides are the wave sources. The equations imply that EM waves are generated if there are gradients in charge density ρ, circulations in current density J, time-varying current density, or any mixture of these.
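The curl-of-the-curl identity invoked in this derivation can be checked symbolically. The following sketch, which assumes SymPy's vector module is available, verifies ∇ × (∇ × X) = ∇(∇ ⋅ X) − ∇²X for an arbitrary smooth vector field whose components are chosen purely for illustration.

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

# An arbitrary smooth vector field; the components are illustrative only.
X = (sp.sin(y) * sp.exp(z)) * R.i + (x**2 * z) * R.j + sp.cos(x * y) * R.k

def vector_laplacian(F):
    """Componentwise Laplacian of a vector field in Cartesian coordinates."""
    cx, cy, cz = F.to_matrix(R)
    lap = [sum(sp.diff(c, v, 2) for v in (x, y, z)) for c in (cx, cy, cz)]
    return lap[0] * R.i + lap[1] * R.j + lap[2] * R.k

lhs = curl(curl(X))
rhs = gradient(divergence(X)) - vector_laplacian(X)

# Every Cartesian component of the difference simplifies to zero.
assert all(sp.simplify(c) == 0 for c in (lhs - rhs).to_matrix(R))
```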
The above equation for the electric field can be transformed into a homogeneous wave equation with a so-called damping term if we study a problem where Ohm's law in differential form J f = σ E {\displaystyle \mathbf {J_{f}} =\sigma \mathbf {E} } holds (we assume J b = 0 {\displaystyle \mathbf {J_{b}} =0} , that is, we are dealing with homogeneous conductors whose relative permeability and permittivity are around 1), and by substituting 1 ε 0 ∇ ρ = ∇ ( ∇ ⋅ E ) {\displaystyle {\dfrac {1}{\varepsilon _{0}}}\nabla \rho =\nabla (\nabla \cdot \mathbf {E} )} from the differential form of Gauss's law and J = J b + J f = σ E {\displaystyle \mathbf {J=J_{b}+J_{f}} =\sigma \mathbf {E} } . The final homogeneous equation with only the unknown electric field and its partial derivatives is 1 c 2 ∂ 2 E ∂ t 2 − ∇ 2 E + ∇ ( ∇ ⋅ E ) + σ μ 0 ∂ E ∂ t = 0 {\displaystyle {\dfrac {1}{c^{2}}}{\dfrac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} +\nabla (\nabla \cdot \mathbf {E} )+\sigma \mu _{0}{\dfrac {\partial \mathbf {E} }{\partial t}}=0} The above homogeneous equation has infinitely many solutions, so boundary conditions must be specified for the electric field in order to find specific solutions. These forms of the wave equations are not often used in practice, as the source terms are inconveniently complicated. A simpler formulation, more commonly encountered in the literature and used in theory, is the electromagnetic potential formulation, presented next. === A and φ potential fields === Introducing the electric potential φ (a scalar potential) and the magnetic potential A (a vector potential) defined from the E and B fields by: E = − ∇ φ − ∂ A ∂ t , B = ∇ × A . {\displaystyle \mathbf {E} =-\nabla \varphi -{\frac {\partial \mathbf {A} }{\partial t}}\,,\quad \mathbf {B} =\nabla \times \mathbf {A} \,.} The four Maxwell's equations in a vacuum with charge ρ and current J sources reduce to two equations. Gauss's law for electricity is: ∇ 2 φ + ∂ ∂ t ( ∇ ⋅ A ) = − 1 ε 0 ρ , {\displaystyle \nabla ^{2}\varphi +{\frac {\partial }{\partial t}}\left(\nabla \cdot \mathbf {A} \right)=-{\frac {1}{\varepsilon _{0}}}\rho \,,} where ∇ 2 {\displaystyle \nabla ^{2}} here is the Laplacian applied to scalar functions, and the Ampère-Maxwell law is: ∇ 2 A − 1 c 2 ∂ 2 A ∂ t 2 − ∇ ( 1 c 2 ∂ φ ∂ t + ∇ ⋅ A ) = − μ 0 J {\displaystyle \nabla ^{2}\mathbf {A} -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}-\nabla \left({\frac {1}{c^{2}}}{\frac {\partial \varphi }{\partial t}}+\nabla \cdot \mathbf {A} \right)=-\mu _{0}\mathbf {J} \,} where ∇ 2 {\displaystyle \nabla ^{2}} here is the vector Laplacian applied to vector fields. The source terms are now much simpler, but the wave terms are less obvious. Since the potentials are not unique, but have gauge freedom, these equations can be simplified by gauge fixing. A common choice is the Lorenz gauge condition: 1 c 2 ∂ φ ∂ t + ∇ ⋅ A = 0 {\displaystyle {\frac {1}{c^{2}}}{\frac {\partial \varphi }{\partial t}}+\nabla \cdot \mathbf {A} =0} Then the nonhomogeneous wave equations become uncoupled and symmetric in the potentials: ∇ 2 φ − 1 c 2 ∂ 2 φ ∂ t 2 = − 1 ε 0 ρ , ∇ 2 A − 1 c 2 ∂ 2 A ∂ t 2 = − μ 0 J .
{\displaystyle {\begin{aligned}\nabla ^{2}\varphi -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\varphi }{\partial t^{2}}}&=-{\frac {1}{\varepsilon _{0}}}\rho \,,\\[2.75ex]\nabla ^{2}\mathbf {A} -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}&=-\mu _{0}\mathbf {J} \,.\end{aligned}}} For reference, in cgs units these equations are ∇ 2 φ − 1 c 2 ∂ 2 φ ∂ t 2 = − 4 π ρ ∇ 2 A − 1 c 2 ∂ 2 A ∂ t 2 = − 4 π c J {\displaystyle {\begin{aligned}\nabla ^{2}\varphi -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\varphi }{\partial t^{2}}}&=-4\pi \rho \\[2ex]\nabla ^{2}\mathbf {A} -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}&=-{\frac {4\pi }{c}}\mathbf {J} \end{aligned}}} with the Lorenz gauge condition 1 c ∂ φ ∂ t + ∇ ⋅ A = 0 . {\displaystyle {\frac {1}{c}}{\frac {\partial \varphi }{\partial t}}+\nabla \cdot \mathbf {A} =0\,.} == Covariant form of the inhomogeneous wave equation == The relativistic Maxwell's equations can be written in covariant form as ◻ A μ = d e f ∂ β ∂ β A μ = d e f A μ , β β = − μ 0 J μ SI ◻ A μ = d e f ∂ β ∂ β A μ = d e f A μ , β β = − 4 π c J μ cgs {\displaystyle {\begin{aligned}\Box A^{\mu }&\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ \partial _{\beta }\partial ^{\beta }A^{\mu }\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ {A^{\mu ,\beta }}_{\beta }=-\mu _{0}J^{\mu }&&{\text{SI}}\\[1.15ex]\Box A^{\mu }&\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ \partial _{\beta }\partial ^{\beta }A^{\mu }\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ {A^{\mu ,\beta }}_{\beta }=-{\tfrac {4\pi }{c}}J^{\mu }&&{\text{cgs}}\end{aligned}}} where ◻ = ∂ β ∂ β = ∇ 2 − 1 c 2 ∂ 2 ∂ t 2 {\displaystyle \Box =\partial _{\beta }\partial ^{\beta }=\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}} is the d'Alembert operator, J μ = ( c ρ , J ) {\displaystyle J^{\mu }=\left(c\rho ,\mathbf {J} \right)} is the four-current, ∂ ∂ x a = d e f ∂ a = d e f , a = d e f ( ∂ / ∂ c t , ∇ ) {\displaystyle {\frac {\partial }{\partial x^{a}}}\ {\stackrel {\mathrm {def} }{=}}\ \partial _{a}\ {\stackrel {\mathrm {def} }{=}}\ {}_{,a}\ {\stackrel {\mathrm {def} }{=}}\ (\partial /\partial ct,\nabla )} is the 4-gradient, and A μ = ( φ / c , A ) SI A μ = ( φ , A ) cgs {\displaystyle {\begin{aligned}A^{\mu }&=(\varphi /c,\mathbf {A} )&&{\text{SI}}\\[1ex]A^{\mu }&=(\varphi ,\mathbf {A} )&&{\text{cgs}}\end{aligned}}} is the electromagnetic four-potential with the Lorenz gauge condition ∂ μ A μ = 0 . {\displaystyle \partial _{\mu }A^{\mu }=0\,.} == Curved spacetime == The electromagnetic wave equation is modified in two ways in curved spacetime, the derivative is replaced with the covariant derivative and a new term that depends on the curvature appears (SI units). − A α ; β β + R α β A β = μ 0 J α {\displaystyle -{A^{\alpha ;\beta }}_{\beta }+{R^{\alpha }}_{\beta }A^{\beta }=\mu _{0}J^{\alpha }} where R α β {\displaystyle {R^{\alpha }}_{\beta }} is the Ricci curvature tensor. Here the semicolon indicates covariant differentiation. To obtain the equation in cgs units, replace the permeability with 4π/c. The Lorenz gauge condition in curved spacetime is assumed: A μ ; μ = 0 . 
{\displaystyle {A^{\mu }}_{;\mu }=0\,.} == Solutions to the inhomogeneous electromagnetic wave equation == In the case that there are no boundaries surrounding the sources, the solutions (cgs units) of the nonhomogeneous wave equations are φ ( r , t ) = ∫ δ ( t ′ + 1 c | r − r ′ | − t ) | r − r ′ | ρ ( r ′ , t ′ ) d 3 r ′ d t ′ {\displaystyle \varphi (\mathbf {r} ,t)=\int {\frac {\delta \left(t'+{\frac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\rho (\mathbf {r} ',t')\,d^{3}\mathbf {r} 'dt'} and A ( r , t ) = ∫ δ ( t ′ + 1 c | r − r ′ | − t ) | r − r ′ | J ( r ′ , t ′ ) c d 3 r ′ d t ′ {\displaystyle \mathbf {A} (\mathbf {r} ,t)=\int {\frac {\delta \left(t'+{\frac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}{\frac {\mathbf {J} (\mathbf {r} ',t')}{c}}\,d^{3}\mathbf {r} 'dt'} where δ ( t ′ + 1 c | r − r ′ | − t ) {\displaystyle \delta \left(t'+{\tfrac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)} is a Dirac delta function. These solutions are known as the retarded Lorenz gauge potentials. They represent a superposition of spherical light waves traveling outward from the sources of the waves, from the present into the future. There are also advanced solutions (cgs units) φ ( r , t ) = ∫ δ ( t ′ − 1 c | r − r ′ | − t ) | r − r ′ | ρ ( r ′ , t ′ ) d 3 r ′ d t ′ {\displaystyle \varphi (\mathbf {r} ,t)=\int {\frac {\delta \left(t'-{\tfrac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\rho (\mathbf {r} ',t')\,d^{3}\mathbf {r} 'dt'} and A ( r , t ) = ∫ δ ( t ′ − 1 c | r − r ′ | − t ) | r − r ′ | J ( r ′ , t ′ ) c d 3 r ′ d t ′ . {\displaystyle \mathbf {A} (\mathbf {r} ,t)=\int {\frac {\delta \left(t'-{\tfrac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}{\mathbf {J} (\mathbf {r} ',t') \over c}\,d^{3}\mathbf {r} 'dt'\,.} These represent a superposition of spherical waves travelling from the future into the present. == See also == Wave equation Sinusoidal plane-wave solutions of the electromagnetic wave equation Larmor formula Covariant formulation of classical electromagnetism Maxwell's equations in curved spacetime Abraham–Lorentz force Green's function == References == === Electromagnetics === ==== Journal articles ==== ==== Undergraduate-level textbooks ==== ==== Graduate-level textbooks ==== === Vector Calculus & Further Topics ===
Wikipedia/Inhomogeneous_electromagnetic_wave_equation
Science in the medieval Islamic world was the science developed and practised during the Islamic Golden Age under the Abbasid Caliphate of Baghdad, the Umayyads of Córdoba, the Abbadids of Seville, the Samanids, the Ziyarids and the Buyids in Persia and beyond, spanning the period roughly between 786 and 1258. Islamic scientific achievements encompassed a wide range of subject areas, especially astronomy, mathematics, and medicine. Other subjects of scientific inquiry included alchemy and chemistry, botany and agronomy, geography and cartography, ophthalmology, pharmacology, physics, and zoology. Medieval Islamic science had practical purposes as well as the goal of understanding. For example, astronomy was useful for determining the Qibla, the direction in which to pray, botany had practical application in agriculture, as in the works of Ibn Bassal and Ibn al-'Awwam, and geography enabled Abu Zayd al-Balkhi to make accurate maps. Islamic mathematicians such as Al-Khwarizmi, Avicenna and Jamshīd al-Kāshī made advances in algebra, trigonometry, geometry and Arabic numerals. Islamic doctors described diseases like smallpox and measles, and challenged classical Greek medical theory. Al-Biruni, Avicenna and others described the preparation of hundreds of drugs made from medicinal plants and chemical compounds. Islamic physicists such as Ibn Al-Haytham, Al-Bīrūnī and others studied optics and mechanics as well as astronomy, and criticised Aristotle's view of motion. During the Middle Ages, Islamic science flourished across a wide area around the Mediterranean Sea and further afield, for several centuries, in a wide range of institutions. == Context and history == The Islamic era began in 622. Islamic armies eventually conquered Arabia, Egypt and Mesopotamia, and successfully displaced the Persian and Byzantine Empires from the region within a few decades. Within a century, Islam had reached the area of present-day Portugal in the west and Central Asia in the east. The Islamic Golden Age (roughly between 786 and 1258) spanned the period of the Abbasid Caliphate (750–1258), with stable political structures and flourishing trade. Major religious and cultural works of the Islamic empire were translated into Arabic and occasionally Persian. Islamic culture inherited Greek, Indic, Assyrian and Persian influences. A new common civilisation formed, based on Islam. An era of high culture and innovation ensued, with rapid growth in population and cities. The Arab Agricultural Revolution in the countryside brought more crops and improved agricultural technology, especially irrigation. This supported the larger population and enabled culture to flourish. From the 9th century onwards, scholars such as Al-Kindi translated Indian, Assyrian, Sasanian (Persian) and Greek knowledge, including the works of Aristotle, into Arabic. These translations supported advances by scientists across the Islamic world. Islamic science survived the initial Christian reconquest of Spain, including the fall of Seville in 1248, as work continued in the eastern centres (such as in Persia). After the completion of the Spanish reconquest in 1492, the Islamic world went into an economic and cultural decline. The Abbasid caliphate was followed by the Ottoman Empire (c. 1299–1922), centred in Turkey, and the Safavid Empire (1501–1736), centred in Persia, where work in the arts and sciences continued. 
== Fields of inquiry == Medieval Islamic scientific achievements encompassed a wide range of subject areas, especially mathematics, astronomy, and medicine. Other subjects of scientific inquiry included physics, alchemy and chemistry, ophthalmology, and geography and cartography. === Alchemy and chemistry === The early Islamic period saw the development of theoretical frameworks in alchemy and chemistry, laying the foundation for later advancements in both fields. The sulfur-mercury theory of metals, first found in Sirr al-khalīqa ("The Secret of Creation", c. 750–850, falsely attributed to Apollonius of Tyana), and in the writings attributed to Jabir ibn Hayyan (written c. 850–950), remained the basis of theories of metallic composition until the 18th century. The Emerald Tablet, a cryptic text that all later alchemists up to and including Isaac Newton saw as the foundation of their art, first occurs in the Sirr al-khalīqa and in one of the works attributed to Jabir. In practical chemistry, the works of Jabir, and those of the Persian alchemist and physician Abu Bakr al-Razi (c. 865–925), contain the earliest systematic classifications of chemical substances. Alchemists were also interested in artificially creating such substances. Jabir describes the synthesis of ammonium chloride (sal ammoniac) from organic substances, and Abu Bakr al-Razi experimented with the heating of ammonium chloride, vitriol, and other salts, which would eventually lead to the discovery of the mineral acids by 13th-century Latin alchemists such as pseudo-Geber. === Astronomy and cosmology === Astronomy became a major discipline within Islamic science. Astronomers devoted effort both towards understanding the nature of the cosmos and to practical purposes. One application involved determining the Qibla, the direction to face during prayer. Another was astrology, predicting events affecting human life and selecting suitable times for actions such as going to war or founding a city. Al-Battani (850–922) accurately determined the length of the solar year. He contributed to the Tables of Toledo, used by astronomers to predict the movements of the sun, moon and planets across the sky. Copernicus (1473–1543) later used some of Al-Battani's astronomic tables. Al-Zarqali (1028–1087) developed a more accurate astrolabe, used for centuries afterwards. He constructed a water clock in Toledo, discovered that the Sun's apogee moves slowly relative to the fixed stars, and obtained a good estimate of its motion for its rate of change. Nasir al-Din al-Tusi (1201–1274) wrote an important revision to Ptolemy's 2nd-century celestial model. When Tusi became Helagu's astrologer, he was given an observatory and gained access to Chinese techniques and observations. He developed trigonometry as a separate field, and compiled the most accurate astronomical tables available up to that time. === Botany and agronomy === The study of the natural world extended to a detailed examination of plants. The work done proved directly useful in the unprecedented growth of pharmacology across the Islamic world. Al-Dinawari (815–896) popularised botany in the Islamic world with his six-volume Kitab al-Nabat (Book of Plants). Only volumes 3 and 5 have survived, with part of volume 6 reconstructed from quoted passages. The surviving text describes 637 plants in alphabetical order from the letters sin to ya, so the whole book must have covered several thousand kinds of plants. 
Al-Dinawari described the phases of plant growth and the production of flowers and fruit. The thirteenth century encyclopedia compiled by Zakariya al-Qazwini (1203–1283) – ʿAjā'ib al-makhlūqāt (The Wonders of Creation) – contained, among many other topics, both realistic botany and fantastic accounts. For example, he described trees which grew birds on their twigs in place of leaves, but which could only be found in the far-distant British Isles. The use and cultivation of plants was documented in the 11th century by Muhammad bin Ibrāhīm Ibn Bassāl of Toledo in his book Dīwān al-filāha (The Court of Agriculture), and by Ibn al-'Awwam al-Ishbīlī (also called Abū l-Khayr al-Ishbīlī) of Seville in his 12th century book Kitāb al-Filāha (Treatise on Agriculture). Ibn Bassāl had travelled widely across the Islamic world, returning with a detailed knowledge of agronomy that fed into the Arab Agricultural Revolution. His practical and systematic book describes over 180 plants and how to propagate and care for them. It covered leaf- and root-vegetables, herbs, spices and trees. === Geography and cartography === The spread of Islam across Western Asia and North Africa encouraged an unprecedented growth in trade and travel by land and sea as far away as Southeast Asia, China, much of Africa, Scandinavia and even Iceland. Geographers worked to compile increasingly accurate maps of the known world, starting from many existing but fragmentary sources. Abu Zayd al-Balkhi (850–934), founder of the Balkhī school of cartography in Baghdad, wrote an atlas called Figures of the Regions (Suwar al-aqalim). Al-Biruni (973–1048) measured the radius of the earth using a new method. It involved observing the height of a mountain at Nandana (now in Pakistan). Al-Idrisi (1100–1166) drew a map of the world for Roger, the Norman King of Sicily (ruled 1105–1154). He also wrote the Tabula Rogeriana (Book of Roger), a geographic study of the peoples, climates, resources and industries of the whole of the world known at that time. The Ottoman admiral Piri Reis (c. 1470–1553) made a map of the New World and West Africa in 1513. He made use of maps from Greece, Portugal, Muslim sources, and perhaps one made by Christopher Columbus. He represented a part of a major tradition of Ottoman cartography. === Mathematics === Islamic mathematicians gathered, organised and clarified the mathematics they inherited from ancient Egypt, Greece, India, Mesopotamia and Persia, and went on to make innovations of their own. Islamic mathematics covered algebra, geometry and arithmetic. Algebra was mainly used for recreation: it had few practical applications at that time. Geometry was studied at different levels. Some texts contain practical geometrical rules for surveying and for measuring figures. Theoretical geometry was a necessary prerequisite for understanding astronomy and optics, and it required years of concentrated work. Early in the Abbasid caliphate (founded 750), soon after the foundation of Baghdad in 762, some mathematical knowledge was assimilated by al-Mansur's group of scientists from the pre-Islamic Persian tradition in astronomy. Astronomers from India were invited to the court of the caliph in the late eighth century; they explained the rudimentary trigonometrical techniques used in Indian astronomy. Ancient Greek works such as Ptolemy's Almagest and Euclid's Elements were translated into Arabic. 
By the second half of the ninth century, Islamic mathematicians were already making contributions to the most sophisticated parts of Greek geometry. Islamic mathematics reached its apogee in the Eastern part of the Islamic world between the tenth and twelfth centuries. Most medieval Islamic mathematicians wrote in Arabic, others in Persian. Al-Khwarizmi (8th–9th centuries) was instrumental in the adoption of the Hindu–Arabic numeral system and the development of algebra, introduced methods of simplifying equations, and used Euclidean geometry in his proofs. He was the first to treat algebra as an independent discipline in its own right, and presented the first systematic solution of linear and quadratic equations.: 14  Ibn Ishaq al-Kindi (801–873) worked on cryptography for the Abbasid Caliphate, and gave the first known recorded explanation of cryptanalysis and the first description of the method of frequency analysis. Avicenna (c. 980–1037) contributed to mathematical techniques such as casting out nines. Thābit ibn Qurra (835–901) calculated the solution to a chessboard problem involving an exponential series. Al-Farabi (c. 870–950) attempted to describe, geometrically, the repeating patterns popular in Islamic decorative motifs in his book Spiritual Crafts and Natural Secrets in the Details of Geometrical Figures. Omar Khayyam (1048–1131), known in the West as a poet, calculated the length of the year to within 5 decimal places, and found geometric solutions to all 13 forms of cubic equations, developing some quadratic equations still in use. Jamshīd al-Kāshī (c. 1380–1429) is credited with several theorems of trigonometry, including the law of cosines, also known as Al-Kashi's Theorem. He has been credited with the invention of decimal fractions, and with a method like Horner's to calculate roots. He calculated π correctly to 17 significant figures. Sometime around the seventh century, Islamic scholars adopted the Hindu–Arabic numeral system, describing their use in a standard type of text fī l-ḥisāb al hindī, (On the numbers of the Indians). A distinctive Western Arabic variant of the Eastern Arabic numerals began to emerge around the 10th century in the Maghreb and Al-Andalus (sometimes called ghubar numerals, though the term is not always accepted), which are the direct ancestor of the modern Arabic numerals used throughout the world. === Medicine === Islamic society paid careful attention to medicine, following a hadith enjoining the preservation of good health. Its physicians inherited knowledge and traditional medical beliefs from the civilisations of classical Greece, Rome, Syria, Persia and India. These included the writings of Hippocrates such as on the theory of the four humours, and the theories of Galen. al-Razi (c. 865–925) identified smallpox and measles, and recognized fever as a part of the body's defenses. He wrote a 23-volume compendium of Chinese, Indian, Persian, Syriac and Greek medicine. al-Razi questioned the classical Greek medical theory of how the four humours regulate life processes. He challenged Galen's work on several fronts, including the treatment of bloodletting, arguing that it was effective. al-Zahrawi (936–1013) was a surgeon whose most important surviving work is referred to as al-Tasrif (Medical Knowledge). It is a 30-volume set mainly discussing medical symptoms, treatments, and pharmacology. The last volume, on surgery, describes surgical instruments, supplies, and pioneering procedures. Avicenna (c. 
980–1037) wrote the major medical textbook, The Canon of Medicine. Ibn al-Nafis (1213–1288) wrote an influential book on medicine; it largely replaced Avicenna's Canon in the Islamic world. He wrote commentaries on Galen and on Avicenna's works. One of these commentaries, discovered in 1924, described the circulation of blood through the lungs. === Optics and ophthalmology === Optics developed rapidly in this period. By the ninth century, there were works on physiological, geometrical and physical optics. Topics covered included mirror reflection. Hunayn ibn Ishaq (809–873) wrote the book Ten Treatises on the Eye; this remained influential in the West until the 17th century. Abbas ibn Firnas (810–887) developed lenses for magnification and the improvement of vision. Ibn Sahl (c. 940–1000) discovered the law of refraction known as Snell's law. He used the law to produce the first Aspheric lenses that focused light without geometric aberrations. In the eleventh century Ibn al-Haytham (Alhazen, 965–1040) rejected the Greek ideas about vision, whether the Aristotelian tradition that held that the form of the perceived object entered the eye (but not its matter), or that of Euclid and Ptolemy which held that the eye emitted a ray. Al-Haytham proposed in his Book of Optics that vision occurs by way of light rays forming a cone with its vertex at the center of the eye. He suggested that light was reflected from different surfaces in different directions, thus causing objects to look different. He argued further that the mathematics of reflection and refraction needed to be consistent with the anatomy of the eye. He was also an early proponent of the scientific method, the concept that a hypothesis must be proved by experiments based on confirmable procedures or mathematical evidence, five centuries before Renaissance scientists. === Pharmacology === Advances in botany and chemistry in the Islamic world encouraged developments in pharmacology. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915) promoted the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation. His Liber servitoris provides instructions for preparing "simples" from which were compounded the complex drugs then used. Sabur Ibn Sahl (died 869) was the first physician to describe a large variety of drugs and remedies for ailments. Al-Muwaffaq, in the 10th century, wrote The foundations of the true properties of Remedies, describing chemicals such as arsenious oxide and silicic acid. He distinguished between sodium carbonate and potassium carbonate, and drew attention to the poisonous nature of copper compounds, especially copper vitriol, and also of lead compounds. Al-Biruni (973–1050) wrote the Kitab al-Saydalah (The Book of Drugs), describing in detail the properties of drugs, the role of pharmacy and the duties of the pharmacist. Ibn Sina (Avicenna) described 700 preparations, their properties, their mode of action and their indications. He devoted a whole volume to simples in The Canon of Medicine. Works by Masawaih al-Mardini (c. 925–1015) and by Ibn al-Wafid (1008–1074) were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by Mesue the Younger (died 1015) and as the Medicamentis simplicibus by Abenguefit (c. 997 – 1074) respectively. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Mardini under the title De Veneris. 
Ibn al-Baytar (1197–1248), in his Al-Jami fi al-Tibb, described a thousand simples and drugs based directly on Mediterranean plants collected along the entire coast between Syria and Spain, for the first time exceeding the coverage provided by Dioscorides in classical times. Islamic physicians such as Ibn Sina described clinical trials for determining the efficacy of medical drugs and substances. === Physics === The fields of physics studied in this period, apart from optics and astronomy which are described separately, are aspects of mechanics: statics, dynamics, kinematics and motion. In the sixth century John Philoponus (c. 490 – c. 570) rejected the Aristotelian view of motion. He argued instead that an object acquires an inclination to move when it has a motive power impressed on it. In the eleventh century Ibn Sina adopted roughly the same idea, namely that a moving object has force which is dissipated by external agents like air resistance. Ibn Sina distinguished between "force" and "inclination" (mayl); he claimed that an object gained mayl when the object is in opposition to its natural motion. He concluded that continuation of motion depends on the inclination that is transferred to the object, and that the object remains in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon. That view accords with Newton's first law of motion, on inertia. As a non-Aristotelian suggestion, it was essentially abandoned until it was described as "impetus" by Jean Buridan (c. 1295–1363), who was likely influenced by Ibn Sina's Book of Healing. In the Shadows, Abū Rayḥān al-Bīrūnī (973–1048) describes non-uniform motion as the result of acceleration. Ibn-Sina's theory of mayl tried to relate the velocity and weight of a moving object, a precursor of the concept of momentum. Aristotle's theory of motion stated that a constant force produces a uniform motion; Abu'l-Barakāt al-Baghdādī (c. 1080 – 1164/5) disagreed, arguing that velocity and acceleration are two different things, and that force is proportional to acceleration, not to velocity. The Banu Musa brothers, Jafar-Muhammad, Ahmad and al-Hasan (c. early 9th century) invented automated devices described in their Book of Ingenious Devices. Advances on the subject were also made by al-Jazari and Ibn Ma'ruf. === Zoology === Many classical works, including those of Aristotle, were transmitted from Greek to Syriac, then to Arabic, then to Latin in the Middle Ages. Aristotle's zoology remained dominant in its field for two thousand years. The Kitāb al-Hayawān (كتاب الحيوان, English: Book of Animals) is a 9th-century Arabic translation of History of Animals: 1–10, On the Parts of Animals: 11–14, and Generation of Animals: 15–19. The book was mentioned by Al-Kindī (died 850), and commented on by Avicenna (Ibn Sīnā) in his The Book of Healing. Avempace (Ibn Bājja) and Averroes (Ibn Rushd) commented on and criticised On the Parts of Animals and Generation of Animals. == Significance == Muslim scientists helped in laying the foundations for an experimental science with their contributions to the scientific method and their empirical, experimental and quantitative approach to scientific inquiry. In a more general sense, the positive achievement of Islamic science was simply to flourish, for centuries, in a wide range of institutions from observatories to libraries, madrasas to hospitals and courts, both at the height of the Islamic golden age and for some centuries afterwards. 
It did not lead to a scientific revolution like that in Early modern Europe, but such external comparisons are probably to be rejected as imposing "chronologically and culturally alien standards" on a successful medieval culture. == See also == == References == == Notes == == Sources == Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge University Press. ISBN 978-0-521-82750-8. Masood, Ehsan (2009). Science and Islam: A History. Icon Books. ISBN 978-1-785-78202-2. McClellan, James E. III; Dorn, Harold, eds. (2006). Science and Technology in World History (2 ed.). Johns Hopkins. ISBN 978-0-8018-8360-6. Morelon, Régis; Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science. Vol. 3. Routledge. ISBN 978-0-415-12410-2. Turner, Howard R. (1997). Science in Medieval Islam: An Illustrated Introduction. University of Texas Press. ISBN 978-0-292-78149-8. == Further reading == Al-Daffa, Ali Abdullah; Stroyls, J.J. (1984). Studies in the exact sciences in medieval Islam. Wiley. ISBN 978-0-471-90320-8. Hogendijk, Jan P.; Sabra, Abdelhamid I. (2003). The Enterprise of Science in Islam: New Perspectives. MIT Press. ISBN 978-0-262-19482-2. Hill, Donald Routledge (1993). Islamic Science And Engineering. Edinburgh University Press. ISBN 978-0-7486-0455-5. Huff, Toby (1993). The Rise of Early Modern Science: Islam, China, and the West. Cambridge University Press. Kennedy, Edward S. (1983). Studies in the Islamic Exact Sciences. Syracuse University Press. ISBN 978-0-8156-6067-5. Lindberg, D. C.; Shank, M. H., eds. (2013). The Cambridge History of Science. Volume 2: Medieval Science. Cambridge University Press. (chapters 1–5 cover science, mathematics and medicine in Islam) Morelon, Régis; Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science. Vol. 2–3. Routledge. ISBN 978-0-415-02063-3. Saliba, George (2007). Islamic Science and the Making of the European Renaissance. MIT Press. ISBN 978-0-262-19557-7. == External links == "How Greek Science Passed to the Arabs" by De Lacy O'Leary Saliba, George. "Whose Science is Arabic Science in Renaissance Europe?". Habibi, Golareh. is there such a thing as Islamic science? the influence of Islam on the world of science, Science Creative Quarterly.
Wikipedia/Science_in_medieval_Islam
The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wave function collapse. This implies that all possible outcomes of quantum measurements are physically realized in different "worlds". The evolution of reality as a whole in MWI is rigidly deterministic: 9  and local. Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957. Bryce DeWitt popularized the formulation and named it many-worlds in the 1970s. In modern versions of many-worlds, the subjective appearance of wave function collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s. MWI is considered a mainstream interpretation of quantum mechanics, along with the other decoherence interpretations, the Copenhagen interpretation, and hidden variable theories such as Bohmian mechanics. The many-worlds interpretation implies that there are many parallel, non-interacting worlds. It is one of a number of multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realized. This is intended to resolve the measurement problem and thus some paradoxes of quantum theory, such as Wigner's friend,: 4–6  the EPR paradox: 462 : 118  and Schrödinger's cat, since every possible outcome of a quantum event exists in its own world. == Overview of the interpretation == The many-worlds interpretation's key idea is that the linear and unitary dynamics of quantum mechanics applies everywhere and at all times and so describes the whole universe. In particular, it models a measurement as a unitary transformation, a correlation-inducing interaction, between observer and object, without using a collapse postulate, and models observers as ordinary quantum-mechanical systems.: 35–38  This stands in contrast to the Copenhagen interpretation, in which a measurement is a "primitive" concept, not describable by unitary quantum mechanics; using the Copenhagen interpretation the universe is divided into a quantum and a classical domain, and the collapse postulate is central.: 29–30  In MWI there is no division between classical and quantum: everything is quantum and there is no collapse. MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of an uncountable or undefinable: 14–17  amount or number of increasingly divergent, non-communicating parallel universes or quantum worlds. Sometimes dubbed Everett worlds,: 234  each is an internally consistent and actualized alternative history or timeline. The many-worlds interpretation uses decoherence to explain the measurement process and the emergence of a quasi-classical world. Wojciech H. Zurek, one of decoherence theory's pioneers, said: "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected." Zurek emphasizes that his work does not depend on a particular interpretation. 
The many-worlds interpretation shares many similarities with the decoherent histories interpretation, which also uses decoherence to explain the process of measurement or wave function collapse.: 9–11  MWI treats the other histories or worlds as real, since it regards the universal wave function as the "basic physical entity": 455  or "the fundamental entity, obeying at all times a deterministic wave equation".: 115  The decoherent histories interpretation, on the other hand, needs only one of the histories (or worlds) to be real.: 10  Several authors, including Everett, John Archibald Wheeler and David Deutsch, call many-worlds a theory or metatheory, rather than just an interpretation.: 328  Everett argued that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world." Deutsch dismissed the idea that many-worlds is an "interpretation", saying that to call it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records".: 382  === Formulation === In his 1957 doctoral dissertation, Everett proposed that, rather than relying on external observation for analysis of isolated quantum systems, one could mathematically model an object, as well as its observers, as purely physical systems within the mathematical framework developed by Paul Dirac, John von Neumann, and others, discarding altogether the ad hoc mechanism of wave function collapse. === Relative state === Everett's original work introduced the concept of a relative state. Two (or more) subsystems, after a general interaction, become correlated, or as is now said, entangled. Everett noted that such entangled systems can be expressed as the sum of products of states, where the two or more subsystems are each in a state relative to each other. After a measurement or observation one of the pair (or triple, etc.) is the measured, object or observed system, and one other member is the measuring apparatus (which may include an observer) having recorded the state of the measured system. Each product of subsystem states in the overall superposition evolves over time independently of other products. Once the subsystems interact, their states have become correlated or entangled and can no longer be considered independent. In Everett's terminology, each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted. In the example of Schrödinger's cat, after the box is opened, the entangled system is the cat, the poison vial and the observer. One relative triple of states would be the alive cat, the unbroken vial and the observer seeing an alive cat. Another relative triple of states would be the dead cat, the broken vial and the observer seeing a dead cat. In the example of a measurement of a continuous variable (e.g., position q) the object-observer system decomposes into a continuum of pairs of relative states: the object system's relative state becomes a Dirac delta function each centered on a particular value of q and the corresponding observer relative state representing an observer having recorded the value of q.: 57–64  The states of the pairs of relative states are, post measurement, correlated with each other. In Everett's scheme, there is no collapse; instead, the Schrödinger equation, or its quantum field theory, relativistic analog, holds all the time, everywhere. 
An observation or measurement is modeled by applying the wave equation to the entire system, comprising the object being observed and the observer. One consequence is that every observation causes the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches. Thus the process of measurement or observation, or any correlation-inducing interaction, splits the system into sets of relative states, where each set of relative states, forming a branch of the universal wave function, is consistent within itself, and all future measurements (including by multiple observers) will confirm this consistency. === Renamed many-worlds === Everett had referred to the combined observer–object system as split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a branching tree, where each branch is a set of all the states relative to each other. Bryce DeWitt popularized Everett's work with a series of publications calling it the Many Worlds Interpretation. Focusing on the splitting process, DeWitt introduced the term "world" to describe a single branch of that tree, which is a consistent history. All observations or measurements within any branch are consistent within themselves. Since many observation-like events have happened and are constantly happening, Everett's model implies that there are an enormous and growing number of simultaneously existing states or "worlds". === Properties === MWI removes the observer-dependent role in the quantum measurement process by replacing wave function collapse with the established mechanism of quantum decoherence. As the observer's role lies at the heart of all "quantum paradoxes" such as the EPR paradox and von Neumann's "boundary problem", this provides a clearer and easier approach to their resolution. Since the Copenhagen interpretation requires the existence of a classical domain beyond the one described by quantum mechanics, it has been criticized as inadequate for the study of cosmology. While there is no evidence that Everett was inspired by issues of cosmology,: 7  he developed his theory with the explicit goal of allowing quantum mechanics to be applied to the universe as a whole, hoping to stimulate the discovery of new phenomena. This hope has been realized in the later development of quantum cosmology. MWI is a realist, deterministic and local theory. It achieves this by removing wave function collapse, which is indeterministic and nonlocal, from the deterministic and local equations of quantum theory. MWI (like other, broader multiverse theories) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe. MWI depends crucially on the linearity of quantum mechanics, which underpins the superposition principle. If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds is invalid. All quantum field theories are linear and compatible with the MWI, a point Everett emphasized as a motivation for the MWI. While quantum gravity or string theory may be non-linear in this respect, there is as yet no evidence of this. Weingarten and Taylor & McCulloch have made separate proposals for how to define wavefunction branches in terms of quantum circuit complexity. 
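The dependence on linearity noted above can be made explicit. A minimal sketch of an ideal, von Neumann style measurement interaction, using a hypothetical apparatus "ready" state and record states introduced only for illustration: if the unitary evolution U copies each basis state |i⟩ of the object into an apparatus record,

U\big(|i\rangle \otimes |\text{ready}\rangle\big) = |i\rangle \otimes |\text{records } i\rangle,

then linearity alone forces a superposed object to produce a superposition of records,

U\Big(\sum_i c_i\,|i\rangle \otimes |\text{ready}\rangle\Big) = \sum_i c_i\,|i\rangle \otimes |\text{records } i\rangle,

with each term on the right constituting one branch. No collapse postulate is invoked, and a non-linear theory would not license this step.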
=== Alternative to wavefunction collapse === As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) pass through the double slit, a calculation assuming wavelike behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves. Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of "collapse" in which an indeterminate quantum system would probabilistically collapse onto, or select, just one determinate outcome to "explain" this phenomenon of observation. Wave function collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable. Everett's PhD work provided such an interpretation. He argued that for a composite system—such as a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle)—the claim that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled: we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wave function collapse) the notion of a relativity of states. Everett noticed that the unitary, deterministic dynamics alone entailed that after an observation is made each element of the quantum superposition of the combined subject–object wave function contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wave function collapse has occurred,: 67, 78  which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wave function's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory: that the theory should define what is observed, not for the observables to define the theory.) Since the wave function appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wave function collapse from the theory.: 8  === Testability === In 1985, David Deutsch proposed a variant of the Wigner's friend thought experiment as a test of many-worlds versus the Copenhagen interpretation. It consists of an experimenter (Wigner's friend) making a measurement on a quantum system in an isolated laboratory, and another experimenter (Wigner) who would make a measurement on the first one. 
According to the many-worlds theory, the first experimenter would end up in a macroscopic superposition of seeing one result of the measurement in one branch, and another result in another branch. The second experimenter could then interfere these two branches in order to test whether it is in fact in a macroscopic superposition or has collapsed into a single branch, as predicted by the Copenhagen interpretation. Since then Lockwood, Vaidman, and others have made similar proposals, which require placing macroscopic objects in a coherent superposition and interfering them, a task currently beyond experimental capability. == Probability and the Born rule == Since the many-worlds interpretation's inception, physicists have been puzzled about the role of probability in it. As put by Wallace, there are two facets to the question: the incoherence problem, which asks why we should assign probabilities at all to outcomes that are certain to occur in some worlds, and the quantitative problem, which asks why the probabilities should be given by the Born rule. Everett tried to answer these questions in the paper that introduced many-worlds. To address the incoherence problem, he argued that an observer who makes a sequence of measurements on a quantum system will in general have an apparently random sequence of results in their memory, which justifies the use of probabilities to describe the measurement process.: 69–70  To address the quantitative problem, Everett proposed a derivation of the Born rule based on the properties that a measure on the branches of the wave function should have.: 70–72  His derivation has been criticized as relying on unmotivated assumptions. Since then several other derivations of the Born rule in the many-worlds framework have been proposed. There is no consensus on whether this has been successful. === Frequentism === DeWitt and Graham and Farhi et al., among others, have proposed derivations of the Born rule based on a frequentist interpretation of probability. They try to show that in the limit of uncountably many measurements, no worlds would have relative frequencies that didn't match the probabilities given by the Born rule, but these derivations have been shown to be mathematically incorrect. === Decision theory === A decision-theoretic derivation of the Born rule was produced by David Deutsch (1999) and refined by Wallace and Saunders. They consider an agent who takes part in a quantum gamble: the agent makes a measurement on a quantum system, branches as a consequence, and each of the agent's future selves receives a reward that depends on the measurement result. The agent uses decision theory to evaluate the price they would pay to take part in such a gamble, and concludes that the price is given by the utility of the rewards weighted according to the Born rule. Some reviews have been positive, although these arguments remain highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes. For example, a New Scientist story on a 2007 conference about Everettian interpretations quoted physicist Andy Albrecht as saying, "This work will go down as one of the most important developments in the history of science." In contrast, the philosopher Huw Price, also attending the conference, found the Deutsch–Wallace–Saunders approach fundamentally flawed. 
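For reference, the rule whose status is at issue in all of these derivations can be stated in its standard textbook form (the notation here is generic and not taken from any of the papers cited): if the pre-measurement state is |\psi\rangle = \sum_i c_i\,|i\rangle in the measured basis, the Born rule assigns outcome i the probability

\Pr(i) = |c_i|^2 = |\langle i|\psi\rangle|^2, \qquad \sum_i |c_i|^2 = 1.

What is disputed is not the empirical adequacy of this formula but why a branching observer, all of whose successors are certain to exist, should weight their expectations by exactly these squared amplitudes.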
=== Symmetries and invariance === In 2005, Zurek produced a derivation of the Born rule based on the symmetries of entangled states; Schlosshauer and Fine argue that Zurek's derivation is not rigorous, as it does not define what probability is and has several unstated assumptions about how it should behave. In 2016, Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman, proposed a similar approach based on self-locating uncertainty. In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. The Sebens–Carroll approach has been criticized by Adrian Kent, and Vaidman does not find it satisfactory. === Branch counting === In 2021, Simon Saunders produced a branch counting derivation of the Born rule. The crucial feature of this approach is to define the branches so that they all have the same magnitude or 2-norm. The ratios of the numbers of branches thus defined give the probabilities of the various outcomes of a measurement, in accordance with the Born rule. == Preferred basis problem == As originally formulated by Everett and DeWitt, the many-worlds interpretation had a privileged role for measurements: they determined which basis of a quantum system would give rise to the eponymous worlds. Without this the theory was ambiguous, as a quantum state can equally well be described (e.g.) as having a well-defined position or as being a superposition of two delocalized states. The assumption is that the preferred basis to use is the one which assigns a unique measurement outcome to each world. This special role for measurements is problematic for the theory, as it contradicts Everett and DeWitt's goal of having a reductionist theory and undermines their criticism of the ill-defined measurement postulate of the Copenhagen interpretation. This is known today as the preferred basis problem. The preferred basis problem has been solved, according to Saunders and Wallace, among others, by incorporating decoherence into the many-worlds theory. In this approach, the preferred basis does not have to be postulated, but rather is identified as the basis stable under environmental decoherence. In this way measurements no longer play a special role; rather, any interaction that causes decoherence causes the world to split. Since decoherence is never complete, there will always remain some infinitesimal overlap between two worlds, making it arbitrary whether a pair of worlds has split or not. Wallace argues that this is not problematic: it only shows that worlds are not a part of the fundamental ontology, but rather of the emergent ontology, where these approximate, effective descriptions are routine in the physical sciences. Since in this approach the worlds are derived, it follows that they must be present in any other interpretation of quantum mechanics that does not have a collapse mechanism, such as Bohmian mechanics. This approach to deriving the preferred basis has been criticized as creating circularity with derivations of probability in the many-worlds interpretation, as decoherence theory depends on probability and probability depends on the ontology derived from decoherence. 
Wallace contends that decoherence theory depends not on probability but only on the notion that one is allowed to do approximations in physics.: 253–254  == History == MWI originated in Everett's Princeton University PhD thesis "The Theory of the Universal Wave Function", developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state"; Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt, who was responsible for the wider popularization of Everett's theory, which had been largely ignored for a decade after publication in 1957. Everett's proposal was not without precedent. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". According to David Deutsch, this is the earliest known reference to many-worlds; Jeffrey A. Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger. Schrödinger's writings from the period also contain elements resembling the modal interpretation originated by Bas van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wave function as physical and treating it as information became interchangeable. Leon Cooper and Deborah Van Vechten developed a very similar approach before reading Everett's work. Zeh also came to the same conclusions as Everett before reading his work, then built a new theory of quantum decoherence based on these ideas. According to people who knew him, Everett believed in the literal reality of the other quantum worlds. His son and wife reported that he "never wavered in his belief over his many-worlds theory". In their detailed review of Everett's work, Osnaghi, Freitas, and Freire Jr. note that Everett consistently used quotes around "real" to indicate a meaning within scientific practice.: 107  == Reception == MWI's initial reception was overwhelmingly negative, in the sense that it was ignored, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory. Everett had already left academia in 1957, never to return, and in 1980, Wheeler disavowed the theory. === Support === One of the strongest longtime advocates of MWI is David Deutsch. According to him, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. 
In a more practical vein, in one of the earliest papers on quantum computing, Deutsch suggested that parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". He also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin. === Equivocal === Philosophers of science James Ladyman and Don Ross say that MWI could be true, but do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy. === Rejection === Some scientists consider some aspects of MWI to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann worked toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists find MWI too extreme, though it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse". Roger Penrose argues that the idea is flawed because it is based on an oversimplified version of quantum mechanics that does not account for gravity. In his view, applying conventional quantum mechanics to the universe implies the MWI, but the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics. According to Penrose, "the rules must change when gravity is involved". He further asserts that gravity helps anchor reality and "blurry" events have only one allowable outcome: "electrons, atoms, molecules, etc., are so minute that they require almost no amount of energy to maintain their gravity, and therefore their overlapping states. They can stay in that state forever, as described in standard quantum theory". On the other hand, "in the case of large objects, the duplicate states disappear in an instant due to the fact that these objects create a large gravitational field". Philosopher of science Robert P. Crease says that MWI is "one of the most implausible and unrealistic ideas in the history of science" because it means that everything conceivable happens. Science writer Philip Ball calls MWI's implications fantasies, since "beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of 'just supposing'". Theoretical physicist Gerard 't Hooft also dismisses the idea: "I do not believe that we have to live with the many-worlds interpretation. 
Indeed, it would be a stupendous number of parallel worlds, which are only there because physicists couldn't decide which of them is real." Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated. === Polls === A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true". Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations." In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory", Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people ... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'" A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored. A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen; the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll. == Speculative implications == DeWitt has said that Everett, Wheeler, and Graham "do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down." Tegmark affirmed that absurd or highly unlikely events are rare but inevitable under MWI: "Things inconsistent with the laws of physics will never happen—everything else will ... it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely." David Deutsch speculates in his book The Beginning of Infinity that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics. According to Ladyman and Ross, many seemingly physically plausible but unrealized possibilities, such as those discussed in other scientific fields, generally have no counterparts in other branches, because they are in fact incompatible with the universal wave function. According to Carroll, human decision-making, contrary to common misconceptions, is best thought of as a classical process, not a quantum one, because it works on the level of neurochemistry rather than fundamental particles. 
Human decisions do not cause the world to branch into equally realized outcomes; even for subjectively difficult decisions, the "weight" of realized outcomes is almost entirely concentrated in a single branch.: 214–216  Quantum suicide is a thought experiment in quantum mechanics and the philosophy of physics that can purportedly distinguish between the Copenhagen interpretation of quantum mechanics and the many-worlds interpretation by a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide. Most experts believe the experiment would not work in the real world, because the world with the surviving experimenter has a lower "measure" than the world before the experiment, making it less likely that the experimenter will experience their survival.: 371  == See also == == Notes == == References == == Further reading == Peter Byrne, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family, Oxford University Press, 2010. Jeffrey A. Barrett and Peter Byrne, eds., "The Everett Interpretation of Quantum Mechanics: Collected Works 1955–1980 with Commentary", Princeton University Press, 2012. Julian Brown, Minds, Machines, and the Multiverse, Simon & Schuster, 2000, ISBN 0-684-81481-1 Sean M. Carroll, Something deeply hidden, Penguin Random House, (2019) Paul C.W. Davies, Other Worlds, (1980) ISBN 0-460-04400-1 Osnaghi, Stefano; Freitas, Fabio; Olival Freire, Jr (2009). "The Origin of the Everettian Heresy" (PDF). Studies in History and Philosophy of Modern Physics. 40 (2): 97–123. Bibcode:2009SHPMP..40...97O. CiteSeerX 10.1.1.397.3933. doi:10.1016/j.shpsb.2008.10.002. Archived from the original (PDF) on 2016-05-28. Retrieved 2009-08-07. A study of the painful three-way relationship between Hugh Everett, John A Wheeler and Niels Bohr and how this affected the early development of the many-worlds theory. Vaidman, Lev, ed. (2024). The Many-Worlds Interpretation of Quantum Mechanics. MDPI. ISBN 978-3-7258-1070-3. David Wallace, Worlds in the Everett Interpretation, Studies in History and Philosophy of Modern Physics, 33, (2002), pp. 637–661, arXiv:quant-ph/0103092 John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton University Press, (1983), ISBN 0-691-08316-9 == External links == "Everettian Interpretations of Quantum Mechanics". Internet Encyclopedia of Philosophy. Everett's Relative-State Formulation of Quantum Mechanics – Jeffrey A. Barrett's article on Everett's formulation of quantum mechanics in the Stanford Encyclopedia of Philosophy. Many-Worlds Interpretation of Quantum Mechanics – Lev Vaidman's article on the many-worlds interpretation of quantum mechanics in the Stanford Encyclopedia of Philosophy. Hugh Everett III Manuscript Archive (UC Irvine) – Jeffrey A. Barrett, Peter Byrne, and James O. Weatherall (eds.). Henry Stapp's critique of MWI, focusing on the basis problem Canadian Journal of Physics 80, 1043–1052 (2002). Scientific American report on Many Worlds and Hugh Everett.
Wikipedia/Many-worlds_interpretation_of_quantum_mechanics
The B-theory of time, also called the "tenseless theory of time", is one of two positions regarding the temporal ordering of events in the philosophy of time. B-theorists argue that the flow of time is only a subjective illusion of human consciousness, that the past, present, and future are equally real, and that time is tenseless: temporal becoming is not an objective feature of reality. Therefore, there is nothing privileged about the present, ontologically speaking. The B-theory is derived from a distinction drawn by J. M. E. McTaggart between A series and B series. The B-theory is often drawn upon in theoretical physics, and is seen in theories such as eternalism. == Origin of terms == The terms A-theory and B-theory, first coined by Richard M. Gale in 1966, derive from Cambridge philosopher J. M. E. McTaggart's analysis of time and change in "The Unreality of Time" (1908), in which events are ordered via a tensed A-series or a tenseless B-series. It is popularly assumed that the A-theory represents time like an A-series, while the B-theory represents time like a B-series. Events (or "times"), McTaggart observed, may be characterized in two distinct but related ways. On the one hand they can be characterized as past, present or future, normally indicated in natural languages such as English by the verbal inflection of tenses or auxiliary adverbial modifiers. Alternatively, events may be described as earlier than, simultaneous with, or later than others. Philosophers are divided as to whether the tensed or tenseless mode of expressing temporal fact is fundamental. Some philosophers have criticised hybrid theories, where one holds a tenseless view of time but asserts that the present has special properties, as falling foul of McTaggart's paradox. For a thorough discussion of McTaggart's paradox, see R. D. Ingthorsson (2016). The debate between A-theorists and B-theorists is a continuation of a metaphysical dispute reaching back to the ancient Greek philosophers Heraclitus and Parmenides. Parmenides thought that reality is timeless and unchanging. Heraclitus, in contrast, believed that the world is a process of ceaseless change or flux. Reality for Heraclitus is dynamic and ephemeral. Indeed, the world is so fleeting, according to Heraclitus, that it is impossible to step twice into the same river. The metaphysical issues that continue to divide A-theorists and B-theorists concern the reality of the past, the reality of the future, and the ontological status of the present. == B-theory in metaphysics == The difference between A-theorists and B-theorists is often described as a dispute about temporal passage or 'becoming' and 'progressing'. B-theorists argue that this notion is purely psychological. Many A-theorists argue that in rejecting temporal 'becoming', B-theorists reject time's most vital and distinctive characteristic. It is common (though not universal) to identify A-theorists' views with belief in temporal passage. Another way to characterise the distinction revolves around what is known as the principle of temporal parity, the thesis that contrary to what appears to be the case, all times really exist in parity. A-theory (and especially presentism) denies that all times exist in parity, while B-theory insists all times exist in parity. B-theorists such as D. H. Mellor and J. J. C. 
Smart wish to eliminate all talk of past, present and future in favour of a tenseless ordering of events, believing the past, present, and future to be equally real, opposing the idea that they are irreducible foundations of temporality. B-theorists also argue that the past, present, and future feature very differently in deliberation and reflection. For example, we remember the past and anticipate the future, but not vice versa. B-theorists maintain that the fact that we know much less about the future simply reflects an epistemological difference between the future and the past: the future is no less real than the past; we just know less about it. == Opposition == === Irreducibility of tense === Earlier B-theorists argued that one could paraphrase tensed sentences (such as "the sun is now shining", uttered on September 28) into tenseless sentences (such as "on September 28, the sun shines") without loss of meaning. Later B-theorists argued that tenseless sentences could give the truth conditions of tensed sentences or their tokens. Quentin Smith argues that "now" cannot be reduced to descriptions of dates and times, because all date and time descriptions, and therefore truth conditionals, are relative to certain events. Tensed sentences, on the other hand, do not have such truth conditionals. The B-theorist could argue that "now" is reducible to a token-reflexive phrase such as "simultaneous with this utterance", yet Smith states that even such an argument fails to eliminate tense. One can think the statement "I am not uttering anything now", and such a statement would be true. The statement "I am not uttering anything simultaneous with this utterance" is self-contradictory, and cannot be true even when one thinks the statement. Finally, while tensed statements can express token-independent truth values, no token-reflexive statement can do so (by definition of the term "token-reflexive"). Smith claims that proponents of the B-theory argue that the inability to translate tensed sentences into tenseless sentences does not prove A-theory. Logician and philosopher Arthur Prior has also drawn a distinction between what he calls A-facts and B-facts. The latter are facts about tenseless relations, such as the fact that the year 2025 is 25 years later than the year 2000. The former are tensed facts, such as that the Jurassic age is in the past, or that the end of the universe is in the future. Prior asks the reader to imagine having a headache, and after the headache subsides, saying "thank goodness that's over." Prior argues that the B-theory cannot make sense of this sentence. It seems bizarre to be thankful that a headache is earlier than one's utterance, anymore than being thankful that the headache is later than one's utterance. Indeed, most people who say "thank goodness that's over" are not even thinking of their own utterance. Therefore, when people say "thank goodness that's over," they are thankful for an A-fact, and not a B-fact. Yet, A-facts are only possible on the A-theory of time. (See also: Further facts.) === Endurantism and perdurantism === Opponents also charge the B-theory with being unable to explain persistence of objects. The two leading explanations for this phenomenon are endurantism and perdurantism. According to the former, an object is wholly present at every moment of its existence. According to the latter, objects are extended in time and therefore have temporal parts. 
Hales and Johnson explain endurantism as follows: "something is an enduring object only if it is wholly present at each time in which it exists. An object is wholly present at a time if all of its parts co-exist at that time." Under endurantism, all objects must exist as wholes at each point in time, but an object such as a rotting fruit will have the property of being not rotten one day and being rotten on another. On eternalism, and hence the B-theory, it seems that one is committed to two conflicting states for the same object. The spacetime (Minkowskian) interpretation of relativity adds an additional problem for endurantism under B-theory. On the spacetime interpretation, an object may appear as a whole at its rest frame, but on an inertial frame, it will have proper parts at different positions, and therefore different parts at different times. Hence it will not exist as a whole at any time, contradicting endurantism. Opponents will then charge perdurantism with numerous difficulties of its own. First, it is controversial whether perdurantism can be formulated coherently. An object is defined as a collection of spatiotemporal parts, defined as pieces of a perduring object. If objects have temporal parts, this leads to difficulties. For example, the rotating discs argument asks the reader to imagine a world containing nothing more than a homogeneous spinning disk. Under endurantism, the same disc endures despite its rotations. The perdurantist supposedly has a difficult time explaining what it means for such a disc to have a determinate state of rotation. Temporal parts also seem to act unlike physical parts. A piece of chalk can be broken into two physical halves, but it seems nonsensical to talk about breaking it into two temporal halves. American epistemologist Roderick Chisholm argued that someone who hears the bird call "Bob White" knows "that his experience of hearing 'Bob' and his experience of hearing 'White' were not also had by two other things, each distinct from himself and from each other. The endurantist can explain the experience as "There exists an x such that x hears 'Bob' and then x hears 'White'" but the perdurantist cannot give such an account. Peter van Inwagen asks the reader to consider Descartes as a four-dimensional object that extends from 1596 to 1650. If Descartes had lived a much shorter life, he would have had a radically different set of temporal parts. This diminished Descartes, he argues, could not have been the same person on perdurantism, since their temporal extents and parts are so different. === First-person perspectives === Vincent Conitzer has argued against B-theory due to the existence of first-person perspectives and Benj Hellie's vertiginous question. He argues that arguments in favor of the A-theory of time are more effective as arguments for the combined position that A-theory is true and the "I" is metaphysically privileged from other perspectives. Caspar Hare has discussed similar ideas with the theories of egocentric presentism and perspectival realism. == Notes == == References == == External links == Markosian, Ned, 2002, "Time", Stanford Encyclopedia of Philosophy Arthur Prior, Stanford Encyclopedia of Philosophy
Wikipedia/B-theory_of_time
In philosophy, the philosophy of physics deals with conceptual and interpretational issues in physics, many of which overlap with research done by certain kinds of theoretical physicists. Historically, philosophers of physics have engaged with questions such as the nature of space, time, matter and the laws that govern their interactions, as well as the epistemological and ontological basis of the theories used by practicing physicists. The discipline draws upon insights from various areas of philosophy, including metaphysics, epistemology, and philosophy of science, while also engaging with the latest developments in theoretical and experimental physics. Contemporary work focuses on issues at the foundations of the three pillars of modern physics: Quantum mechanics: Interpretations of quantum theory, including the nature of quantum states, the measurement problem, and the role of observers. Implications of entanglement, nonlocality, and the quantum-classical relationship are also explored. Relativity: Conceptual foundations of special and general relativity, including the nature of spacetime, simultaneity, causality, and determinism. Compatibility with quantum mechanics, gravitational singularities, and philosophical implications of cosmology are also investigated. Statistical mechanics: Relationship between microscopic and macroscopic descriptions, interpretation of probability, origin of irreversibility and the arrow of time. Foundations of thermodynamics, role of information theory in understanding entropy, and implications for explanation and reduction in physics. Other areas of focus include the nature of physical laws, symmetries, and conservation principles; the role of mathematics; and philosophical implications of emerging fields like quantum gravity, quantum information, and complex systems. Philosophers of physics have argued that conceptual analysis clarifies foundations, interprets implications, and guides theory development in physics. == Philosophy of space and time == The existence and nature of space and time (or space-time) are central topics in the philosophy of physics. Issues include (1) whether space and time are fundamental or emergent, and (2) how space and time are operationally different from one another. === Time === In classical mechanics, time is taken to be a fundamental quantity (that is, a quantity which cannot be defined in terms of other quantities). However, certain theories such as loop quantum gravity claim that spacetime is emergent. As Carlo Rovelli, one of the founders of loop quantum gravity, has said: "No more fields on spacetime: just fields on fields". Time is defined via measurement—by its standard time interval. Currently, the standard time interval (called "conventional second", or simply "second") is defined as 9,192,631,770 oscillations of a hyperfine transition in the 133 caesium atom. (ISO 31-1). What time is and how it works follows from the above definition. Time then can be combined mathematically with the fundamental quantities of space and mass to define concepts such as velocity, momentum, energy, and fields. Both Isaac Newton and Galileo Galilei, as well as most people up until the 20th century, thought that time was the same for everyone everywhere. The modern conception of time is based on Albert Einstein's theory of relativity and Hermann Minkowski's spacetime, in which rates of time run differently in different inertial frames of reference, and space and time are merged into spacetime. 
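The frame dependence of time rates mentioned above is quantitative. As a standard illustration (textbook form, not stated in this article), special relativity predicts that a clock moving at speed v relative to an inertial observer is measured to run slow by the Lorentz factor:

\Delta t = \gamma\,\Delta\tau, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},

where \Delta\tau is the proper time read off the moving clock, \Delta t is the corresponding interval in the observer's frame, and c is the speed of light.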
Einstein's general relativity as well as the redshift of the light from receding distant galaxies indicate that the entire Universe and possibly space-time itself began about 13.8 billion years ago in the Big Bang. Einstein's theory of special relativity mostly (though not universally) made theories of time where there is something metaphysically special about the present seem much less plausible, as the reference-frame-dependence of time seems to not allow the idea of a privileged present moment. === Space === Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because there is nothing more fundamental known at present. Thus, similar to the definition of other fundamental quantities (like time and mass), space is defined via measurement. Currently, the standard space interval, called a standard metre or simply metre, is defined as the distance traveled by light in a vacuum during a time interval of 1/299792458 of a second (exact). In classical physics, space is a three-dimensional Euclidean space where any position can be described using three coordinates and parameterised by time. Special and general relativity use four-dimensional spacetime rather than three-dimensional space; and currently there are many speculative theories which use more than three spatial dimensions. == Philosophy of quantum mechanics == Quantum mechanics is a large focus of contemporary philosophy of physics, specifically concerning the correct interpretation of quantum mechanics. Very broadly, much of the philosophical work that is done in quantum theory is trying to make sense of superposition states: the property that particles seem to not just be in one determinate position at one time, but are somewhere 'here', and also 'there' at the same time. Such a radical view turns many common sense metaphysical ideas on their head. Much of contemporary philosophy of quantum mechanics aims to make sense of what the very empirically successful formalism of quantum mechanics tells us about the physical world. === Uncertainty principle === The uncertainty principle is a mathematical relation asserting an upper limit to the accuracy of the simultaneous measurement of any pair of conjugate variables, e.g. position and momentum. In the formalism of operator notation, this limit is the evaluation of the commutator of the variables' corresponding operators. The uncertainty principle arose as an answer to the question: How does one measure the location of an electron around a nucleus if an electron is a wave? When quantum mechanics was developed, it was seen to be a relation between the classical and quantum descriptions of a system using wave mechanics. === "Locality" and hidden variables === Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are putative properties of quantum particles that are not included in the theory but nevertheless affect the outcome of experiments. 
In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local." The term is broadly applied to a number of different derivations, the first of which was introduced by Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory. By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also influenced the second particle faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables". Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles can carry non-classical correlations no matter how widely they ever become separated. Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with any local hidden variable theory. The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. 
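One widely used example of such a constraint is the CHSH inequality, given here for concreteness (the article does not single out a particular form). With two measurement settings a, a' on one particle and b, b' on the other, each outcome being +1 or -1, every local hidden-variable model satisfies

S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2,

where E(\cdot,\cdot) is the correlation of the paired outcomes, whereas quantum mechanics predicts values up to |S| = 2\sqrt{2} for suitable measurements on an entangled pair; Bell tests measure S and check which bound is respected.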
While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved. === Interpretations of quantum mechanics === In March 1927, working in Niels Bohr's institute, Werner Heisenberg formulated the principle of uncertainty thereby laying the foundation of what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg had been studying the papers of Paul Dirac and Pascual Jordan. He discovered a problem with measurement of basic variables in the equations. His analysis showed that uncertainties, or imprecisions, always turned up if one tried to measure the position and the momentum of a particle at the same time. Heisenberg concluded that these uncertainties or imprecisions in the measurements were not the fault of the experimenter, but fundamental in nature and are inherent mathematical properties of operators in quantum mechanics arising from definitions of these operators. The Copenhagen interpretation is somewhat loosely defined, as many physicists and philosophers of physics have advanced similar but not identical views of quantum mechanics. It is principally associated with Heisenberg and Bohr, despite their philosophical differences. Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of any arbitrary factors in the physicist's mind.: 85–90  The many-worlds interpretation of quantum mechanics by Hugh Everett III claims that the wave-function of a quantum system is telling us claims about the reality of that physical system. It denies wavefunction collapse, and claims that superposition states should be interpreted literally as describing the reality of many-worlds where objects are located, and not simply indicating the indeterminacy of those variables. This is sometimes argued as a corollary of scientific realism, which states that scientific theories aim to give us literally true descriptions of the world. One issue for the Everett interpretation is the role that probability plays on this account. The Everettian account is completely deterministic, whereas probability seems to play an ineliminable role in quantum mechanics. Contemporary Everettians have argued that one can get an account of probability that follows the Born rule through certain decision-theoretic proofs, but there is as yet no consensus about whether any of these proofs are successful. Physicist Roland Omnès noted that it is impossible to experimentally differentiate between Everett's view, which says that as the wave-function decoheres into distinct worlds, each of which exists equally, and the more traditional view that says that a decoherent wave-function leaves only one unique real result. Hence, the dispute between the two views represents a great "chasm". "Every characteristic of reality has reappeared in its reconstruction by our theoretical model; every feature except one: the uniqueness of facts." 
== Philosophy of thermal and statistical physics == The philosophy of thermal and statistical physics is concerned with the foundational issues and conceptual implications of thermodynamics and statistical mechanics. These branches of physics deal with the macroscopic behavior of systems comprising a large number of microscopic entities, such as particles, and the nature of laws that emerge from these systems like irreversibility and entropy. Interest of philosophers in statistical mechanics first arose from the observation of an apparent conflict between the time-reversal symmetry of fundamental physical laws and the irreversibility observed in thermodynamic processes, known as the arrow of time problem. Philosophers have sought to understand how the asymmetric behavior of macroscopic systems, such as the tendency of heat to flow from hot to cold bodies, can be reconciled with the time-symmetric laws governing the motion of individual particles. Another key issue is the interpretation of probability in statistical mechanics, which is primarily concerned with the question of whether probabilities in statistical mechanics are epistemic, reflecting our lack of knowledge about the precise microstate of a system, or ontic, representing an objective feature of the physical world. The epistemic interpretation, also known as the subjective or Bayesian view, holds that probabilities in statistical mechanics are a measure of our ignorance about the exact state of a system. According to this view, we resort to probabilistic descriptions only due to the practical impossibility of knowing the precise properties of all its micro-constituents, like the positions and momenta of particles. As such, the probabilities are not objective features of the world but rather arise from our ignorance. In contrast, the ontic interpretation, also called the objective or frequentist view, asserts that probabilities in statistical mechanics are real, physical properties of the system itself. Proponents of this view argue that the probabilistic nature of statistical mechanics is not merely a reflection of our ignorance but an intrinsic feature of the physical world, and that even if we had complete knowledge of the microstate of a system, the macroscopic behavior would still be best described by probabilistic laws. == History == === Aristotelian physics === Aristotelian physics viewed the universe as a sphere with a center. Matter, composed of the classical elements: earth, water, air, and fire; sought to go down towards the center of the universe, the center of the Earth, or up, away from it. Things in the aether such as the Moon, the Sun, planets, or stars circled the center of the universe. Movement is defined as change in place, i.e. space. === Newtonian physics === The implicit axioms of Aristotelian physics with respect to movement of matter in space were superseded in Newtonian physics by Newton's first law of motion.Every body perseveres in its state either of rest or of uniform motion in a straight line, except insofar as it is compelled to change its state by impressed forces. "Every body" includes the Moon, and an apple; and includes all types of matter, air as well as water, stones, or even a flame. Nothing has a natural or inherent motion. Absolute space being three-dimensional Euclidean space, infinite and without a center. Being "at rest" means being at the same place in absolute space over time. 
The topology and affine structure of space must permit movement in a straight line at a uniform velocity; thus both space and time must have definite, stable dimensions. === Leibniz === Gottfried Wilhelm Leibniz, 1646–1716, was a contemporary of Newton. He contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695. Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense. He anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." == See also == == References == == Further reading == David Albert, 1994. Quantum Mechanics and Experience. Harvard Univ. Press. John D. Barrow and Frank J. Tipler, 1986. The Cosmological Anthropic Principle. Oxford Univ. Press. Beisbart, C. and S. Hartmann, eds., 2011. "Probabilities in Physics". Oxford Univ. Press. John S. Bell, 2004 (1987), Speakable and Unspeakable in Quantum Mechanics. Cambridge Univ. Press. David Bohm, 1980. Wholeness and the Implicate Order. Routledge. Nick Bostrom, 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge. Thomas Brody, 1993, Ed. by Luis de la Peña and Peter E. Hodgson The Philosophy Behind Physics Springer ISBN 3-540-55914-0 Harvey Brown, 2005. Physical Relativity. Space-time structure from a dynamical perspective. Oxford Univ. Press. Butterfield, J., and John Earman, eds., 2007. Philosophy of Physics, Parts A and B. Elsevier. Craig Callender and Nick Huggett, 2001. Physics Meets Philosophy at the Planck Scale. Cambridge Univ. Press. David Deutsch, 1997. The Fabric of Reality. London: The Penguin Press. Bernard d'Espagnat, 1989. Reality and the Physicist. Cambridge Univ. Press. Trans. of Une incertaine réalité; le monde quantique, la connaissance et la durée. --------, 1995. Veiled Reality. Addison-Wesley. --------, 2006. On Physics and Philosophy. Princeton Univ. Press. Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton Univ. Press. --------, 1999. Quantum Philosophy. Princeton Univ. Press. Huw Price, 1996. Time's Arrow and Archimedes's Point. Oxford Univ. Press. Lawrence Sklar, 1992. Philosophy of Physics. Westview Press. ISBN 0-8133-0625-6, ISBN 978-0-8133-0625-4 Victor Stenger, 2000. Timeless Reality. Prometheus Books. Carl Friedrich von Weizsäcker, 1980. The Unity of Nature. Farrar Straus & Giroux. Werner Heisenberg, 1971. Physics and Beyond: Encounters and Conversations. Harper & Row (World Perspectives series), 1971. William Berkson, 1974. Fields of Force. Routledge and Kegan Paul, London. ISBN 0-7100-7626-6 Encyclopædia Britannica, Philosophy of Physics, David Z. 
Albert == External links == Stanford Encyclopedia of Philosophy: "Absolute and Relational Theories of Space and Motion"—Nick Huggett and Carl Hoefer "Being and Becoming in Modern Physics"—Steven Savitt "Boltzmann's Work in Statistical Physics"—Jos Uffink "Conventionality of Simultaneity"—Allen Janis "Early Philosophical Interpretations of General Relativity"—Thomas A. Ryckman "Everett's Relative-State Formulation of Quantum Mechanics"—Jeffrey A. Barrett "Experiments in Physics"—Allan Franklin "Holism and Nonseparability in Physics"—Richard Healey "Intertheory Relations in Physics"—Robert Batterman "Naturalism"—David Papineau "Philosophy of Statistical Mechanics"—Lawrence Sklar "Physicalism"—Daniel Stoljar "Quantum Mechanics"—Jenann Ismael "Reichenbach's Common Cause Principle"—Frank Arntzenius "Structural Realism"—James Ladyman "Structuralism in Physics"—Heinz-Juergen Schmidt "Supertasks"—JB Manchak and Bryan Roberts "Symmetry and Symmetry Breaking"—Katherine Brading and Elena Castellani "Thermodynamic Asymmetry in Time"—Craig Callender "Time"—Ned Markosian "Time Machines"—John Earman, Chris Wüthrich, and JB Manchak "Uncertainty principle"—Jan Hilgevoord and Jos Uffink "The Unity of Science"—Jordi Cat
Wikipedia/Philosophical_interpretation_of_classical_physics
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum mechanics might correspond to experienced reality. Quantum mechanics has held up to rigorous and extremely precise tests in an extraordinarily broad range of experiments. However, there exist a number of contending schools of thought over its interpretation. These views on interpretation differ on such fundamental questions as whether quantum mechanics is deterministic or stochastic, local or non-local, which elements of quantum mechanics can be considered real, and what the nature of measurement is, among other matters. While some variation of the Copenhagen interpretation is commonly presented in textbooks, many other interpretations have been developed. Despite nearly a century of debate and experiment, no consensus has been reached among physicists and philosophers of physics concerning which interpretation best "represents" reality. == History == The definition of quantum theorists' terms, such as wave function and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across space, but Max Born reinterpreted the absolute square value of the wave function as the electron's probability density distributed across space;: 24–33  the Born rule, as it is now called, matched experiment, whereas Schrödinger's charge density view did not. The views of several early pioneers of quantum mechanics, such as Niels Bohr and Werner Heisenberg, are often grouped together as the "Copenhagen interpretation", though physicists and historians of physics have argued that this terminology obscures differences between the views so designated. Copenhagen-type ideas were never universally embraced, and challenges to a perceived Copenhagen orthodoxy gained increasing attention in the 1950s with the pilot-wave interpretation of David Bohm and the many-worlds interpretation of Hugh Everett III. The physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear." (Mermin also coined the saying "Shut up and calculate" to describe many physicists' attitude to quantum theory, a remark which is often misattributed to Richard Feynman.) As a rough guide to the development of the mainstream view during the 1990s and 2000s, a "snapshot" of opinions was collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011. The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), alongside the rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll." Some concepts originating from studies of interpretations have found more practical application in quantum information science. 
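Born's reinterpretation of the squared wave function as a probability density, mentioned above, can be illustrated with a short numerical sketch. The example below is not part of the original article; the two-level state and its amplitudes are arbitrary choices made only for illustration.

```python
import numpy as np

# A normalized two-level state |psi> = a|0> + b|1>, written as a complex vector.
# The amplitudes are arbitrary illustrative values, not taken from the article.
psi = np.array([1 + 1j, 2 - 1j], dtype=complex)
psi = psi / np.linalg.norm(psi)            # normalize so the probabilities sum to 1

# Born rule: the probability of each outcome is the absolute square of its amplitude.
probabilities = np.abs(psi) ** 2
print(probabilities, probabilities.sum())  # approx. [0.286 0.714], 1.0

# Repeating the measurement on many identically prepared systems reproduces
# these frequencies, which is how the rule is compared with experiment.
rng = np.random.default_rng(0)
outcomes = rng.choice(2, size=100_000, p=probabilities)
print(np.bincount(outcomes) / outcomes.size)
```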
== Interpretive challenges == Abstract, mathematical nature of quantum field theories: the mathematical structure of quantum mechanics is abstract and does not result in a single, clear interpretation of its quantities. Apparent indeterministic and irreversible processes: in classical field theory, a physical property at a given location in the field is readily derived. In most mathematical formulations of quantum mechanics, measurement (understood as an interaction with a given state) has a special role in the theory, as it is the sole process that can cause a nonunitary, irreversible evolution of the state. Role of the observer in determining outcomes. Copenhagen-type interpretations imply that the wavefunction is a calculational tool, and represents reality only immediately after a measurement performed by an observer. Everettian interpretations grant that all possible outcomes are real, and that measurement-type interactions cause a branching process in which each possibility is realised. Classically unexpected correlations between remote objects: entangled quantum systems, as illustrated in the EPR paradox, obey statistics that seem to violate principles of local causality by action at a distance. Complementarity of proffered descriptions: complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously. This implies the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). Like contextuality, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects. Contextual behaviour of systems locally: Quantum contextuality demonstrates that classical intuitions, in which properties of a system hold definite values independent of the manner of their measurement, fail even for local systems. Also, physical principles such as Leibniz's Principle of the identity of indiscernibles no longer apply in the quantum domain, signaling that most classical intuitions may be incorrect about the quantum world. == Influential interpretations == === Copenhagen interpretation === The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg. It is one of the oldest attitudes towards quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught. There is no definitive historical statement of what is the Copenhagen interpretation, and there were in particular fundamental disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed,: 133  while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process that imparts the classical behavior of "observation" or "measurement". Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states certain pairs of complementary properties cannot all be observed or measured simultaneously. 
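The claim quoted above, that the origin of complementarity lies in the non-commutativity of operators, can be made concrete with a small calculation. The following is an illustrative sketch only, using the Pauli matrices as the standard textbook example of incompatible observables.

```python
import numpy as np

# Pauli matrices X and Z: the textbook example of incompatible (complementary) observables.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A nonzero commutator means the two observables cannot share a full set of eigenstates,
# so no preparation assigns sharp values to both at once.
print(X @ Z - Z @ X)

# Their eigenbases are mutually unbiased: every overlap has magnitude 1/sqrt(2),
# so certainty about one observable implies maximal uncertainty about the other.
_, eigvecs_X = np.linalg.eigh(X)
_, eigvecs_Z = np.linalg.eigh(Z)
print(np.abs(eigvecs_X.conj().T @ eigvecs_Z))
```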
Moreover, properties only result from the act of "observing" or "measuring"; the theory avoids assuming definite values from unperformed experiments. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness.: 85–90  The statistical interpretation of wavefunctions due to Max Born differs sharply from Schrödinger's original intent, which was to have a theory with continuous time evolution and in which wavefunctions directly described physical reality.: 24–33  === Many worlds === The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment. More precisely, the parts of the wavefunction describing observers become increasingly entangled with the parts of the wavefunction describing their experiments. Although all possible outcomes of experiments continue to lie in the wavefunction's support, the times at which they become correlated with observers effectively "split" the universe into mutually unobservable alternate histories. === Quantum information theories === Quantum informational approaches have attracted growing support. They subdivide into two kinds. Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism. Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking. Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism. James Hartle writes, The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ... A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector ... becomes problematical only if it is believed that the state vector is an objective property of the system ... The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system. === Relational quantum mechanics === The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. 
Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but the relations between them. === QBism === QBism, which originally stood for "quantum Bayesianism", is an interpretation of quantum mechanics that takes an agent's actions and experiences as the central concerns of the theory. This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement. According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism. The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it. === Consistent histories === The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle). === Ensemble interpretation === The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system – for example, a single particle – but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. 
In the words of Einstein: The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems. The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University, author of the textbook Quantum Mechanics: A Modern Development. === De Broglie–Bohm theory === The de Broglie–Bohm theory of quantum mechanics (also known as the pilot wave theory) is a theory originally proposed by Louis de Broglie and later extended by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single spacetime, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden-variable theory, and by embracing explicit non-locality it is compatible with Bell's theorem. The measurement problem is resolved, since the particles have definite positions at all times. Collapse is explained as phenomenological. === Transactional interpretation === The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory. It describes the collapse of the wave function as resulting from a time-symmetric transaction between a possibility wave from the source to the receiver (the wave function) and a possibility wave from the receiver to the source (the complex conjugate of the wave function). This interpretation of quantum mechanics is unique in that it views not only the wave function as a real entity, but also the complex conjugate of the wave function, which appears in the Born rule for calculating the expected value of an observable, as real. === Consciousness causes collapse === Eugene Wigner argued that human experimenter consciousness (or perhaps even animal consciousness) was critical for the collapse of the wavefunction, but he later abandoned this interpretation after learning about quantum decoherence. Some specific proposals for consciousness-caused wave-function collapse have been shown to be unfalsifiable, and, more broadly, reasonable assumptions about consciousness lead to the same conclusion. === Quantum logic === Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics. === Modal interpretations of quantum theory === Modal interpretations of quantum mechanics were first conceived of in 1972 by Bas van Fraassen, in his paper "A formal approach to the philosophy of science". 
Van Fraassen introduced a distinction between a dynamical state, which describes what might be true about a system and which always evolves according to the Schrödinger equation, and a value state, which indicates what is actually true about a system at a given time. The term "modal interpretation" is now used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions, including proposals by Kochen, Dieks, Clifton, Dickson, and Bub. According to Michel Bitbol, Schrödinger's views on how to interpret quantum mechanics progressed through as many as four stages, ending with a non-collapse view that in some respects resembles the interpretations of Everett and van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as ontic and treating it as epistemic became interchangeable. === Time-symmetric theories === Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921. Several theories have been proposed that modify the equations of quantum mechanics to be symmetric with respect to time reversal. (See Wheeler–Feynman time-symmetric theory.) This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future. In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden-variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future. Not all advocates of time-symmetric causality favour modifying the unitary dynamics of standard quantum mechanics. Thus Lev Vaidman, a leading exponent of the two-state vector formalism, states that it dovetails well with Hugh Everett's many-worlds interpretation. === Other interpretations === As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed that have not made a significant scientific impact for whatever reason. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism. == Related concepts == Some ideas are discussed in the context of interpreting quantum mechanics but are not necessarily regarded as interpretations themselves. === Quantum Darwinism === Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system, in which the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. 
The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of twenty-five years, including pointer states, einselection, and decoherence. === Objective-collapse theories === Objective-collapse theories differ from the Copenhagen interpretation by regarding both the wave function and the process of collapse as ontologically objective (meaning these exist and occur independently of the observer). In objective theories, collapse occurs either randomly ("spontaneous localization") or when some physical threshold is reached, with observers having no special role. Thus, objective-collapse theories are realistic, indeterministic, no-hidden-variables theories. Standard quantum mechanics does not specify any mechanism of collapse; quantum mechanics would need to be extended if objective collapse is correct. The requirement for an extension means that objective-collapse theories are alternatives to quantum mechanics rather than interpretations of it. Examples include the Ghirardi–Rimini–Weber theory, the continuous spontaneous localization model, and the Penrose interpretation. == Comparisons == The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation. For another table comparing interpretations of quantum theory, see reference. No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality. Nevertheless, designing experiments that would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation as it was developed and argued by many people. == The silent approach == Although interpretational opinions are openly and widely discussed today, that was not always the case. A notable exponent of this tendency toward silence was Paul Dirac, who once wrote: "The interpretation of quantum mechanics has been dealt with by many authors, and I do not want to discuss it here. I want to deal with more fundamental things." This position is not uncommon among practitioners of quantum mechanics. Similarly, Richard Feynman wrote many popularizations of quantum mechanics without ever publishing about interpretation issues like quantum measurement. Others, like Nico van Kampen and Willis Lamb, have openly criticized non-orthodox interpretations of quantum mechanics. == See also == == References == == Sources == Bub, J.; Clifton, R. (1996). "A uniqueness theorem for interpretations of quantum mechanics". Studies in History and Philosophy of Modern Physics. 27B: 181–219. doi:10.1016/1355-2198(95)00019-4. Rudolf Carnap, 1939, "The interpretation of physics", in Foundations of Logic and Mathematics of the International Encyclopedia of Unified Science. Chicago, Illinois: University of Chicago Press. Dickson, M., 1994, "Wavefunction tails in the modal interpretation" in Hull, D., Forbes, M., and Burian, R., eds., Proceedings of the PSA 1994, 1: 366–376. East Lansing, Michigan: Philosophy of Science Association. --------, and Clifton, R., 1998, "Lorentz-invariance in modal interpretations" in Dieks, D. and Vermaas, P., eds., The Modal Interpretation of Quantum Mechanics. 
Dordrecht: Kluwer Academic Publishers: 9–48. Fuchs, Christopher, 2002, "Quantum Mechanics as Quantum Information (and only a little more)". arXiv:quant-ph/0205039 --------, and A. Peres, 2000, "Quantum theory needs no 'interpretation'", Physics Today. Herbert, N., 1985. Quantum Reality: Beyond the New Physics. New York: Doubleday. ISBN 0-385-23569-0. Hey, Anthony, and Walters, P., 2003. The New Quantum Universe, 2nd ed. Cambridge University Press. ISBN 0-521-56457-3. Jackiw, Roman; Kleppner, D. (2000). "One Hundred Years of Quantum Physics". Science. 289 (5481): 893–898. arXiv:quant-ph/0008092. Bibcode:2000quant.ph..8092K. doi:10.1126/science.289.5481.893. PMID 17839156. S2CID 6604344. Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw-Hill. --------, 1974. The Philosophy of Quantum Mechanics. Wiley & Sons. Al-Khalili, 2003. Quantum: A Guide for the Perplexed. London: Weidenfeld & Nicolson. de Muynck, W. M., 2002. Foundations of quantum mechanics, an empiricist approach. Dordrecht: Kluwer Academic Publishers. ISBN 1-4020-0932-1. Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press. Karl Popper, 1963. Conjectures and Refutations. London: Routledge and Kegan Paul. The chapter "Three views Concerning Human Knowledge" addresses, among other things, instrumentalism in the physical sciences. Hans Reichenbach, 1944. Philosophic Foundations of Quantum Mechanics. University of California Press. Tegmark, Max; Wheeler, J. A. (2001). "100 Years of Quantum Mysteries". Scientific American. 284 (2): 68–75. Bibcode:2001SciAm.284b..68T. doi:10.1038/scientificamerican0201-68. S2CID 119375538. Bas van Fraassen, 1972, "A formal approach to the philosophy of science", in R. Colodny, ed., Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain. Univ. of Pittsburgh Press: 303–366. John A. Wheeler and Wojciech Hubert Zurek (eds), Quantum Theory and Measurement, Princeton, New Jersey: Princeton University Press, ISBN 0-691-08316-9, LoC QC174.125.Q38 1983. == Further reading == Almost all authors below are professional physicists. David Z Albert, 1992. Quantum Mechanics and Experience. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-74112-9. John S. Bell, 1987. Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press, ISBN 0-521-36869-3. The 2004 edition (ISBN 0-521-52338-9) includes two additional papers and an introduction by Alain Aspect. Dmitrii Ivanovich Blokhintsev, 1968. The Philosophy of Quantum Mechanics. D. Reidel Publishing Company. ISBN 90-277-0105-9. David Bohm, 1980. Wholeness and the Implicate Order. London: Routledge. ISBN 0-7100-0971-2. Adan Cabello (15 November 2004). "Bibliographic guide to the foundations of quantum mechanics and quantum information". arXiv:quant-ph/0012089. David Deutsch, 1997. The Fabric of Reality. London: Allen Lane. ISBN 0-14-027541-X; ISBN 0-7139-9061-9. Argues forcefully against instrumentalism. For general readers. F. J. Duarte (2014). Quantum Optics for Engineers. New York: CRC. ISBN 978-1439888537. Provides a pragmatic perspective on interpretations. For general readers. Bernard d'Espagnat, 1976. Conceptual Foundation of Quantum Mechanics, 2nd ed. Addison Wesley. ISBN 0-8133-4087-X. Bernard d'Espagnat, 1983. In Search of Reality. Springer. ISBN 0-387-11399-1. Bernard d'Espagnat, 2003. Veiled Reality: An Analysis of Quantum Mechanical Concepts. Westview Press. Bernard d'Espagnat, 2006. On Physics and Philosophy. 
Princeton, New Jersey: Princeton University Press. Arthur Fine, 1986. The Shaky Game: Einstein, Realism and the Quantum Theory. Science and its Conceptual Foundations. Chicago, Illinois: University of Chicago Press. ISBN 0-226-24948-4. Ghirardi, Giancarlo, 2004. Sneaking a Look at God's Cards. Princeton, New Jersey: Princeton University Press. Gregg Jaeger (2009) Entanglement, Information, and the Interpretation of Quantum Mechanics. Springer. ISBN 978-3-540-92127-1. N. David Mermin (1990) Boojums All the Way Through. Cambridge University Press. ISBN 0-521-38880-5. Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton, New Jersey: Princeton University Press. ISBN 0-691-03669-1. Roland Omnès, 1999. Understanding Quantum Mechanics. Princeton, New Jersey: Princeton University Press. Roland Omnès, 1999. Quantum Philosophy: Understanding and Interpreting Contemporary Science. Princeton, New Jersey: Princeton University Press. Roger Penrose, 1989. The Emperor's New Mind. Oxford University Press. ISBN 0-19-851973-7. Especially chapter 6. Roger Penrose, 1994. Shadows of the Mind. Oxford University Press. ISBN 0-19-853978-9. Roger Penrose, 2004. The Road to Reality. New York: Alfred A. Knopf. Argues that quantum theory is incomplete. Lee Phillips, 2017. A brief history of quantum alternatives. Ars Technica. Styer, Daniel F.; Balkin, Miranda S.; Becker, Kathryn M.; Burns, Matthew R.; Dudley, Christopher E.; Forth, Scott T.; Gaumer, Jeremy S.; Kramer, Mark A.; et al. (March 2002). "Nine formulations of quantum mechanics" (PDF). American Journal of Physics. 70 (3): 288–297. Bibcode:2002AmJPh..70..288S. doi:10.1119/1.1445404. Baggott, Jim (25 April 2024). "'Shut up and calculate': how Einstein lost the battle to explain quantum reality". Nature. 629 (8010): 29–32. Bibcode:2024Natur.629...29B. doi:10.1038/d41586-024-01216-z. PMID 38664517. == External links == Stanford Encyclopedia of Philosophy: "Bohmian mechanics" by Sheldon Goldstein. "Collapse Theories" by Giancarlo Ghirardi. "Copenhagen Interpretation of Quantum Mechanics" by Jan Faye. "Everett's Relative State Formulation of Quantum Mechanics" by Jeffrey Barrett. "Many-Worlds Interpretation of Quantum Mechanics" by Lev Vaidman. "Modal Interpretation of Quantum Mechanics" by Michael Dickson and Dennis Dieks. "Philosophical Issues in Quantum Theory" by Wayne Myrvold. "Quantum-Bayesian and Pragmatist Views of Quantum Theory" by Richard Healey. "Quantum Entanglement and Information" by Jeffrey Bub. "Quantum mechanics" by Jenann Ismael. "Quantum Logic and Probability Theory" by Alexander Wilce. "Relational Quantum Mechanics" by Federico Laudisa and Carlo Rovelli. "The Role of Decoherence in Quantum Mechanics" by Guido Bacciagaluppi. Internet Encyclopedia of Philosophy: "Interpretations of Quantum Mechanics" by Peter J. Lewis. "Everettian Interpretations of Quantum Mechanics" by Christina Conroy.
Wikipedia/Quantum_metaphysics
The term physics envy is used to criticize modern writing and research of academics working in areas such as "softer sciences", philosophy, liberal arts, business administration education, humanities, and social sciences. The term suggests that writing and working practices in these disciplines have overused confusing jargon and complicated mathematics in order to seem as 'rigorous' as heavily mathematics-based natural science subjects like physics. == Background == The success of physics in "mathematicizing" itself, particularly since Isaac Newton's Principia Mathematica, is generally considered remarkable and often disproportionate compared to other areas of inquiry. "Physics envy" refers to the envy (perceived or real) of scholars in other disciplines for the mathematical precision of fundamental concepts obtained by physicists. It is an accusation raised against disciplines (typically against social sciences such as economics and psychology) when these academic areas try to express their fundamental concepts in terms of mathematics, which is seen as an unwarranted push for reductionism. Evolutionary biologist Ernst Mayr discusses the issue of the inability to reduce biology to its mathematical basis in his book What Makes Biology Unique?. Noam Chomsky discusses the ability and desirability of reduction to its mathematical basis in his article "Mysteries of Nature: How Deeply Hidden." Chomsky contributed extensively to the development of the field of theoretical linguistics, a formal science. == Examples == The social sciences have been accused of possessing an inferiority complex, which has been associated with physics envy. For instance, positivist scientists are said to accept a mistaken image of natural science so that it can be applied to the social sciences. The phenomenon also exists in business strategy research, as demonstrated by historian Alfred Chandler Jr.'s strategy–structure model. This framework holds that a firm must evaluate its environment in order to set up a structure that will implement its strategies. Chandler also maintained that there is a close connection "between mathematics, physics, and engineering graduates and the systemizing of the business strategy paradigm". In the field of artificial intelligence (AI), physics envy arises in cases of projects that lack interaction with each other, using only one idea due to the manner by which new hypotheses are tested and discarded in the pursuit of one true intelligence. == See also == Scientism Academese Newtonianism Philosophy of biology Philosophy of physics Philosophy of science Reductionism Unreasonable ineffectiveness of mathematics == Notes == == References == Chomsky, N. (2009). "The Mysteries of Nature: How Deeply Hidden?". Journal of Philosophy. 106 (4): 167–200. doi:10.5840/jphil2009106416. Collected in Chomsky, Noam (2010). "1. The Mysteries of Nature: How Deeply Hidden?". In Jean Bricmont; Julie Franck (eds.). Chomsky Notebook. Columbia University Press. ISBN 978-0-231-14475-9. Csikszentmihalyi, M.; Hektner, J.M.; Schmidt, J.A. (2006). Experience Sampling Method: Measuring the Quality of Everyday Life. SAGE Publications. ISBN 978-1-4129-4923-1. Mayr, E. (2004). What Makes Biology Unique? Considerations on the Autonomy of a Scientific Discipline. Cambridge University Press. ISBN 978-0-521-84114-6. Mirowski, P. (1999). "The Ironies of Physics Envy". More Heat Than Light. Cambridge University Press. ISBN 0-521-42689-8. Schabas, M. (1993). "What's So Wrong with Physics Envy?". In de Marchi, N. (ed.). Non-Natural Social Science. 
Duke University Press. p. 45. ISBN 0-8223-1410-X. Schram, S.; Caterino, B. (2006). Making Political Science Matter: Debating Knowledge, Research, and Method. New York University Press. ISBN 978-0-8147-4033-0. == External links == Overcoming ‘Physics Envy’, op-ed by two political scientists. New York Times, published March 30, 2012 Physics Envy: "quants" and financial models, essay and book review of Models Behaving Badly by Emanuel Derman. Review by Burton Malkiel, WSJ, December 14, 2011 Andrew Lo (MIT Sloan School) and Mark Mueller (MIT Sloan School and MIT Center for Theoretical Physics), "Warning: Physics Envy May be Hazardous to Your Wealth!" published in the Journal of Investment Management, Volume 8, Number 2, Second Quarter 2010
Wikipedia/Physics_envy
Some interpretations of quantum mechanics posit a central role for an observer of a quantum phenomenon. The quantum mechanical observer is tied to the issue of observer effect, where a measurement necessarily requires interacting with the physical object being measured, affecting its properties through the interaction. The term "observable" has gained a technical meaning, denoting a Hermitian operator that represents a measurement.: 55  == Foundation == The theoretical foundation of the concept of measurement in quantum mechanics is a contentious issue deeply connected to the many interpretations of quantum mechanics. A key focus point is that of wave function collapse, for which several popular interpretations assert that measurement causes a discontinuous change into an eigenstate of the operator associated with the quantity that was measured, a change which is not time-reversible. More explicitly, the superposition principle (ψ = Σ_n a_n ψ_n) of quantum physics dictates that for a wave function ψ, a measurement will leave the quantum system in a state corresponding to one of the m possible eigenvalues f_n, n = 1, 2, ..., m, of the operator F̂, where ψ lies in the space spanned by the eigenfunctions ψ_n, n = 1, 2, ..., m. Once one has measured the system, one knows its current state, and this prevents it from being in one of its other states — it has apparently decohered from them without prospects of future strong quantum interference. This means that the type of measurement one performs on the system affects the end-state of the system. An experimentally studied situation related to this is the quantum Zeno effect, in which a quantum state would decay if left alone, but does not decay because of its continuous observation. The dynamics of a quantum system under continuous observation are described by a quantum stochastic master equation known as the Belavkin equation. Further studies have shown that even observing the results after the photon is produced leads to collapse of the wave function and an apparent back-history, as shown by the delayed-choice quantum eraser. When discussing the wave function ψ which describes the state of a system in quantum mechanics, one should be cautious of a common misconception that assumes that the wave function ψ amounts to the same thing as the physical object it describes. This flawed concept then requires the existence of an external mechanism, such as a measuring instrument, that lies outside the principles governing the time evolution of the wave function ψ, in order to account for the so-called "collapse of the wave function" after a measurement has been performed. But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed. A consequence of Bell's theorem is that measurement on one of two entangled particles can appear to have a nonlocal effect on the other particle. Additional problems related to decoherence arise when the observer is modeled as a quantum system. 
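The projective-measurement picture sketched in this section (expanding ψ in the eigenbasis of a Hermitian operator, applying the Born rule, and replacing the state with the selected eigenfunction) can be written out numerically. This is a minimal sketch under simplifying assumptions; the operator F and the state ψ below are hypothetical choices with non-degenerate eigenvalues, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# An illustrative Hermitian operator F (hypothetical 3x3 example) and an arbitrary state psi.
F = np.array([[2, 1, 0],
              [1, 2, 0],
              [0, 0, 5]], dtype=complex)
psi = np.array([1, 1j, 1], dtype=complex)
psi = psi / np.linalg.norm(psi)

# Expand psi in the eigenbasis of F: psi = sum_n a_n psi_n.
eigenvalues, eigenvectors = np.linalg.eigh(F)
amplitudes = eigenvectors.conj().T @ psi

# Born rule: |a_n|^2 gives the probability of obtaining eigenvalue f_n;
# the post-measurement state is the corresponding eigenfunction ("collapse").
probabilities = np.abs(amplitudes) ** 2
n = rng.choice(len(eigenvalues), p=probabilities)
post_measurement_state = eigenvectors[:, n]
print("measured eigenvalue:", eigenvalues[n])
print("post-measurement state:", np.round(post_measurement_state, 3))
```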
== Description == The Copenhagen interpretation, which is the most widely accepted interpretation of quantum mechanics among physicists,: 248  posits that an "observer" or a "measurement" is merely a physical process. One of the founders of the Copenhagen interpretation, Werner Heisenberg, wrote: Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory. Niels Bohr, also a founder of the Copenhagen interpretation, wrote: all unambiguous information concerning atomic objects is derived from the permanent marks such as a spot on a photographic plate, caused by the impact of an electron left on the bodies which define the experimental conditions. Far from involving any special intricacy, the irreversible amplification effects on which the recording of the presence of atomic objects rests rather remind us of the essential irreversibility inherent in the very concept of observation. The description of atomic phenomena has in these respects a perfectly objective character, in the sense that no explicit reference is made to any individual observer and that therefore, with proper regard to relativistic exigencies, no ambiguity is involved in the communication of information. Likewise, Asher Peres stated that "observers" in quantum physics are similar to the ubiquitous "observers" who send and receive light signals in special relativity. Obviously, this terminology does not imply the actual presence of human beings. These fictitious physicists may as well be inanimate automata that can perform all the required tasks, if suitably programmed.: 12  Critics of the special role of the observer also point out that observers can themselves be observed, leading to paradoxes such as that of Wigner's friend; and that it is not clear how much consciousness is required. As John Bell inquired, "Was the wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some highly qualified measurer—with a PhD?" == Anthropocentric interpretation == The prominence of seemingly subjective or anthropocentric ideas like "observer" in the early development of the theory has been a continuing source of disquiet and philosophical dispute. A number of new-age religious or philosophical views give the observer a more special role, or place constraints on who or what can be an observer. As an example of such claims, Fritjof Capra declared, "The crucial feature of atomic physics is that the human observer is not only necessary to observe the properties of an object, but is necessary even to define these properties." There is no credible peer-reviewed research that backs such claims. == Confusion with uncertainty principle == The uncertainty principle has been frequently confused with the observer effect, evidently even by its originator, Werner Heisenberg. The uncertainty principle in its standard form describes how precisely it is possible to measure the position and momentum of a particle at the same time. 
If the precision in measuring one quantity is increased, the precision in measuring the other decreases. An alternative version of the uncertainty principle, more in the spirit of an observer effect, fully accounts for the disturbance the observer has on a system and the error incurred, although this is not how the term "uncertainty principle" is most commonly used in practice. == See also == Observer effect (physics) Quantum foundations == References ==
Wikipedia/Observer_(quantum_physics)