Charged particle beams in a particle accelerator or a storage ring undergo a variety of different processes. Typically the beam dynamics is broken down into single-particle dynamics and collective effects. Sources of collective effects include single or multiple inter-particle scattering and the interaction with the vacuum chamber and other surroundings, which is formalized in terms of impedance. The collective effects of charged particle beams in particle accelerators share some similarity with the dynamics of plasmas. In particular, a charged particle beam may be considered a non-neutral plasma, and the mathematical methods used to study its stability and instabilities are often shared with plasma physics. There is also commonality with fluid mechanics, since the density of charged particles is often high enough for the beam to be treated as a flowing continuum. Another important topic is the mitigation of collective effects by means of single-bunch or multi-bunch feedback systems. == Types of collective effects == Collective effects can include emittance growth, bunch length or energy spread growth, instabilities, or particle losses. There are also multi-bunch effects. == Formalisms for treating collective effects == The collective beam motion may be modeled in a variety of ways. One may use macroparticle models or a continuum model. The evolution equation in the latter case is typically called the Vlasov equation, and requires one to write down the Hamiltonian function including the external magnetic fields and the self-interaction. Stochastic effects may be added by generalizing to the Fokker–Planck equation. == Software for computation of collective effects == Depending on the effects considered and the modeling formalism used, different software is available for simulation. The collective effects must typically be added on top of the single-particle dynamics, which may be modeled using a tracking code; see the article on accelerator physics codes. == References ==
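As an orientation for the continuum formalism mentioned above, the equations can be sketched in a minimal form; the notation here (phase-space density ψ, coordinate z, conjugate momentum δ, path length s, damping rate A, diffusion coefficient D) is a generic placeholder choice rather than notation taken from a specific reference. The Vlasov equation reads {\displaystyle {\frac {\partial \psi }{\partial s}}+{\frac {\partial H}{\partial \delta }}{\frac {\partial \psi }{\partial z}}-{\frac {\partial H}{\partial z}}{\frac {\partial \psi }{\partial \delta }}=0}, with H the Hamiltonian containing the external fields and the beam's self-interaction, and the Fokker–Planck generalization adds damping and diffusion terms, such as those produced by synchrotron radiation: {\displaystyle {\frac {\partial \psi }{\partial s}}+\{\psi ,H\}={\frac {\partial }{\partial \delta }}(A\,\delta \,\psi )+{\frac {1}{2}}D{\frac {\partial ^{2}\psi }{\partial \delta ^{2}}}}.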
Wikipedia/Collective_effects_(accelerator_physics)
A charged particle accelerator is a complex machine that takes elementary charged particles and accelerates them to very high energies. Accelerator physics is a field of physics encompassing all the aspects required to design and operate the equipment and to understand the resulting dynamics of the charged particles. There are software packages associated with each domain. The 1990 edition of the Los Alamos Accelerator Code Group's compendium provides summaries of more than 200 codes. Certain codes are still in use today, although many are obsolete. Another index of existing and historical accelerator simulation codes is located at the CERN CARE/HHH website. == Single particle dynamics codes == For many applications it is sufficient to track a single particle through the relevant electric and magnetic fields. Old codes no longer maintained by their original authors or home institutions include: BETA, AGS, ALIGN, COMFORT, DESIGN, DIMAD, HARMON, LEGO, LIAR, MAGIC, MARYLIE, PATRICIA, PETROS, RACETRACK, SYNCH, TRANSPORT, TURTLE, and UAL. Some legacy codes are maintained by commercial organizations for academic, industrial and medical accelerator facilities that continue to use those codes. TRACE 3-D and TURTLE are among the historic codes that are commercially maintained. Major maintained codes include: === Columns === Spin Tracking Tracking of a particle's spin. Taylor Maps Construction of Taylor series maps to high order that can be used for simulating particle motion and also can be used for such things as extracting single particle resonance strengths. Weak-Strong Beam-Beam Interaction Can simulate the beam-beam interaction with the simplification that one beam is essentially fixed in size. See below for a list of strong-strong interaction codes. Electromagnetic Field Tracking Can track (ray trace) a particle through arbitrary electromagnetic fields. Higher Energy Collective effects The interactions between the particles in the beam can have important effects on the behavior, control and dynamics. Collective effects take different forms from Intrabeam Scattering (IBS) which is a direct particle-particle interaction to wakefields which are mediated by the vacuum chamber wall of the machine the particles are traveling in. In general, the effect of direct particle-particle interactions is less with higher energy particle beams. At very low energies, space charge has a large effect on a particle beam and thus becomes hard to calculate. See below for a list of programs that can handle low energy space charge forces. Synchrotron radiation effects Can simulate the effect of synchrotron radiation emission on the particles being tracked. Radiation Tracking Ability to track the synchrotron radiation (mainly X-rays) produced by the acceleration of charged particles. This is not the same as simulating the effect of synchrotron radiation emission on the particles being tracked. Wakefields The electro-magnetic interaction between the beam and the vacuum chamber wall enclosing the beam are known as wakefields. Wakefields produce forces that affect the trajectory of the particles of the beam and can potentially destabilize the trajectories. Extensible Open source and object oriented coding to make it relatively easy to extend the capabilities. == Space charge codes == The self interaction (e.g. space charge) of the charged particle beam can cause growth of the beam, such as with bunch lengthening, or intrabeam scattering. Additionally, space charge effects may cause instabilities and associated beam loss. 
Typically, at relatively low energies (roughly for energies where the relativistic gamma factor is less than 10 or so), the Poisson equation is solved at intervals during the tracking using particle-in-cell algorithms. Space charge effects lessen at higher energies, where they may be modeled using simpler algorithms that are computationally much faster than those used at lower energies. Codes that handle low energy space charge effects include: ASTRA, Bmad, CST Studio Suite, GPT, IMPACT, ImpactX, mbtrack, ORBIT/PyORBIT, OPAL, PyHEADTAIL, Synergia, TraceWin, Tranft, VSim, Warp, and Xsuite. At higher energies, space charge effects include Touschek scattering and coherent synchrotron radiation (CSR). Codes that handle higher energy space charge include: Bmad, ELEGANT, MaryLie, and SAD. == "Strong-strong" beam-beam effects codes == When two beams collide, the electromagnetic field of one beam has strong effects on the other one; these are called beam-beam effects. So-called "weak-strong" simulations model one beam (called the "strong" beam since it affects the other beam) as a fixed distribution (typically a Gaussian distribution) which interacts with the particles of the other, "weak" beam. This greatly simplifies the simulation. A full "strong-strong" simulation is more complicated and takes more simulation time. Strong-strong codes include GUINEA-PIG, BeamBeam3D, and Xsuite. == Impedance computation codes == An important class of collective effects may be summarized in terms of the beam's response to an "impedance". An important task is thus the computation of this impedance for the machine. Codes for this computation include ABCI, ACE3P, CST Studio Suite, GdfidL, TBCI, and VSim. == Magnet and other hardware-modeling codes == To control the charged particle beam, appropriate electric and magnetic fields must be created. There are software packages to help in the design and understanding of the magnets, RF cavities, and other elements that create these fields. Codes include ACE3P, COMSOL Multiphysics, CST Studio Suite, OPERA, and VSim. == Lattice description and data interchange issues == Given the variety of modeling tasks, no single common data format has developed. For describing the layout of an accelerator and the corresponding elements, one uses a so-called "lattice file". There have been numerous attempts at unifying the lattice file formats used in different codes. One unification attempt is the Accelerator Markup Language and the Universal Accelerator Parser. Another attempt at a unified approach to accelerator codes is the UAL, or Universal Accelerator Library. As of 2023 neither of these formats is maintained. The file formats used in MAD may be the most common, with translation routines available to convert to the input form needed for a different code. Associated with the ELEGANT code is a data format called SDDS, with an associated suite of tools. If one uses a Matlab-based code, such as Accelerator Toolbox, one has available all the tools within Matlab. For the interchange of particle positions and electromagnetic fields, the openPMD standard defines a format which can then be implemented with a file format like HDF5. == Codes in applications of particle accelerators == There are many applications of particle accelerators. For example, two important applications are elementary particle physics and synchrotron radiation production.
When performing a modeling task for any accelerator operation, the results of charged particle beam dynamics simulations must feed into the associated application. Thus, for a full simulation, one must include the codes in associated applications. For particle physics, the simulation may be continued in a detector with a code such as Geant4. For a synchrotron radiation facility, for example, the electron beam produces an x-ray beam that then travels down a beamline before reaching the experiment. Thus, the electron beam modeling software must interface with the x-ray optics modelling software such as SRW, Shadow, McXTrace, or Spectra. Bmad can model both X-rays and charged particle beams. The x-rays are used in an experiment which may be modeled and analyzed with various software, such as the DAWN science platform. OCELOT also includes both synchrotron radiation calculation and x-ray propagation models. Industrial and medical accelerators represent another area of important applications. A 2013 survey estimated that there were about 27,000 industrial accelerators and another 14,000 medical accelerators world wide, and those numbers have continued to increase since that time. Codes used at those facilities vary considerably and often include a mix of traditional codes and custom codes developed for specific applications. The Advanced Orbit Code (AOC) developed at Ion Beam Applications is an example. == See also == List of codes from UCLA Particle Beam Physics Laboratory Archived 2018-07-17 at the Wayback Machine Comparison of Accelerator Codes == References ==
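As a concrete illustration of the particle-in-cell approach described in the space-charge section above, the following is a minimal one-dimensional sketch: deposit macroparticle charge on a grid, solve the Poisson equation on that grid, interpolate the resulting field back to the particles, and apply a momentum kick. It is a toy example written for this article, not an excerpt from any of the codes listed, and all parameter values are illustrative.

import numpy as np

def space_charge_kick(x, px, macro_charge, dt, n_cells=64):
    """Apply a toy 1D particle-in-cell space-charge kick to macroparticles.

    x, px : positions [m] and momenta [kg m/s]; macro_charge : charge per
    macroparticle [C]; dt : duration over which the kick is accumulated [s].
    """
    eps0 = 8.854e-12
    edges = np.linspace(x.min(), x.max(), n_cells + 1)
    dx = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    # 1) Deposit charge on the grid (nearest-grid-point weighting).
    counts, _ = np.histogram(x, bins=edges)
    rho = counts * macro_charge / dx
    # 2) Solve d^2(phi)/dx^2 = -rho/eps0 with a finite-difference Laplacian
    #    and grounded (phi = 0) boundaries.
    lap = (np.diag(-2.0 * np.ones(n_cells))
           + np.diag(np.ones(n_cells - 1), 1)
           + np.diag(np.ones(n_cells - 1), -1)) / dx**2
    phi = np.linalg.solve(lap, -rho / eps0)
    # 3) Electric field E = -d(phi)/dx, interpolated to the particle positions.
    e_field = -np.gradient(phi, dx)
    e_at_particles = np.interp(x, centers, e_field)
    # 4) Momentum kick accumulated over dt.
    return px + macro_charge * e_at_particles * dt

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1e-3, 50_000)        # a Gaussian bunch with sigma = 1 mm
px = space_charge_kick(x, np.zeros_like(x), macro_charge=1.6e-19 * 1e4, dt=1e-9)
print("rms momentum kick:", px.std())

In a real tracking code such kicks would alternate with single-particle transport through the lattice, and the field solve would be done in two or three dimensions with the appropriate boundary conditions.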
Wikipedia/Accelerator_Physics_Codes
In polymer physics, the finitely extensible nonlinear elastic (FENE) model, also called the FENE dumbbell model, represents the dynamics of a long-chained polymer. It simplifies the chain of monomers by connecting a sequence of beads with nonlinear springs. Its direct extension, the FENE-P model, is more commonly used in computational fluid dynamics to simulate turbulent flow. The P stands for the last name of physicist Anton Peterlin, who developed an important approximation of the model in 1966. The FENE-P model was introduced by Robert Byron Bird et al. in the 1980s. In 1991 the FENE-MP model (MP for modified Peterlin) was introduced, and in 1988 the FENE-CR model was introduced by M.D. Chilcott and J.M. Rallison. == Formulation == The spring force in the FENE model is given by Warner's spring force, {\displaystyle {\textbf {F}}_{i}=k{\frac {{\textbf {R}}_{i}}{1-(R_{i}/L_{\rm {max}})^{2}}}}, where {\displaystyle R_{i}=|{\textbf {R}}_{i}|}, k is the spring constant and Lmax the upper limit for the length extension. The total stretching force on the i-th bead can be written as {\displaystyle {\textbf {F}}_{i}-{\textbf {F}}_{i-1}}. Warner's spring force approximates the inverse Langevin function found in other models. == FENE-P model == The FENE-P model takes the FENE model and assumes the Peterlin statistical average for the restoring force, {\displaystyle {\textbf {F}}_{i}=k{\frac {{\textbf {R}}_{i}}{1-\langle R_{i}^{2}/L_{\rm {max}}^{2}\rangle }}}, where {\displaystyle \langle \cdots \rangle } indicates the statistical average. === Advantages and disadvantages === FENE-P is one of the few polymer models that can be used in computational fluid dynamics simulations, since it removes the need for statistical averaging at each grid point at every instant in time. It has been demonstrated to capture some of the most important polymeric flow behaviors, such as polymer drag reduction in turbulence and shear thinning. It is the most commonly used polymer model in turbulence simulations, since direct numerical simulation of turbulence is already extremely expensive. Due to its simplifications, FENE-P is not able to show the hysteresis effects that polymers have, while the FENE model can. == References == Dynamics of dissolved polymer chains in isotropic turbulence == External links == QPolymer: an open source (for Mac OS X) FENE model Brownian dynamics simulation software Stretching of Polymers in Isotropic Turbulence: A Statistical Closure
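The formulation above lends itself to a short Brownian dynamics illustration. The following sketch integrates a single FENE dumbbell (one connector vector subject to Warner's restoring force plus thermal noise) with an Euler-Maruyama scheme; it is a toy example with arbitrary parameter values and an assumed effective friction, not the QPolymer software mentioned in the external links.

import numpy as np

# Toy Brownian dynamics of one FENE dumbbell: the connector vector r feels
# Warner's restoring force -k r / (1 - (|r|/L_max)^2) plus thermal noise.
# Parameter values and the effective friction are illustrative assumptions.

K_SPRING = 1.0     # spring constant k
L_MAX = 5.0        # maximum extension L_max
ZETA = 1.0         # effective friction coefficient
KB_T = 1.0         # thermal energy

def fene_force(r):
    """Warner restoring force on the connector vector r (diverges as |r| -> L_MAX)."""
    r2 = np.dot(r, r)
    return -K_SPRING * r / (1.0 - r2 / L_MAX**2)

def simulate(n_steps=20_000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    r = np.array([0.1, 0.0, 0.0])
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        drift = fene_force(r) / ZETA
        noise = np.sqrt(2.0 * KB_T * dt / ZETA) * rng.standard_normal(3)
        r = r + drift * dt + noise          # Euler-Maruyama step
        # Guard against a rare noise step carrying r past the singularity.
        ext = np.linalg.norm(r)
        if ext >= L_MAX:
            r *= 0.99 * L_MAX / ext
        traj[i] = r
    return traj

traj = simulate()
print("mean squared extension:", np.mean(np.sum(traj**2, axis=1)))

Because the spring force diverges as the extension approaches L_MAX, the sampled extensions stay bounded, which is the defining difference between the FENE dumbbell and a Hookean (infinitely extensible) dumbbell.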
Wikipedia/FENE_model
A macromolecule is a very large molecule important to biological processes, such as a protein or nucleic acid. It is composed of thousands of covalently bonded atoms. Many macromolecules are polymers of smaller molecules called monomers. The most common macromolecules in biochemistry are biopolymers (nucleic acids, proteins, and carbohydrates) and large non-polymeric molecules such as lipids, nanogels and macrocycles. Synthetic fibers and experimental materials such as carbon nanotubes are also examples of macromolecules. == Definition == The term macromolecule (macro- + molecule) was coined by Nobel laureate Hermann Staudinger in the 1920s, although his first relevant publication on this field only mentions high molecular compounds (in excess of 1,000 atoms). At that time the term polymer, as introduced by Berzelius in 1832, had a different meaning from that of today: it simply was another form of isomerism for example with benzene and acetylene and had little to do with size. Usage of the term to describe large molecules varies among the disciplines. For example, while biology refers to macromolecules as the four large molecules comprising living things, in chemistry, the term may refer to aggregates of two or more molecules held together by intermolecular forces rather than covalent bonds but which do not readily dissociate. According to the standard IUPAC definition, the term macromolecule as used in polymer science refers only to a single molecule. For example, a single polymeric molecule is appropriately described as a "macromolecule" or "polymer molecule" rather than a "polymer," which suggests a substance composed of macromolecules. Because of their size, macromolecules are not conveniently described in terms of stoichiometry alone. The structure of simple macromolecules, such as homopolymers, may be described in terms of the individual monomer subunit and total molecular mass. Complicated biomacromolecules, on the other hand, require multi-faceted structural description such as the hierarchy of structures used to describe proteins. In British English, the word "macromolecule" tends to be called "high polymer". == Properties == Macromolecules often have unusual physical properties that do not occur for smaller molecules. Another common macromolecular property that does not characterize smaller molecules is their relative insolubility in water and similar solvents, instead forming colloids. Many require salts or particular ions to dissolve in water. Similarly, many proteins will denature if the solute concentration of their solution is too high or too low. High concentrations of macromolecules in a solution can alter the rates and equilibrium constants of the reactions of other macromolecules, through an effect known as macromolecular crowding. This comes from macromolecules excluding other molecules from a large part of the volume of the solution, thereby increasing the effective concentrations of these molecules. == Major macromolecules == Proteins are polymers of amino acids joined by peptide bonds. DNA and RNA are polymers of nucleotides joined by phosphodiester bonds. These nucleotides consist of a phosphate group, a sugar (ribose in the case of RNA, deoxyribose in the case of DNA), and a nucleotide base (either adenine, guanine, thymine, uracil, or cytosine, where thymine occurs only in DNA and uracil only in RNA). Polysaccharides (such as starch, cellulose, and chitin) are polymers of monosaccharides joined by glycosidic bonds. 
Some lipids (organic nonpolar molecules) are macromolecules, with a variety of different structures. == Linear biopolymers == All living organisms are dependent on three essential biopolymers for their biological functions: DNA, RNA and proteins. Each of these molecules is required for life since each plays a distinct, indispensable role in the cell. The simple summary is that DNA makes RNA, and then RNA makes proteins. DNA, RNA, and proteins all consist of a repeating structure of related building blocks (nucleotides in the case of DNA and RNA, amino acids in the case of proteins). In general, they are all unbranched polymers, and so can be represented in the form of a string. Indeed, they can be viewed as a string of beads, with each bead representing a single nucleotide or amino acid monomer linked together through covalent chemical bonds into a very long chain. In most cases, the monomers within the chain have a strong propensity to interact with other amino acids or nucleotides. In DNA and RNA, this can take the form of Watson–Crick base pairs (G–C and A–T or A–U), although many more complicated interactions can and do occur. === Structural features === Because of the double-stranded nature of DNA, essentially all of the nucleotides take the form of Watson–Crick base pairs between nucleotides on the two complementary strands of the double helix. In contrast, both RNA and proteins are normally single-stranded. Therefore, they are not constrained by the regular geometry of the DNA double helix, and so fold into complex three-dimensional shapes dependent on their sequence. These different shapes are responsible for many of the common properties of RNA and proteins, including the formation of specific binding pockets, and the ability to catalyse biochemical reactions. ==== DNA is optimised for encoding information ==== DNA is an information storage macromolecule that encodes the complete set of instructions (the genome) that are required to assemble, maintain, and reproduce every living organism. DNA and RNA are both capable of encoding genetic information, because there are biochemical mechanisms which read the information coded within a DNA or RNA sequence and use it to generate a specified protein. On the other hand, the sequence information of a protein molecule is not used by cells to functionally encode genetic information.: 5  DNA has three primary attributes that allow it to be far better than RNA at encoding genetic information. First, it is normally double-stranded, so that there are a minimum of two copies of the information encoding each gene in every cell. Second, DNA has a much greater stability against breakdown than does RNA, an attribute primarily associated with the absence of the 2'-hydroxyl group within every nucleotide of DNA. Third, highly sophisticated DNA surveillance and repair systems are present which monitor damage to the DNA and repair the sequence when necessary. Analogous systems have not evolved for repairing damaged RNA molecules. Consequently, chromosomes can contain many billions of atoms, arranged in a specific chemical structure. ==== Proteins are optimised for catalysis ==== Proteins are functional macromolecules responsible for catalysing the biochemical reactions that sustain life.: 3  Proteins carry out all functions of an organism, for example photosynthesis, neural function, vision, and movement. 
The single-stranded nature of protein molecules, together with their composition of 20 or more different amino acid building blocks, allows them to fold in to a vast number of different three-dimensional shapes, while providing binding pockets through which they can specifically interact with all manner of molecules. In addition, the chemical diversity of the different amino acids, together with different chemical environments afforded by local 3D structure, enables many proteins to act as enzymes, catalyzing a wide range of specific biochemical transformations within cells. In addition, proteins have evolved the ability to bind a wide range of cofactors and coenzymes, smaller molecules that can endow the protein with specific activities beyond those associated with the polypeptide chain alone. ==== RNA is multifunctional ==== RNA is multifunctional, its primary function is to encode proteins, according to the instructions within a cell's DNA.: 5  They control and regulate many aspects of protein synthesis in eukaryotes. RNA encodes genetic information that can be translated into the amino acid sequence of proteins, as evidenced by the messenger RNA molecules present within every cell, and the RNA genomes of a large number of viruses. The single-stranded nature of RNA, together with tendency for rapid breakdown and a lack of repair systems means that RNA is not so well suited for the long-term storage of genetic information as is DNA. In addition, RNA is a single-stranded polymer that can, like proteins, fold into a very large number of three-dimensional structures. Some of these structures provide binding sites for other molecules and chemically active centers that can catalyze specific chemical reactions on those bound molecules. The limited number of different building blocks of RNA (4 nucleotides vs >20 amino acids in proteins), together with their lack of chemical diversity, results in catalytic RNA (ribozymes) being generally less-effective catalysts than proteins for most biological reactions. == Branched biopolymers == Carbohydrate macromolecules (polysaccharides) are formed from polymers of monosaccharides.: 11  Because monosaccharides have multiple functional groups, polysaccharides can form linear polymers (e.g. cellulose) or complex branched structures (e.g. glycogen). Polysaccharides perform numerous roles in living organisms, acting as energy stores (e.g. starch) and as structural components (e.g. chitin in arthropods and fungi). Many carbohydrates contain modified monosaccharide units that have had functional groups replaced or removed. Polyphenols consist of a branched structure of multiple phenolic subunits. They can perform structural roles (e.g. lignin) as well as roles as secondary metabolites involved in signalling, pigmentation and defense. == Synthetic macromolecules == Some examples of macromolecules are synthetic polymers (plastics, synthetic fibers, and synthetic rubber), graphene, and carbon nanotubes. Polymers may be prepared from inorganic matter as well as for instance in inorganic polymers and geopolymers. The incorporation of inorganic elements enables the tunability of properties and/or responsive behavior as for instance in smart inorganic polymers. 
== See also == List of biophysically important macromolecular crystal structures Small molecule Soft matter == References == == External links == Synopsis of Chapter 5, Campbell & Reece, 2002 Lecture notes on the structure and function of macromolecules Several (free) introductory macromolecule related internet-based courses Archived 2011-07-18 at the Wayback Machine Giant Molecules! by Ulysses Magee, ISSA Review Winter 2002–2003, ISSN 1540-9864. Cached HTML version of a missing PDF file. Retrieved March 10, 2010. The article is based on the book, Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry by Yasu Furukawa.
Wikipedia/Macromolecules
In chemistry, physics, mathematics and related fields, file dynamics (sometimes called single-file dynamics) refers to the motion of many particles in a narrow channel: the diffusion of N (N → ∞) identical Brownian hard spheres in a quasi-one-dimensional channel of length L (L → ∞), such that the spheres cannot pass one another and the average particle density is approximately fixed. The most famous statistical property of this process is that the mean squared displacement (MSD) of a particle in the file follows {\displaystyle \mathrm {MSD} \approx t^{\frac {1}{2}}}, and its probability density function (PDF) is Gaussian in position with a variance equal to the MSD. Results in files that generalize the basic file include: In files with a density that is not fixed but decays as a power law with exponent a with the distance from the origin, the particle at the origin has an MSD that scales like {\displaystyle MSD\approx t^{\frac {1+a}{2}}}, with a Gaussian PDF. When, in addition, the particles' diffusion coefficients are distributed like a power law with exponent γ (around the origin), the MSD follows {\displaystyle MSD\approx t^{\frac {1-\gamma }{2/(1+a)-\gamma }}}, with a Gaussian PDF. In anomalous files that are renewal, namely, when all particles attempt a jump together but with jumping times taken from a distribution that decays as a power law with exponent −1 − α, the MSD scales like the MSD of the corresponding normal file raised to the power α. In anomalous files of independent particles, the MSD is very slow and scales like {\displaystyle MSD\approx log^{2}(t)}. Moreover, the particles form clusters in such files, defining a dynamical phase transition. This depends on the anomaly power α: the percentage of particles in clusters, ξ, follows {\displaystyle \xi \approx {\sqrt {1-\alpha ^{3}}}}. Other generalizations include: when the particles can bypass each other with a constant probability upon encounter, an enhanced diffusion is seen; when the particles interact with the channel, a slower diffusion is observed. Files embedded in two dimensions show characteristics similar to those of files in one dimension. Generalizations of the basic file are important since these models represent reality much more accurately than the basic file. Indeed, file dynamics is used in modeling numerous microscopic processes: the diffusion within biological and synthetic pores and porous material, the diffusion along one-dimensional objects, such as biological roads, the dynamics of a monomer in a polymer, etc. == Mathematical formulation == === Simple files === In simple Brownian files, the joint probability density function (PDF) for all the particles in the file, {\displaystyle P(\mathbf {x} ,t\mid \mathbf {x_{0}} )}, obeys a normal diffusion equation: {\displaystyle {\frac {\partial }{\partial t}}P(\mathbf {x} ,t\mid \mathbf {x_{0}} )=D\sum _{j=-M}^{M}{\frac {\partial ^{2}}{\partial x_{j}^{2}}}P(\mathbf {x} ,t\mid \mathbf {x_{0}} ).\qquad (1)} In {\displaystyle P(\mathbf {x} ,t\mid \mathbf {x_{0}} )}, {\displaystyle \mathbf {x} =\{x_{-M},x_{-M+1},\ldots ,x_{M}\}} is the set of particles' positions at time {\displaystyle t} and {\displaystyle \mathbf {x_{0}} } is the set of the particles' initial positions at the initial time {\displaystyle t_{0}} (set to zero).
Equation (1) is solved with the appropriate boundary conditions, which reflect the hard-sphere nature of the file: and with the appropriate initial condition: In a simple file, the initial density is fixed, namely, x 0 , j = j Δ {\displaystyle x_{0,j}=j\Delta } , where Δ {\displaystyle \Delta } is a parameter that represents a microscopic length. The PDFs' coordinates must obey the order: x − M ≤ x − M + 1 ≤ ⋯ ≤ x M {\displaystyle x_{-M}\leq x_{-M+1}\leq \cdots \leq x_{M}} . === Heterogeneous files === In such files, the equation of motion follows, with the boundary conditions: and with the initial condition, Eq. (3), where the particles’ initial positions obey: The file diffusion coefficients are taken independently from the PDF, where Λ has a finite value that represents the fastest diffusion coefficient in the file. === Renewal, anomalous, heterogeneous files === In renewal-anomalous files, a random period is taken independently from a waiting time probability density function (WT-PDF; see Continuous-time Markov process for more information) of the form: ψ α ( t ) ∼ k ( k t ) − 1 − α , 0 < α ≤ 1 {\displaystyle \psi _{\alpha }(t)\sim k(kt)^{-1-\alpha },0<\alpha \leq 1} , where k is a parameter. Then, all the particles in the file stand still for this random period, where afterwards, all the particles attempt jumping in accordance with the rules of the file. This procedure is carried on over and over again. The equation of motion for the particles’ PDF in a renewal-anomalous file is obtained when convoluting the equation of motion for a Brownian file with a kernel k α ( t ) {\displaystyle k_{\alpha }(t)} : Here, the kernel k α ( t ) {\displaystyle k_{\alpha }(t)} and the WT-PDF ψ α ( t ) {\displaystyle \psi _{\alpha }(t)} are related in Laplace space, k ¯ α ( s ) = s ψ ¯ α ( s ) 1 − ψ ¯ α ( s ) {\displaystyle {\bar {k}}_{\alpha }(s)={\frac {s{\bar {\psi }}_{\alpha }(s)}{1-{\bar {\psi }}_{\alpha }(s)}}} . (The Laplace transform of a function f ( t ) {\displaystyle f(t)} reads, f ¯ ( s ) = ∫ 0 ∞ f ( t ) e − s t d t {\displaystyle {\bar {f}}(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt} .) The reflecting boundary conditions accompanied Eq. (8) are obtained when convoluting the boundary conditions of a Brownian file with the kernel k α ( t ) {\displaystyle k_{\alpha }(t)} , where here and in a Brownian file the initial conditions are identical. === Anomalous files with independent particles === When each particle in the anomalous file is assigned with its own jumping time drawn form ψ α ( t ) {\displaystyle \psi _{\alpha }(t)} ( ψ α ( t ) {\displaystyle \psi _{\alpha }(t)} is the same for all the particles), the anomalous file is not a renewal file. The basic dynamical cycle in such a file consists of the following steps: a particle with the fastest jumping time in the file, say, t i {\displaystyle t_{i}} for particle i, attempts a jump. Then, the waiting times for all the other particles are adjusted: we subtract t i {\displaystyle t_{i}} from each of them. Finally, a new waiting time is drawn for particle i. The most crucial difference among renewal anomalous files and anomalous files that are not renewal is that when each particle has its own clock, the particles are in fact connected also in the time domain, and the outcome is further slowness in the system (proved in the main text). 
The equation of motion for the PDF in anomalous files of independent particles reads: Note that the time argument in the PDF P ( x , t ∣ x 0 ) {\displaystyle P(\mathbf {x} ,\mathbf {t} \mid \mathbf {x_{0}} )} is a vector of times: t = { t i } i = − M M {\displaystyle \mathbf {t} =\{t_{i}\}_{i=-M}^{M}} , and t ′ ( i ) = { t c } c = − M , c ≠ i M {\displaystyle \mathbf {t} ^{'(i)}=\{t_{c}\}_{c=-M,c\neq i}^{M}} . Adding all the coordinates and performing the integration in the order of faster times first (the order is determined randomly from a uniform distribution in the space of configurations) gives the full equation of motion in anomalous files of independent particles (averaging of the equation over all configurations is therefore further required). Indeed, even Eq. (9) is very complicated, and averaging further complicates things. == Mathematical analysis == === Simple files === The solution of Eqs. (1)-(2) is a complete set of permutations of all initial coordinates appearing in the Gaussians, Here, the index p {\displaystyle p} goes on all the permutations of the initial coordinates, and contains N ! {\displaystyle N!} permutations. From Eq. (10), the PDF of a tagged particle in the file, P ( r , t ∣ r 0 ) {\displaystyle P(r,t\mid r_{0})} , is calculated In Eq. (11), R d = r d Δ {\displaystyle R_{d}=r_{d}\Delta } , r d = r − r 0 {\displaystyle r_{d}=r-r_{0}} ( r 0 {\displaystyle r_{0}} is the initial condition of the tagged particle), and τ = Δ − 2 D t {\displaystyle \tau =\Delta ^{-2}Dt} . The MSD for the tagged particle is obtained directly from Eq. (11): === Heterogeneous files === The solution of Eqs. (4)-(7) is approximated with the expression, Starting from Eq. (13), the PDF of the tagged particle in the heterogeneous file follows, The MSD of a tagged particle in a heterogeneous file is taken from Eq. (14): === Renewal anomalous heterogeneous files === The results of renewal-anomalous files are simply derived from the results of Brownian files. Firstly, the PDF in Eq. (8) is written in terms of the PDF that solves the un-convoluted equation, that is, the Brownian file equation; this relation is made in Laplace space: (The subscript nrml stands for normal dynamics.) From Eq. (16), it is straightforward relating the MSD of Brownian heterogeneous files and renewal-anomalous heterogeneous files, From Eq. (18), one finds that the MSD of a file with normal dynamics in the power of α {\displaystyle \alpha } is the MSD of the corresponding renewal-anomalous file, === Anomalous files with independent particles === The equation of motion for anomalous files with independent particles, (9), is very complicated. Solutions for such files are reached while deriving scaling laws and with numerical simulations. ==== Scaling laws for anomalous files of independent particles ==== Firstly, we write down the scaling law for the mean absolute displacement (MAD) in a renewal file with a constant density: Here, n {\displaystyle n} is the number of particles in the covered-length ⟨ ∣ r ∣ ⟩ {\displaystyle \langle \mid r\mid \rangle } , and ⟨ ∣ r ∣ ⟩ free {\displaystyle \langle \mid r\mid \rangle _{\text{free}}} is the MAD of a free anomalous particle, ⟨ ∣ r ∣ ⟩ free ∼ t α / 2 ) {\displaystyle \langle \mid r\mid \rangle _{\text{free}}\sim t^{\alpha /2})} . In Eq. 
(20), n {\displaystyle n} enters the calculations since all the particles within the distance ⟨ ∣ r ∣ ⟩ {\displaystyle \langle \mid r\mid \rangle } from the tagged one must move in the same direction in order that the tagged particle will reach a distance ⟨ ∣ r ∣ ⟩ {\displaystyle \langle \mid r\mid \rangle } from its initial position. Based on Eq. (20), we write a generalized scaling law for anomalous files of independent particles: The first term on the right hand side of Eq. (21) appears also in renewal files; yet, the term f(n) is unique. f(n) is the probability that accounts for the fact that for moving n anomalous independent particles in the same direction, when these particles indeed try jumping in the same direction (expressed with the term, ( ⟨ ∣ r ∣ ⟩ free / n ) {\displaystyle \langle \mid r\mid \rangle _{\text{free}}/n)} ), the particles in the periphery must move first so that the particles in the middle of the file will have the free space for moving, demanding faster jumping times for those in the periphery. f(n) appears since there is not a typical timescale for a jump in anomalous files, and the particles are independent, and so a particular particle can stand still for a very long time, substantially limiting the options of progress for the particles around him, during this time. Clearly, 0 < f ( n ) < 1 {\displaystyle 0<f(n)<1} , where f(n) = 1 for renewal files since the particles jump together, yet also in files of independent particles with α > 1 {\displaystyle \alpha >1} , since in such files there is a typical timescale for a jump, considered the time for a synchronized jump. We calculate f(n) from the number of configurations in which the order of the particles’ jumping times enables motion; that is, an order where the faster particles are always located towards the periphery. For n particles, there are n! different configurations, where one configuration is the optimal one; so, ( 1 / n ! ) ≤ f ( n ) {\displaystyle (1/n!)\leq f(n)} . Yet, although not optimal, propagation is also possible in many other configurations; when m is the number of particles that move, then, where ( n m ) ( n − m ) ! {\displaystyle {\dbinom {n}{m}}(n-m)!} counts the number of configurations in which those m particles around the tagged one have the optimal jumping order. Now, even when m~n/2, f ( n ) ∼ e − n / 2 {\displaystyle f(n)\sim e^{-n/2}} . Using in Eq. (21), f ( n ) ∼ e − n / n 0 {\displaystyle f(n)\sim e^{-n/n_{0}}} ( n 0 {\displaystyle n_{0}} a small number larger than 1), we see, (In Eq. (23), we use, M S D ∼∣ M A D ∣ 2 {\displaystyle MSD\sim \mid MAD\mid ^{2}} .) Equation (23) shows that asymptotically the particles are extremely slow in anomalous files of independent particles. ==== Numerical studies of anomalous files of independent particles ==== With numerical studies, one sees that anomalous files of independent particles form clusters. This phenomenon defines a dynamical phase transition. At steady state, the percentage of particles in cluster, ξ ( α ) {\displaystyle \xi (\alpha )} , follows, In Figure 1 we show trajectories from 9 particles in a file of 501 particles. (It is recommended opening the file in a new window). The upper panels show trajectories for α = 0.9 {\displaystyle \alpha =0.9} and the lower panels show trajectories for α = 0.1 {\displaystyle \alpha =0.1} . For each value of α {\displaystyle \alpha } shown are trajectories in the early stages of the simulations (left) and in all stages of the simulation (right). 
The panels exhibit the phenomenon of the clustering, where the trajectories attract each other and then move pretty much together. == See also == Brownian motion Langevin dynamics System dynamics == References ==
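The t^(1/2) scaling of the tagged-particle MSD in a simple file can be reproduced with a short simulation. Because the particles are identical, one may evolve independent Brownian particles and simply sort the positions at every step (a crossing of two identical particles is statistically equivalent to a hard-core collision with relabeling), then follow the central particle. The sketch below is an illustrative toy with arbitrary parameters and is not the code used to produce the trajectories described above.

import numpy as np

# Toy demonstration of MSD ~ t^(1/2) for a tagged particle in a simple file.
rng = np.random.default_rng(1)
n_particles = 501          # odd, so a central particle exists
n_steps = 1000
n_runs = 100
dt = 1.0
diffusion = 0.5
spacing = 1.0

mid = n_particles // 2
x0 = spacing * (np.arange(n_particles) - mid)   # fixed initial density
msd = np.zeros(n_steps)

for _ in range(n_runs):
    x = x0.copy()
    start = x[mid]
    for t in range(n_steps):
        # Free Brownian steps for every particle...
        x += np.sqrt(2.0 * diffusion * dt) * rng.standard_normal(n_particles)
        # ...then sorting enforces the single-file (non-passing) order.
        x.sort()
        msd[t] += (x[mid] - start) ** 2

msd /= n_runs
times = dt * np.arange(1, n_steps + 1)
# The log-log slope should approach 1/2 at intermediate times.
slope = np.polyfit(np.log(times[50:]), np.log(msd[50:]), 1)[0]
print(f"fitted MSD exponent ~ {slope:.2f} (expected about 0.5)")

At very short times the tagged particle diffuses normally (exponent near 1); the 1/2 exponent emerges once the particle has collided with its neighbors, and at very long times finite-size effects eventually intervene.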
Wikipedia/File_dynamics
Size-exclusion chromatography, also known as molecular sieve chromatography, is a chromatographic method in which molecules in solution are separated by their size, and in some cases their molecular weight. It is usually applied to large molecules or macromolecular complexes such as proteins and industrial polymers. Typically, when an aqueous solution is used to transport the sample through the column, the technique is known as gel filtration chromatography, versus the name gel permeation chromatography, which is used when an organic solvent is used as a mobile phase. The chromatography column is packed with fine, porous beads which are commonly composed of dextran, agarose, or polyacrylamide polymers. The pore sizes of these beads are used to estimate the dimensions of macromolecules. SEC is a widely used polymer characterization method because of its ability to provide good molar mass distribution (Mw) results for polymers. Size-exclusion chromatography (SEC) is fundamentally different from other chromatographic techniques in that separation is based on a simple procedure of classifying molecule sizes rather than on any type of interaction. == Applications == The main application of size-exclusion chromatography is the fractionation of proteins and other water-soluble polymers, while gel permeation chromatography is used to analyze the molecular weight distribution of organic-soluble polymers. Neither technique should be confused with gel electrophoresis, where an electric field is used to "pull" molecules through the gel depending on their electrical charges. The amount of time a solute remains within a pore depends on the size of the pore: larger solutes have access to a smaller volume and vice versa, so a smaller solute remains within the pore for a longer period of time than a larger solute. Even though size-exclusion chromatography is widely utilized to study natural organic material, there are limitations. One of these limitations is that there is no standard molecular weight marker; thus, there is nothing against which to compare the results. If a precise molecular weight is required, other methods should be used. == Advantages == The advantages of this method include good separation of large molecules from small molecules with a minimal volume of eluate, and the fact that various solutions can be applied without interfering with the filtration process, all while preserving the biological activity of the particles to be separated. The technique is generally combined with others that further separate molecules by other characteristics, such as acidity, basicity, charge, and affinity for certain compounds. With size-exclusion chromatography, there are short and well-defined separation times and narrow bands, which lead to good sensitivity. There is also no sample loss because solutes do not interact with the stationary phase. Another advantage of this experimental method is that, in certain cases, it is feasible to determine the approximate molecular weight of a compound. The shape and size of the compound (eluent) determine how the compound interacts with the gel (stationary phase). To determine the approximate molecular weight, the elution volumes of compounds with their corresponding molecular weights are obtained, and then a plot of "Kav" vs "log(Mw)" is made, where {\displaystyle K_{av}=(V_{e}-V_{o})/(V_{t}-V_{o})} and Mw is the molecular mass.
This plot acts as a calibration curve, which is used to approximate the desired compound's molecular weight. The Ve component represents the volume at which the intermediate molecules elute such as molecules that have partial access to the beads of the column. In addition, Vt is the sum of the total volume between the beads and the volume within the beads. The Vo component represents the volume at which the larger molecules elute, which elute in the beginning. Disadvantages are, for example, that only a limited number of bands can be accommodated because the time scale of the chromatogram is short, and, in general, there must be a 10% difference in molecular mass to have a good resolution. == Discovery == The technique was invented in 1955 by Grant Henry Lathe and Colin R Ruthven, working at Queen Charlotte's Hospital, London. They later received the John Scott Award for this invention. While Lathe and Ruthven used starch gels as the matrix, Jerker Porath and Per Flodin later introduced dextran gels; other gels with size fractionation properties include agarose and polyacrylamide. A short review of these developments has appeared. There were also attempts to fractionate synthetic high polymers; however, it was not until 1964, when J. C. Moore of the Dow Chemical Company published his work on the preparation of gel permeation chromatography (GPC) columns based on cross-linked polystyrene with controlled pore size, that a rapid increase of research activity in this field began. It was recognized almost immediately that with proper calibration, GPC was capable to provide molar mass and molar mass distribution information for synthetic polymers. Because the latter information was difficult to obtain by other methods, GPC came rapidly into extensive use. == Theory and method == SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC works by trapping smaller molecules in the pores of the adsorbent ("stationary phase"). This process is usually performed within a column, which typically consists of a hollow tube tightly packed with micron-scale polymer beads containing pores of different sizes. These pores may be depressions on the surface or channels through the bead. As the solution travels down the column some particles enter into the pores. Larger particles cannot enter into as many pores. The larger the particles, the faster the elution. The larger molecules simply pass by the pores because those molecules are too large to enter the pores. Larger molecules therefore flow through the column more quickly than smaller molecules, that is, the smaller the molecule, the longer the retention time. One requirement for SEC is that the analyte does not interact with the surface of the stationary phases, with differences in elution time between analytes ideally being based solely on the solute volume the analytes can enter, rather than chemical or electrostatic interactions with the stationary phases. Thus, a small molecule that can penetrate every region of the stationary phase pore system can enter a total volume equal to the sum of the entire pore volume and the interparticle volume. This small molecule elutes late (after the molecule has penetrated all of the pore- and interparticle volume—approximately 80% of the column volume). 
At the other extreme, a very large molecule that cannot penetrate any the smaller pores can enter only the interparticle volume (~35% of the column volume) and elutes earlier when this volume of mobile phase has passed through the column. The underlying principle of SEC is that particles of different sizes elute (filter) through a stationary phase at different rates. This results in the separation of a solution of particles based on size. Provided that all the particles are loaded simultaneously or near-simultaneously, particles of the same size should elute together. However, as there are various measures of the size of a macromolecule (for instance, the radius of gyration and the hydrodynamic radius), a fundamental problem in the theory of SEC has been the choice of a proper molecular size parameter by which molecules of different kinds are separated. Experimentally, Benoit and co-workers found an excellent correlation between elution volume and a dynamically based molecular size, the hydrodynamic volume, for several different chain architecture and chemical compositions. The observed correlation based on the hydrodynamic volume became accepted as the basis of universal SEC calibration. Still, the use of the hydrodynamic volume, a size based on dynamical properties, in the interpretation of SEC data is not fully understood. This is because SEC is typically run under low flow rate conditions where hydrodynamic factor should have little effect on the separation. In fact, both theory and computer simulations assume a thermodynamic separation principle: the separation process is determined by the equilibrium distribution (partitioning) of solute macromolecules between two phases: a dilute bulk solution phase located at the interstitial space and confined solution phases within the pores of column packing material. Based on this theory, it has been shown that the relevant size parameter to the partitioning of polymers in pores is the mean span dimension (mean maximal projection onto a line). Although this issue has not been fully resolved, it is likely that the mean span dimension and the hydrodynamic volume are strongly correlated. Each size exclusion column has a range of molecular weights that can be separated. The exclusion limit defines the molecular weight at the upper end of the column 'working' range and is where molecules are too large to get trapped in the stationary phase. The lower end of the range is defined by the permeation limit, which defines the molecular weight of a molecule that is small enough to penetrate all pores of the stationary phase. All molecules below this molecular mass are so small that they elute as a single band. The filtered solution that is collected at the end is known as the eluate. The void volume includes any particles too large to enter the medium, and the solvent volume is known as the column volume. Following are the materials which are commonly used for porous gel beads in size exclusion chromatography == Factors affecting filtration == In real-life situations, particles in solution do not have a fixed size, resulting in the probability that a particle that would otherwise be hampered by a pore passing right by it. Also, the stationary-phase particles are not ideally defined; both particles and pores may vary in size. Elution curves, therefore, resemble Gaussian distributions. 
The stationary phase may also interact in undesirable ways with a particle and influence retention times, though great care is taken by column manufacturers to use stationary phases that are inert and minimize this issue. Like other forms of chromatography, increasing the column length enhances resolution, and increasing the column diameter increases column capacity. Proper column packing is important for maximum resolution: An over-packed column can collapse the pores in the beads, resulting in a loss of resolution. An under-packed column can reduce the relative surface area of the stationary phase accessible to smaller species, resulting in those species spending less time trapped in pores. Unlike affinity chromatography techniques, a solvent head at the top of the column can drastically diminish resolution as the sample diffuses prior to loading, broadening the downstream elution. == Analysis == In simple manual columns, the eluent is collected in constant volumes, known as fractions. The more similar the particles are in size the more likely they are in the same fraction and not detected separately. More advanced columns overcome this problem by constantly monitoring the eluent. The collected fractions are often examined by spectroscopic techniques to determine the concentration of the particles eluted. Common spectroscopy detection techniques are refractive index (RI) and ultraviolet (UV). When eluting spectroscopically similar species (such as during biological purification), other techniques may be necessary to identify the contents of each fraction. It is also possible to analyze the eluent flow continuously with RI, LALLS, Multi-Angle Laser Light Scattering MALS, UV, and/or viscosity measurements. The elution volume (Ve) decreases roughly linear with the logarithm of the molecular hydrodynamic volume. Columns are often calibrated using 4-5 standard samples (e.g., folded proteins of known molecular weight), and a sample containing a very large molecule such as thyroglobulin to determine the void volume. (Blue dextran is not recommended for Vo determination because it is heterogeneous and may give variable results) The elution volumes of the standards are divided by the elution volume of the thyroglobulin (Ve/Vo) and plotted against the log of the standards' molecular weights. == Applications == === Biochemical applications === In general, SEC is considered a low-resolution chromatography as it does not discern similar species very well, and is therefore often reserved for the final step of a purification. The technique can determine the quaternary structure of purified proteins that have slow exchange times, since it can be carried out under native solution conditions, preserving macromolecular interactions. SEC can also assay protein tertiary structure, as it measures the hydrodynamic volume (not molecular weight), allowing folded and unfolded versions of the same protein to be distinguished. For example, the apparent hydrodynamic radius of a typical protein domain might be 14 Å and 36 Å for the folded and unfolded forms, respectively. SEC allows the separation of these two forms, as the folded form elutes much later due to its smaller size. === Polymer synthesis === SEC can be used as a measure of both the size and the polydispersity of a synthesized polymer, that is, the ability to find the distribution of the sizes of polymer molecules. 
If standards of a known size are run previously, then a calibration curve can be created to determine the sizes of polymer molecules of interest in the solvent chosen for analysis (often THF). In alternative fashion, techniques such as light scattering and/or viscometry can be used online with SEC to yield absolute molecular weights that do not rely on calibration with standards of known molecular weight. Due to the difference in size of two polymers with identical molecular weights, the absolute determination methods are, in general, more desirable. A typical SEC system can quickly (in about half an hour) give polymer chemists information on the size and polydispersity of the sample. The preparative SEC can be used for polymer fractionation on an analytical scale. == Drawbacks == In SEC, mass is not measured so much as the hydrodynamic volume of the polymer molecules, that is, how much space a particular polymer molecule takes up when it is in solution. However, the approximate molecular weight can be calculated from SEC data because the exact relationship between molecular weight and hydrodynamic volume for polystyrene can be found. For this, polystyrene is used as a standard. But the relationship between hydrodynamic volume and molecular weight is not the same for all polymers, so only an approximate measurement can be obtained. Another drawback is the possibility of interaction between the stationary phase and the analyte. Any interaction leads to a later elution time and thus mimics a smaller analyte size. When performing this method, the bands of the eluting molecules may be broadened. This can occur by turbulence caused by the flow of the mobile phase molecules passing through the molecules of the stationary phase. In addition, molecular thermal diffusion and friction between the molecules of the glass walls and the molecules of the eluent contribute to the broadening of the bands. Besides broadening, the bands also overlap with each other. As a result, the eluent usually gets considerably diluted. A few precautions can be taken to prevent the likelihood of the bands broadening. For instance, one can apply the sample in a narrow, highly concentrated band on the top of the column. The more concentrated the eluent is, the more efficient the procedure would be. However, it is not always possible to concentrate the eluent, which can be considered as one more disadvantage. == Absolute size-exclusion chromatography == Absolute size-exclusion chromatography (ASEC) is a technique that couples a light scattering instrument, most commonly multi-angle light scattering (MALS) or another form of static light scattering (SLS), but possibly a dynamic light scattering (DLS) instrument, to a size-exclusion chromatography system for absolute molar mass and/or size measurements of proteins and macromolecules as they elute from the chromatography system. The definition of “absolute” in this case is that calibration of retention time on the column with a set of reference standards is not required to obtain molar mass or the hydrodynamic size, often referred to as hydrodynamic diameter (DH in units of nm). Non-ideal column interactions, such as electrostatic or hydrophobic surface interactions that modulate retention time relative to standards, do not impact the final result. 
Likewise, differences between conformation of the analyte and the standard have no effect on an absolute measurement; for example, with MALS analysis, the molar mass of inherently disordered proteins are characterized accurately even though they elute at much earlier times than globular proteins with the same molar mass, and the same is true of branched polymers which elute late compared to linear reference standards with the same molar mass. Another benefit of ASEC is that the molar mass and/or size is determined at each point in an eluting peak, and therefore indicates homogeneity or polydispersity within the peak. For example, SEC-MALS analysis of a monodisperse protein will show that the entire peak consists of molecules with the same molar mass, something that is not possible with standard SEC analysis. Determination of molar mass with SLS requires combining the light scattering measurements with concentration measurements. Therefore SEC-MALS typically includes the light scattering detector and either a differential refractometer or UV/Vis absorbance detector. In addition, MALS determines the rms radius Rg of molecules above a certain size limit, typically 10 nm. SEC-MALS can therefore analyze the conformation of polymers via the relationship of molar mass to Rg. For smaller molecules, either DLS or, more commonly, a differential viscometer is added to determine hydrodynamic radius and evaluate molecular conformation in the same manner. In SEC-DLS, the sizes of the macromolecules are measured as they elute into the flow cell of the DLS instrument from the size exclusion column set. The hydrodynamic size of the molecules or particles are measured and not their molecular weights. For proteins a Mark-Houwink type of calculation can be used to estimate the molecular weight from the hydrodynamic size. A major advantage of DLS coupled with SEC is the ability to obtain enhanced DLS resolution. Batch DLS is quick and simple and provides a direct measure of the average size, but the baseline resolution of DLS is a ratio of 3:1 in diameter. Using SEC, the proteins and protein oligomers are separated, allowing oligomeric resolution. Aggregation studies can also be done using ASEC. Though the aggregate concentration may not be calculated with light scattering (an online concentration detector such as that used in SEC-MALS for molar mass measurement also determines aggregate concentration), the size of the aggregate can be measured, only limited by the maximum size eluting from the SEC columns. Limitations of ASEC with DLS detection include flow-rate, concentration, and precision. Because a correlation function requires anywhere from 3–7 seconds to properly build, a limited number of data points can be collected across the peak. ASEC with SLS detection is not limited by flow rate and measurement time is essentially instantaneous, and the range of concentration is several orders of magnitude larger than for DLS. However, molar mass analysis with SEC-MALS does require accurate concentration measurements. MALS and DLS detectors are often combined in a single instrument for more comprehensive absolute analysis following separation by SEC. == See also == PEGylation Gel permeation chromatography Protein purification == References == == External links ==
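The calibration procedure described in the Advantages and Analysis sections above can be made concrete with a short numerical sketch: compute the partition coefficient Kav = (Ve − Vo)/(Vt − Vo) for a set of standards of known molecular weight, fit Kav against log(Mw), and invert the fit for an unknown. The standards, volumes, and column parameters below are invented for illustration and do not refer to any particular column or product.

import numpy as np

# Illustrative SEC calibration (hypothetical numbers throughout).
v_o = 8.0    # void volume [mL], e.g. from a very large marker such as thyroglobulin
v_t = 24.0   # total (column) volume [mL]

# Elution volumes [mL] and molecular weights [Da] of hypothetical protein standards.
standards_ve = np.array([10.5, 12.8, 15.2, 17.9, 20.1])
standards_mw = np.array([440e3, 158e3, 44e3, 17e3, 1.35e3])

# Partition coefficient Kav for each standard.
kav = (standards_ve - v_o) / (v_t - v_o)

# Linear calibration Kav = a * log10(Mw) + b, valid only inside the column's
# working range, i.e. between the exclusion and permeation limits.
a, b = np.polyfit(np.log10(standards_mw), kav, 1)

def estimate_mw(ve):
    """Estimate a molecular weight from an elution volume using the calibration."""
    k = (ve - v_o) / (v_t - v_o)
    return 10 ** ((k - b) / a)

print(f"unknown eluting at 14.0 mL -> approx. {estimate_mw(14.0):,.0f} Da")

As noted in the Drawbacks section, such a calibration yields only an approximate molecular weight, since elution actually tracks hydrodynamic volume rather than mass; absolute methods such as SEC-MALS avoid this limitation.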
Wikipedia/Size_exclusion_chromatography
Library and information science (LIS) are two interconnected disciplines that deal with information management. This includes organization, access, collection, and regulation of information, both in physical and digital forms. Library science and information science originated as separate disciplines; however, they fall within the same field of study. Library science can be viewed as applied information science: it is both an application and a subfield of information science. Due to this strong connection, the two terms are sometimes used synonymously. == Definition == Library science (previously termed library studies and library economy) is an interdisciplinary or multidisciplinary field that applies the practices, perspectives, and tools of management, information technology, education, and other areas to libraries; the collection, organization, preservation, and dissemination of information resources; and the political economy of information. Martin Schrettinger, a Bavarian librarian, coined the term for the discipline in his work Versuch eines vollständigen Lehrbuchs der Bibliothek-Wissenschaft oder Anleitung zur vollkommenen Geschäftsführung eines Bibliothekars (1808–1828). Rather than classifying information based on nature-oriented elements, as was previously done in his Bavarian library, Schrettinger organized books in alphabetical order. The first American school for library science was founded by Melvil Dewey at Columbia University in 1887. Historically, library science has also included archival science. This includes: how information resources are organized to serve the needs of selected user groups; how people interact with classification systems and technology; how information is acquired, evaluated and applied by people in and outside libraries as well as cross-culturally; how people are trained and educated for careers in libraries; the ethics that guide library service and organization; the legal status of libraries and information resources; and the applied science of computer technology used in documentation and records management. LIS should not be confused with information theory, the mathematical study of the concept of information. Library philosophy has been contrasted with library science as the study of the aims and justifications of librarianship as opposed to the development and refinement of techniques. == Education and training == Academic courses in library science include collection management, information systems and technology, research methods, user studies, information literacy, cataloging and classification, preservation, reference, statistics and management. Library science is constantly evolving, incorporating new topics like database management, information architecture and information management, among others. With the mounting acceptance of Wikipedia as a valued and reliable reference source, many libraries, museums, and archives have introduced the role of Wikipedian in residence. As a result, some universities are including coursework relating to Wikipedia and Knowledge Management in their MLIS programs. Becoming a library staff member does not always require a degree, and in some contexts the difference between being a library staff member and a librarian is the level of education. Most professional library jobs require a professional degree in library science or equivalent. In the United States and Canada, the certification usually comes from a master's degree granted by an ALA-accredited institution. 
In Australia, a number of institutions offer degrees accepted by the ALIA (Australian Library and Information Association). Global standards of accreditation or certification in librarianship have yet to be developed. === United States and Canada === The Master of Library and Information Science (MLIS) is the master's degree that is required for most professional librarian positions in the United States and Canada. The MLIS was created after the older Master of Library Science (MLS) was reformed to reflect the information science and technology needs of the field. According to the American Library Association (ALA), "ALA-accredited degrees have [had] various names such as Master of Arts, Master of Librarianship, Master of Library and Information Studies, or Master of Science. The degree name is determined by the program. The [ALA] Committee for Accreditation evaluates programs based on their adherence to the Standards for Accreditation of Master's Programs in Library and Information Studies, not based on the name of the degree." == Types of librarianship == === Public === The study of librarianship for public libraries covers issues such as cataloging; collection development for a diverse community; information literacy; readers' advisory; community standards; public services-focused librarianship via community-centered programming; serving a diverse community of adults, children, and teens; intellectual freedom; censorship; and legal and budgeting issues. The public library as a commons or public sphere based on the work of Jürgen Habermas has become a central metaphor in the 21st century. In the United States there are four different types of public libraries: association libraries, municipal public libraries, school district libraries, and special district public libraries. Each receives funding through different sources, each is established by a different set of voters, and not all are subject to municipal civil service governance. === School === The study of school librarianship covers library services for children in Nursery, primary through secondary school. In some regions, the local government may have stricter standards for the education and certification of school librarians (who are sometimes considered a special case of teacher), than for other librarians, and the educational program will include those local criteria. School librarianship may also include issues of intellectual freedom, pedagogy, information literacy, and how to build a cooperative curriculum with the teaching staff. === Academic === The study of academic librarianship covers library services for colleges and universities. Issues of special importance to the field may include copyright; technology; digital libraries and digital repositories; academic freedom; open access to scholarly works; and specialized knowledge of subject areas important to the institution and the relevant reference works. Librarians often divide focus individually as liaisons on particular schools within a college or university. Academic librarians may be subject specific librarians. Some academic librarians are considered faculty, and hold similar academic ranks to those of professors, while others are not. In either case, the minimal qualification is a Master of Arts in Library Studies or a Master of Arts in Library Science. Some academic libraries may only require a master's degree in a specific academic field or a related field, such as educational technology. 
=== Archival === The study of archives includes the training of archivists, librarians specially trained to maintain and build archives of records intended for historical preservation. Special issues include physical preservation, conservation, and restoration of materials and mass deacidification; specialist catalogs; solo work; access; and appraisal. Many archivists are also trained historians specializing in the period covered by the archive. There have been attempts to revive the concept of documentation and to speak of Library, information and documentation studies (or science). The archival mission includes three major goals: to identify papers and records with enduring value, to preserve the identified papers, and to make them available to others. While libraries receive items individually, archival items will usually become part of the archive's collection as a cohesive group. A major difference between the two types of collections is that library collections typically comprise published items (books, magazines, etc.), while archival collections are usually unpublished works (letters, diaries, etc.). Library collections are created by many individuals, as each author and illustrator creates their own publication; in contrast, an archive usually collects the records of one person, family, institution, or organization, so the archival items will come from fewer sources. Behavior in an archive differs from behavior in other libraries. In most libraries, items are openly available to the public. Archival items almost never circulate, and someone interested in viewing documents must request them of the archivist and may only be able to view them in a closed reading room. === Special === Special libraries are libraries established to meet the highly specialized requirements of professional or business groups. A library is considered special on the basis of its specialized collection, its special subject, the particular group of users it serves, or the type of parent organization it belongs to; examples include medical libraries and law libraries. The issues at these libraries are specific to their industries but may include solo work, corporate financing, specialized collection development, and extensive self-promotion to potential patrons. Special librarians have their own professional organization, the Special Libraries Association (SLA). Some special libraries, such as the CIA Library, may contain classified works. The CIA Library is a resource for employees of the Central Intelligence Agency; it contains over 125,000 written materials, subscribes to around 1,700 periodicals, and holds collections in three areas: Historical Intelligence, Circulating, and Reference. In February 1997, three librarians working at the institution spoke to Information Outlook, a publication of the SLA, revealing that the library had been created in 1947, describing its importance in disseminating information to employees even with a small staff, and explaining how the library organizes its materials. === Preservation === Preservation librarians most often work in academic libraries. Their focus is on the management of preservation activities that seek to maintain access to content within books, manuscripts, archival materials, and other library resources. Examples of activities managed by preservation librarians include binding, conservation, digital and analog reformatting, digital preservation, and environmental monitoring. 
== History == Libraries have existed for many centuries, but library science is a more recent phenomenon, as early libraries were managed primarily by academics. === 17th and 18th century === The earliest text on "library operations", Advice on Establishing a Library, was published in 1627 by French librarian and scholar Gabriel Naudé. Naudé wrote on many subjects including politics, religion, history, and the supernatural. He put into practice all the ideas put forth in Advice when given the opportunity to build and maintain the library of Cardinal Jules Mazarin. During the 'golden age of libraries' in the 17th century, publishers and sellers seeking to take advantage of the burgeoning book trade developed descriptive catalogs of their wares for distribution – a practice that was adopted and further extended by many libraries of the time to cover areas like philosophy, sciences, linguistics, and medicine. In 1726, Gottfried Wilhelm Leibniz wrote Idea of Arranging a Narrower Library. === 19th century === Martin Schrettinger wrote the second textbook (the first in Germany) on the subject from 1808 to 1829. Some of the main tools used by LIS to provide access to the resources originated in the 19th century to make information accessible by recording, identifying, and providing bibliographic control of printed knowledge. The origins of some of these tools were even earlier. Thomas Jefferson, whose library at Monticello consisted of thousands of books, devised a classification system inspired by the Baconian method, which grouped books more or less by subject rather than alphabetically, as was previously done. The Jefferson collection provided the start of what became the Library of Congress. The first American school of librarianship opened in New York under the leadership of Melvil Dewey, noted for his 1876 decimal classification, on January 5, 1887, as the Columbia College School of Library Economy. The term library economy was common in the U.S. until 1942, with the term library science predominant through much of the 20th century. === 20th century === In the English-speaking world, the term "library science" seems to have been used for the first time in India in the 1916 book Punjab Library Primer, written by Asa Don Dickinson and published by the University of Punjab, Lahore, Pakistan. This university was the first in Asia to begin teaching "library science". The Punjab Library Primer was the first textbook on library science published in English anywhere in the world. The first textbook in the United States was the Manual of Library Economy by James Duff Brown, published in 1903. Later, the term was used in the title of S. R. Ranganathan's The Five Laws of Library Science, published in 1931, which contains Ranganathan's titular theory. Ranganathan is also credited with the development of the first major analytical-synthetic classification system, the colon classification. In the United States, Lee Pierce Butler published his 1933 book An Introduction to Library Science (University of Chicago Press), where he advocated for research using quantitative methods and ideas in the social sciences with the aim of using librarianship to address society's information needs. He was one of the first faculty at the University of Chicago Graduate Library School, which changed the structure and focus of education for librarianship in the twentieth century. 
This research agenda went against the more procedure-based approach of the "library economy", which was mostly confined to practical problems in the administration of libraries. In 1923, Charles C. Williamson, who was appointed by the Carnegie Corporation, published an assessment of library science education entitled "The Williamson Report", which recommended that universities provide library science training. This report had a significant impact on library science training and education. Library research and practical work, in the area of information science, have remained largely distinct both in training and in research interests. William Stetson Merrill's A Code for Classifiers, released in several editions from 1914 to 1939, is an example of a more pragmatic approach, where arguments stemming from in-depth knowledge about each field of study are employed to recommend a system of classification. While Ranganathan's approach was philosophical, it was also tied more to the day-to-day business of running a library. A reworking of Ranganathan's laws was published in 1995 that removes the constant references to books. Michael Gorman's Our Enduring Values: Librarianship in the 21st Century features eight principles necessary for library professionals and incorporates knowledge and information in all their forms, allowing for digital information to be considered. ==== From library science to LIS ==== By the late 1960s, mainly due to the meteoric rise of computing power and the new academic disciplines formed therefrom, academic institutions began to add the term "information science" to their names. The first school to do this was at the University of Pittsburgh in 1964. More schools followed during the 1970s and 1980s. By the 1990s almost all library schools in the US had added information science to their names. Although there are exceptions, similar developments have taken place in other parts of the world. In India, the Department of Library Science, University of Madras (in the southern Indian state of Tamil Nadu) became the Department of Library and Information Science in 1976. In Denmark, for example, the "Royal School of Librarianship" changed its English name to The Royal School of Library and Information Science in 1997. === 21st century === The digital age has transformed how information is accessed and retrieved. "The library is now a part of a complex and dynamic educational, recreational, and informational infrastructure." Mobile devices and applications with wireless networking, high-speed computers and networks, and the computing cloud have deeply impacted and developed information science and information services. The evolution of the library sciences maintains its mission of access equity and community space, as well as the new means for information retrieval known as information literacy skills. All catalogs, databases, and a growing number of books are available on the Internet. In addition, the expanding free access to open access journals and sources such as Wikipedia has fundamentally impacted how information is accessed. 
Information literacy is the ability to "determine the extent of information needed, access the needed information effectively and efficiently, evaluate information and its sources critically, incorporate selected information into one's knowledge base, use information effectively to accomplish a specific purpose, and understand the economic, legal, and social issues surrounding the use of information, and access and use information ethically and legally." In recent years, the concept of data literacy has emerged within library and information science as a complement to information literacy to refer to the ability to find, interpret, evaluate, manage, and ethically use data to support research, learning, and informed decision-making. In the early 2000s, dLIST, the Digital Library for Information Sciences and Technology, was established. It was the first open access archive for the multidisciplinary 'library and information sciences', building a global scholarly communication consortium and the LIS Commons in order to increase the visibility of research literature, bridge the divide between the practice, teaching, and research communities, improve visibility and reduce uncitedness, and integrate scholarly work into the critical information infrastructures of archives, libraries, and museums. Social justice, an important ethical value in librarianship, has in the 21st century become an important research area, if not a subdiscipline, of LIS. == Journals == Some core journals in LIS are: Annual Review of Information Science and Technology (ARIST) (1966–2011); El Profesional de la Información (EPI) (1992–) (formerly Information World en Español); Information Processing and Management; Information Research: An International Electronic Journal (IR) (1995–); Italian Journal of Library and Information Studies (JLIS.it); Journal of Documentation (JDoc) (1945–); Journal of Information Science (JIS) (1979–); Journal of the Association for Information Science and Technology (formerly Journal of the American Society for Information Science and Technology) (JASIST) (1950–); Knowledge Organization; Library Literature and Information Science Retrospective; Library Trends (1952–); Scientometrics (1978–); The Library Quarterly (LQ) (1931–); and Grandhalaya Sarvaswam (1915–). Important bibliographical databases in LIS are, among others, Social Sciences Citation Index and Library and Information Science Abstracts. == Conferences == This is a list of some of the major conferences in the field: annual meetings of the American Library Association; the annual meeting of the American Society for Information Science and Technology; the annual meeting of the Association for Library and Information Science Education; Conceptions of Library and Information Science; the i-Schools' iConferences; The International Federation of Library Associations and Institutions (IFLA): World Library and Information Congress; and the African Library and Information Associations and Institutions (AfLIA) Conference. == Subfields == Information science grew out of documentation science and therefore has a tradition of considering scientific and scholarly communication, bibliographic databases, subject knowledge and terminology, etc. 
An advertisement for a full professor in information science at the Royal School of Library and Information Science, spring 2011, provides one view of which sub-disciplines are well-established: "The research and teaching/supervision must be within some (and at least one) of these well-established information science areas." A curriculum study by Kajberg & Lørring in 2005 reported a "degree of overlap of the ten curricular themes with subject areas in the current curricula of responding LIS schools": Information seeking and Information retrieval, 100%; Library management and promotion, 96%; Knowledge management, 86%; Knowledge organization, 82%; Information literacy and learning, 76%; Library and society in a historical perspective (Library history), 66%; The Information society: Barriers to the free access to information, 64%; Cultural heritage and digitisation of the cultural heritage (Digital preservation), 62%; The library in the multi-cultural information society: International and intercultural communication, 42%; Mediation of culture in a special European context, 26%. There is often an overlap between these subfields of LIS and other fields of study. Most information retrieval research, for example, belongs to computer science. Knowledge management is considered a subfield of management or organizational studies. === Metadata === Pre-Internet classification systems and cataloging systems were mainly concerned with two objectives: to provide rich bibliographic descriptions and relations between information objects, and to facilitate sharing of this bibliographic information across library boundaries. The development of the Internet and the information explosion that followed found many communities needing mechanisms for the description, authentication and management of their information. These communities developed taxonomies and controlled vocabularies to describe their knowledge, as well as unique information architectures to communicate these classifications, and libraries found themselves acting as liaisons or translators between these metadata systems. The concerns of cataloging in the Internet era have gone beyond simple bibliographic description: the need for descriptive information about the ownership and copyright of a digital product – a publishing concern – and for description of the different formats and accessibility features of a resource – a sociological concern – shows the continued development and cross-discipline necessity of resource description. In the 21st century, the usage of open data, open source and open protocols like OAI-PMH has allowed thousands of libraries and institutions to collaborate on the production of global metadata services previously offered only by increasingly expensive commercial proprietary products. Tools like BASE and Unpaywall automate the search of an academic paper across thousands of repositories by libraries and research institutions. === Knowledge organization === Library science is very closely related to issues of knowledge organization; however, the latter is a broader term that covers how knowledge is represented and stored (computer science/linguistics), how it might be automatically processed (artificial intelligence), and how it is organized outside the library in global systems such as the internet. 
In addition, library science typically refers to a specific community engaged in managing holdings as they are found in university and government libraries, while knowledge organization, in general, refers to this and also to other communities (such as publishers) and other systems (such as the Internet). The library system is thus one socio-technical structure for knowledge organization. The terms 'information organization' and 'knowledge organization' are often used synonymously.: 106  The fundamentals of their study - particularly theory relating to indexing and classification - and many of the main tools used by the disciplines in modern times to provide access to digital resources such as abstracting, metadata, resource description, systematic and alphabetic subject description, and terminology, originated in the 19th century and were developed, in part, to assist in making humanity's intellectual output accessible by recording, identifying, and providing bibliographic control of printed knowledge.: 105  Information has been published that analyses the relations between the philosophy of information (PI), library and information science (LIS), and social epistemology (SE). === Ethics === Practicing library professionals and members of the American Library Association recognize and abide by the ALA Code of Ethics. According to the American Library Association, "In a political system grounded in an informed citizenry, we are members of a profession explicitly committed to intellectual freedom and freedom of access to information. We have a special obligation to ensure the free flow of information and ideas to present and future generations." The ALA Code of Ethics was adopted in the winter of 1939, and updated on June 29, 2021. == See also == == Notes == == References == == Further reading == Åström, Fredrik (September 5, 2008). "Formalizing a discipline: The institutionalization of library and information science research in the Nordic countries". Journal of Documentation. 64 (5): 721–737. doi:10.1108/00220410810899736. Bawden, David; Robinson, Lyn (August 20, 2012). Introduction to Information Science. American Library Association. ISBN 978-1555708610. Hjørland, Birger (2000). "Library and information science: practice, theory, and philosophical basis". Information Processing & Management. 36 (3): 501–531. doi:10.1016/S0306-4573(99)00038-2. Järvelin, Kalervo; Vakkari, Pertti (January 1993). "The evolution of library and information science 1965–1985: A content analysis of journal articles". Information Processing & Management. 29 (1): 129–144. doi:10.1016/0306-4573(93)90028-C. McNicol, Sarah (March 2003). "LIS: the interdisciplinary research landscape". Journal of Librarianship and Information Science. 35 (1): 23–30. doi:10.1177/096100060303500103. S2CID 220912521. Dick, Archie L. (1995). "Library and Information Science as a Social Science: Neutral and Normative Conceptions". The Library Quarterly: Information, Community, Policy. 65 (2): 216–235. doi:10.1086/602777. JSTOR 4309022. S2CID 142825177. Foundational Books in Library Services.1976-2024. LHRT News & Notes. October, 2024. International Journal of Library Science (ISSN 0975-7546) Lafontaine, Gerard S. (1958). Dictionary of Terms Used in the Paper, Printing, and Allied Industries. Toronto: H. Smith Paper Mills. 110 p. The Oxford Guide to Library Research (2005) – ISBN 0195189981 Taşkın, Zehra (2021). "Forecasting the future of library and information science and its sub-fields". Scientometrics. 126 (2): 1527–1551. 
doi:10.1007/s11192-020-03800-2. PMC 7745590. PMID 33353991. Thompson, Elizabeth H. (1943). A.L.A. Glossary of Library Terms, with a Selection of Terms in Related Fields, prepared under the direction of the Committee on Library Terminology of the American Library Association. Chicago, Ill.: American Library Association. viii, 189 p. ISBN 978-0838900000 V-LIB 1.2 (2008 Vartavan Library Classification, over 700 fields of sciences & arts classified according to a relational philosophy, currently sold under license in the UK by Rosecastle Ltd. (see Vartavan-Frame) == External links == Media related to Library and information science at Wikimedia Commons LISNews.org – librarian and information science news LISWire.com – librarian and information science wire
Wikipedia/Library_and_information_science
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer,: 126  the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement. Problems that are undecidable using classical computers remain undecidable using quantum computers.: 127  What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (see Quantum supremacy). The best-known algorithms are Shor's algorithm for factoring and Grover's algorithm for searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the most efficient known classical algorithm for factoring, the general number field sieve. Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task, a linear search. == Overview == Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model. Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems. == Algorithms based on the quantum Fourier transform == The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates. === Deutsch–Jozsa algorithm === The Deutsch–Jozsa algorithm solves a black-box problem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with small probability of error. 
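A minimal classical statevector simulation of the Deutsch–Jozsa circuit is sketched below; the promise on the oracle function (constant or balanced) is spelled out in the next paragraph. This is only an illustration of the circuit's logic: simulating it classically takes memory exponential in n and confers none of the single-query advantage, which is a property of the quantum oracle model rather than of this sketch.

```python
import numpy as np
from itertools import product

def deutsch_jozsa(f, n):
    """Statevector simulation of Deutsch-Jozsa for an n-bit oracle f that is
    promised to be either constant or balanced."""
    dim = 2 ** n
    # Uniform superposition over all n-bit inputs (the initial layer of Hadamards).
    state = np.full(dim, 1 / np.sqrt(dim))
    # Phase oracle: |x> -> (-1)^f(x) |x>.
    for i, bits in enumerate(product([0, 1], repeat=n)):
        state[i] *= (-1) ** f(bits)
    # Final Hadamard transform on every qubit.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = Hn @ state
    # The probability of measuring |0...0> is 1 for a constant f and 0 for a balanced f.
    return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

print(deutsch_jozsa(lambda bits: 0, 3))        # constant oracle  -> 'constant'
print(deutsch_jozsa(lambda bits: bits[0], 3))  # balanced oracle  -> 'balanced'
```

A constant oracle leaves all of the amplitude on the all-zeros state, while a balanced oracle leaves none, which is exactly what the final measurement distinguishes.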
The algorithm determines whether a function f is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half). === Bernstein–Vazirani algorithm === The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create an oracle separation between BQP and BPP. === Simon's algorithm === Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms. This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation for Shor's algorithm for factoring. === Quantum phase estimation algorithm === The quantum phase estimation algorithm is used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms. === Shor's algorithm === Shor's algorithm solves the discrete logarithm problem and the integer factorization problem in polynomial time, whereas the best known classical algorithms take super-polynomial time. It is unknown whether these problems are in P or NP-complete. It is also one of the few quantum algorithms that solves a non-black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time. === Hidden subgroup problem === The abelian hidden subgroup problem is a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solving Pell's equation, testing the principal ideal of a ring R and factoring. There are efficient quantum algorithms known for the abelian hidden subgroup problem. The more general hidden subgroup problem, where the group is not necessarily abelian, is a generalization of the previously mentioned problems, as well as graph isomorphism and certain lattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for the symmetric group, which would give an efficient algorithm for graph isomorphism, or for the dihedral group, which would solve certain lattice problems. === Estimating Gauss sums === A Gauss sum is a type of exponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely. However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time. === Fourier fishing and Fourier checking === Consider an oracle consisting of n random Boolean functions mapping n-bit strings to a Boolean value, with the goal of finding n n-bit strings $z_1, \ldots, z_n$ such that, for the Hadamard-Fourier transform, at least 3/4 of the strings satisfy $|\tilde{f}(z_i)| \geqslant 1$ and at least 1/4 satisfy $|\tilde{f}(z_i)| \geqslant 2$. This can be done in bounded-error quantum polynomial time (BQP). == Algorithms based on amplitude amplification == Amplitude amplification is a technique that allows the amplification of a chosen subspace of a quantum state. 
Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered as a generalization of Grover's algorithm. === Grover's algorithm === Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using only $O(\sqrt{N})$ queries instead of the $O(N)$ queries required classically. Classically, $O(N)$ queries are required even allowing bounded-error probabilistic algorithms. Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables in Bohmian mechanics. (Such a computer is completely hypothetical and would not be a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at most $O(\sqrt[3]{N})$ steps. This is slightly faster than the $O(\sqrt{N})$ steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solve NP-complete problems in polynomial time. === Quantum counting === Quantum counting solves a generalization of the search problem. It solves the problem of counting the number of marked entries in an unordered list, instead of just detecting whether one exists. Specifically, it counts the number of marked entries in an $N$-element list with an error of at most $\varepsilon$ by making only $\Theta(\varepsilon^{-1}\sqrt{N/k})$ queries, where $k$ is the number of marked elements in the list. More precisely, the algorithm outputs an estimate $k'$ for $k$, the number of marked entries, with accuracy $|k - k'| \leq \varepsilon k$. == Algorithms based on quantum walks == A quantum walk is the quantum analogue of a classical random walk. A classical random walk can be described by a probability distribution over some states, while a quantum walk can be described by a quantum superposition over states. Quantum walks are known to give exponential speedups for some black-box problems. They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool. === Boson sampling problem === The Boson Sampling Problem in an experimental configuration assumes an input of bosons (e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a defined unitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk. The problem is then to produce a fair sample of the probability distribution of the output that depends on the input arrangement of bosons and the unitarity. Solving this problem with a classical computer algorithm requires computing the permanent of the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computable linear optical network and that sampling of the output probability distribution would be demonstrably superior using quantum algorithms. 
In 2015, an investigation predicted that the sampling problem had similar complexity for inputs other than Fock-state photons and identified a transition in computational complexity from classically simulable to just as hard as the Boson Sampling Problem, depending on the size of coherent amplitude inputs. === Element distinctness problem === The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically, $\Omega(N)$ queries are required for a list of size $N$; however, it can be solved in $\Theta(N^{2/3})$ queries on a quantum computer. The optimal algorithm was put forth by Andris Ambainis, and Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended that work to obtain the lower bound for all functions. === Triangle-finding problem === The triangle-finding problem is the problem of determining whether a given graph contains a triangle (a clique of size 3). The best-known lower bound for quantum algorithms is $\Omega(N)$, but the best algorithm known requires $O(N^{1.297})$ queries, an improvement over the previous best $O(N^{1.3})$ queries. === Formula evaluation === A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input. A well-studied formula is the balanced binary tree with only NAND gates. This type of formula requires $\Theta(N^c)$ queries using randomness, where $c = \log_2(1+\sqrt{33})/4 \approx 0.754$. With a quantum algorithm, however, it can be solved in $\Theta(N^{1/2})$ queries. No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model. The same result for the standard setting soon followed. Fast quantum algorithms for more complicated formulas are also known. === Group commutativity === The problem is to determine if a black-box group, given by k generators, is commutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, which is the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities are $\Theta(k^2)$ and $\Theta(k)$, respectively. A quantum algorithm requires $\Omega(k^{2/3})$ queries, while the best known quantum algorithm uses $O(k^{2/3}\log k)$ queries. == BQP-complete problems == The complexity class BQP (bounded-error quantum polynomial time) is the set of decision problems solvable by a quantum computer in polynomial time with error probability of at most 1/3 for all instances. It is the quantum analogue to the classical complexity class BPP. A problem is BQP-complete if it is in BQP and any problem in BQP can be reduced to it in polynomial time. Informally, the class of BQP-complete problems consists of those that are as hard as the hardest problems in BQP and are themselves efficiently solvable by a quantum computer (with bounded error). 
=== Computing knot invariants === Witten had shown that the Chern-Simons topological quantum field theory (TQFT) can be solved in terms of Jones polynomials. A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial, which, as far as we know, is hard to compute classically in the worst-case scenario. === Quantum simulation === The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves." Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both bosonic and fermionic systems, as well as the simulation of chemical reactions beyond the capabilities of current classical supercomputers using only a few hundred qubits. Quantum computers can also efficiently simulate topological quantum field theories. In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as Jones and HOMFLY polynomials, and the Turaev-Viro invariant of three-dimensional manifolds. === Solving a linear system of equations === In 2009, Aram Harrow, Avinatan Hassidim, and Seth Lloyd formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations. Provided that the linear system is sparse and has a low condition number $\kappa$, and that the user is interested in the result of a scalar measurement on the solution vector (instead of the values of the solution vector itself), the algorithm has a runtime of $O(\log(N)\kappa^2)$, where $N$ is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in $O(N\kappa)$ (or $O(N\sqrt{\kappa})$ for positive semidefinite matrices). == Hybrid quantum/classical algorithms == Hybrid quantum/classical algorithms combine quantum state preparation and measurement with classical optimization. These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator. === QAOA === The quantum approximate optimization algorithm takes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory. The algorithm makes use of classical optimization of quantum operations to maximize an "objective function." === Variational quantum eigensolver === The variational quantum eigensolver (VQE) algorithm applies classical optimization to minimize the energy expectation value of an ansatz state to find the ground state of a Hermitian operator, such as a molecule's Hamiltonian. It can also be extended to find excited energies of molecular Hamiltonians. 
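The variational loop can be illustrated with a small classical statevector sketch. The two-qubit Hamiltonian and the hardware-efficient-style ansatz below are invented for the example and do not correspond to any real molecule; in an actual VQE run the energy would be estimated from repeated measurements on quantum hardware rather than from a statevector.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices and a toy two-qubit Hamiltonian (illustrative only):
# H = 0.5*Z(x)I + 0.5*I(x)Z + 0.25*X(x)X
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * np.kron(Z, I2) + 0.5 * np.kron(I2, Z) + 0.25 * np.kron(X, X)

def ansatz_state(theta):
    """RY rotation on each qubit followed by an entangling CNOT, applied to |00>."""
    t0, t1 = theta
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    psi0 = np.zeros(4, dtype=complex)
    psi0[0] = 1.0                                  # |00>
    return cnot @ np.kron(ry(t0), ry(t1)) @ psi0

def energy(theta):
    """The VQE objective: the energy expectation value <psi(theta)|H|psi(theta)>."""
    psi = ansatz_state(theta)
    return np.real(np.vdot(psi, H @ psi))

# A classical optimizer proposes parameters; the "quantum" step evaluates the energy.
result = minimize(energy, x0=[0.1, 0.1], method="COBYLA")
print("VQE estimate:", result.fun)
print("Exact ground-state energy (diagonalization):", np.linalg.eigvalsh(H)[0])
```

The division of labor shown here is the essential point: the (simulated) quantum device only evaluates the objective for a given set of parameters, while the classical optimizer decides which parameters to try next.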
=== Contracted quantum eigensolver === The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and two-electron reduced density matrix of a molecule. It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation. == See also == Quantum machine learning Quantum optimization algorithms Quantum sort Primality test == References == == External links == The Quantum Algorithm Zoo: A comprehensive list of quantum algorithms that provide a speedup over the fastest known classical algorithms. Andrew Childs' lecture notes on quantum algorithms The Quantum search algorithm - brute force Archived 1 September 2018 at the Wayback Machine. === Surveys === Dalzell, Alexander M.; et al. (2023). "Quantum algorithms: A survey of applications and end-to-end complexities". arXiv:2310.03011 [quant-ph]. Smith, J.; Mosca, M. (2012). "Algorithms for Quantum Computers". Handbook of Natural Computing. pp. 1451–1492. arXiv:1001.0767. doi:10.1007/978-3-540-92910-9_43. ISBN 978-3-540-92909-3. S2CID 16565723. Childs, A. M.; Van Dam, W. (2010). "Quantum algorithms for algebraic problems". Reviews of Modern Physics. 82 (1): 1–52. arXiv:0812.0380. Bibcode:2010RvMP...82....1C. doi:10.1103/RevModPhys.82.1. S2CID 119261679.
Wikipedia/Quantum_algorithms
In conservation, library and archival science, preservation is a set of preventive conservation activities aimed at prolonging the life of a record, book, or object while making as few changes as possible. Preservation activities vary widely and may include monitoring the condition of items, maintaining the temperature and humidity in collection storage areas, writing a plan in case of emergencies, digitizing items, writing relevant metadata, and increasing accessibility. Preservation, in this definition, is practiced in a library or an archive by a conservator, librarian, archivist, or other professional when they perceive a collection or record is in need of maintenance. Preservation should be distinguished from interventive conservation and restoration, which refers to the treatment and repair of individual items to slow the process of decay, or restore them to a usable state. "Preventive conservation" is used interchangeably with "preservation". == Fundamentals == === Standard functions of preservation programs === Risk management in collections is the preventive care of a collection as a whole. Preservation can include general collections maintenance activities such as security, environmental monitoring, preservation surveys, and more specialized activities such as mass deacidification. Disaster preparedness (RT: Disaster Plan / Business Continuation / Disaster Recovery / Disaster Mitigation Plan) is the practice of arranging for the necessary resources and planning the best course of action to prevent or minimize damage to a collection in the event of a disaster of any level of magnitude, whether natural or human-made. Digital preservation is the maintenance of digitally stored information. Some means of digital preservation include refreshing, migration, replication and emulation. This should not be confused with digitization, which is a process of creating digital information which must then itself be preserved digitally. Reformatting is the practice of creating copies of an object in another type of data-storage device. Reformatting processes include microfilming and digitization. === Media-specific issues and treatments === Books - Sizing and Leather Binding Ephemera and Realia Paper - Acid-free paper, Japanese tissue, Mummy paper, Paper splitting, and Print permanence Parchment - Parchment repair and Preservation of Illuminated Manuscripts Moving image - Film preservation and Video recording Sound recording - Preservation of magnetic audiotape Oral history preservation Language Preservation Visual material - Color photography § Preservation issues and Architectural reprography, a variety of technologies and media used to make multiple copies of original drawings or records created by architects, engineers, mapmakers and related professionals. Optical media preservation Ink === Digitization === A relatively new concept, digitization, has been hailed as a way to preserve historical items for future use. "Digitizing refers to the process of converting analog materials into digital form." For manuscripts, digitization is achieved through scanning an item and saving it to a digital format. For example, the Google Book Search program has partnered with over forty libraries around the world to digitize books. The goal of this library partnership project is to "make it easier for people to find relevant books – specifically, books they wouldn't find any other way such as those that are out of print – while carefully respecting authors' and publishers' copyrights." 
Although digitization seems to be a promising area for future preservation, there are also problems. The main problems are that digital space costs money, media and file formats may become obsolete, and backwards compatibility is not guaranteed. Higher-quality images take a longer time to scan, but are often more valuable for future use. Fragile items are often more difficult or more expensive to scan, which creates a selection problem for preservationists, who must decide whether digital access in the future is worth potentially damaging the item during the scanning process. Other problems include scan quality, redundancy of digitized books among different libraries, and copyright law. However, many of these problems are being addressed through educational initiatives. Educational programs are tailoring themselves to fit preservation needs and help new students understand preservation practices. Programs teaching graduate students about digital librarianship are especially important. Groups such as the Digital Preservation Network strive to ensure that "the complete scholarly record is preserved for future generations". The Library of Congress maintains a Sustainability of Digital Formats web site that educates institutions on various aspects of preservation: most notably, on approximately 200 digital format types and which are most likely to last into the future. Digital preservation, the term more commonly used in archival courses, is sometimes treated as synonymous with digitization, although the two are distinct. The main goal of digital preservation is to guarantee that people will have access to the digitally preserved materials long into the future. == Practices == When practicing preservation, one has several factors to consider in order to properly preserve a record: 1) the storage environment of the record, 2) the criteria to determine when preservation is necessary, 3) what the standard preservation practices are for that particular institution, 4) research and testing, and 5) whether any vendor services will be needed for further preservation and potentially conservation. === Storage environment === Environmental controls are necessary to facilitate the preservation of organic materials and are especially important to monitor in rare and special collections. Key environmental factors to watch include temperature, relative humidity, pests, pollutants, and light exposure. In general, the lower the temperature is, the better it is for the collection. However, since books and other materials are often housed in areas with people, a compromise must be struck to accommodate human comfort. A reasonable temperature to accomplish both goals is 65–68 °F (18–20 °C); however, if possible, film and photography collections should be kept in a segregated area at 55 °F (13 °C). Books and other materials take up and give off moisture, making them sensitive to relative humidity. Very high humidity encourages mold growth and insect infestations. Low humidity causes materials to lose their flexibility. Fluctuations in relative humidity are more damaging than a constant humidity in the middle or low range. Generally, the relative humidity should be between 30% and 50%, with as little variation as possible; however, recommendations on specific levels to maintain vary depending on the type of material (e.g., paper-based or film). A specialized dew point calculator for book preservation is available. 
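As a rough illustration of the kind of calculation such a tool performs, the dew point can be approximated from a temperature and relative-humidity reading with the Magnus formula. The sketch below uses the commonly cited Magnus–Tetens constants and the general target ranges quoted above; it is not a substitute for a purpose-built preservation calculator, and institution-specific targets will differ by material type.

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point in degrees C from air temperature and relative humidity,
    using the Magnus-Tetens approximation."""
    a, b = 17.27, 237.7
    alpha = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return (b * alpha) / (a - alpha)

def check_storage(temp_c, rh_percent):
    """Flag readings outside the general targets quoted above
    (18-20 degrees C and 30-50% RH for mixed paper collections)."""
    warnings = []
    if not 18 <= temp_c <= 20:
        warnings.append("temperature outside 18-20 C")
    if not 30 <= rh_percent <= 50:
        warnings.append("relative humidity outside 30-50% RH")
    return warnings

reading = {"temp_c": 19.0, "rh_percent": 45.0}
print("dew point: %.1f C" % dew_point_c(**reading))
print("warnings:", check_storage(**reading) or "none")
```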
Pests, such as insects and vermin, eat and destroy paper and the adhesive that secures book bindings. Food and drink in libraries, archives, and museums can increase the attraction of pests. An Integrated Pest Management system is one way to control pests in libraries. Particulate and gaseous pollutants, such as soot, ozone, sulfur dioxide, and oxides of nitrogen, can cause dust, soiling, and irreversible molecular damage to materials. Pollutants are exceedingly small and not easily detectable or removable. A special filtration system in the building's HVAC is a helpful defense. Exposure to light also has a significant effect on materials. It is not only the light visible to humans that can cause damage, but also ultraviolet light and infrared radiation. Measured in lux (lumens per square metre), the generally accepted level of illumination for sensitive materials is limited to 50 lux per day. Materials receiving more lux than recommended can be placed in dark storage periodically to prolong the original appearance of the object. Recent concerns about the impact of climate change on the management of cultural heritage objects as well as the historic environment have prompted research efforts to investigate alternative climate control methods and strategies, including alternative climate control systems to replace or supplement traditional high-energy-consuming HVAC systems as well as the introduction of passive preservation techniques. Rather than maintaining a flat-line, consistent 24/7 condition for a collection's environment, fluctuation can occur within acceptable limits to create a preservation environment while also considering energy efficiency and taking advantage of the outside environment. Bound materials are sensitive to rapid temperature or humidity cycling due to differential expansion of the binding and pages, which may cause the binding to crack and/or the pages to warp. Changes in temperature and humidity should be made slowly so as to minimize the difference in expansion rates. However, an accelerated aging study on the effects of fluctuating temperature and humidity on paper color and strength showed no evidence that cycling from one temperature to another or from one RH to another caused a different mechanism of decay. The preferred method for storing manuscripts, archival records, and other paper documents is to place them in acid-free paper folders, which are then placed in acid-free or low-lignin boxes for further protection. Similarly, books that are fragile, valuable, oddly shaped, or in need of protection can be stored in archival boxes and enclosures. Additionally, housing books can protect them from many of the contributing factors to book damage: pests, light, temperature changes, and water. Contamination can occur at the time of manufacture, especially with electronic materials. It must be stopped before it spreads, but it is usually irreversible. === Preservation criteria === Making a proper decision is an important step before starting preservation work. Decisions about preservation should be made by considering the significance and value of the materials. Significance is considered to have two major components: importance and quality. "Importance" relates to the collection's role as a record, and "quality" covers comprehensiveness, depth, uniqueness, authenticity and reputation of the collection. Moreover, analyzing the significance of materials can be used to uncover more about their meaning. Assessment of significance can also aid in documenting the provenance and context to argue the case for grant funding for the object and collection. 
Forms of significance can be historically, culturally, socially, or spiritually significant. In the preservation context, libraries and archives make decisions in different ways. In libraries, decision-making likely targets existing holding materials, whereas in archives, decisions for preservation are often made when they acquire materials. Therefore, different criteria might be needed on different occasions. In general, for archive criteria, the points include: the characteristics of a record (purpose, creator, etc.); the quality of the information in the record; the record in context (part of a series or not); potential use and possible limitations; and the cost against the benefits from its existence. For archival criteria, the following are evidence of significance: uniqueness, irreplaceability, high level of impact – over time or place, high level of influence, representation of a type, and comparative value (rarity, completeness, integrity relative to others of its kind). === Standards === Since the 1970s, the Northeast Document Conservation Center has stated that the study of understanding the needs of the archive/library is inherently important to their survival. To prolong the life of a collection, it is important that a systematic preservation plan is in place. The first step in planning a preservation program is to assess the institution's existing preservation needs. This process entails identifying the general and specific needs of the collection, establishing priorities, and gathering the resources to execute the plan. Because budget and time limitations require priorities to be set, standards have been established by the profession to determine what should be preserved in a collection. Considerations include existing condition, rarity, and evidentiary and market values. With non-paper formats, the availability of equipment to access the information will be a factor (for example, playback equipment for audio-visual materials, or microform readers). An institution should determine how many, if any, other institutions hold the material, and consider coordinating efforts with those that do. Institutions should establish an environment that prioritizes preservation and create an understanding among administration and staff. Additionally, the institution's commitment to preservation should be communicated to funders and stakeholders so that funds can be allocated towards preservation efforts. The first steps an institution should implement, according to the NEDCC, are to establish a policy that defines and charts the course of action and create a framework for carrying out goals and priorities. There are three methods for carrying out a preservation survey: general preservation assessment, collection condition surveys, and an item-by-item survey. General condition surveys can be part of a library inventory. Selection for treatment determines the survival of materials and should be done by a specialist, whether in relation to an established collection development policy or on an item by item basis. Once an object or collection has been chosen for preservation, the treatment must be determined that is most appropriate to the material and its collecting institution. If the information is most important, reformatting or creation of a surrogate is a likely option. If the artifact itself is of value, it will receive conservation treatment, ideally of a reversible nature. 
=== Research and testing === With old media deteriorating or showing their vulnerabilities and new media becoming available, research remains active in the field of conservation and preservation. Everything from how to preserve paper media to creating and maintaining electronic resources and gauging their digital permanence is being explored by students and professionals in archives and libraries. The two main issues that most institutions tend to face are the rapid disintegration of acidic paper and water damage (due to flooding, plumbing problems, etc.). Therefore, these areas of preservation, as well as new digital technologies, receive much of the research attention. The American Library Association has many scholarly journals that publish articles on preservation topics, such as College and Research Libraries, Information Technology and Libraries, and Library Resources and Technical Services. Scholarly periodicals in this field from other publishers include International Preservation News, Journal of the American Institute for Conservation, and Collection Management, among many others. == Education == Learning the proper methods of preservation is important, and most archivists are educated on the subject at academic institutions that specifically cover archives and preservation. In the United States most repositories require archivists to have a degree from an ALA-accredited library school. Similar institutions exist in countries outside the US. Since 2010, the Andrew W. Mellon Foundation has enhanced funding for library and archives conservation education in three major conservation programs. These programs are all part of the Association of North American Graduate Programs in the Conservation of Cultural Property (ANAGPIC). Another educational resource available to preservationists is the Northeast Document Conservation Center or NEDCC. The Preservation, Planning and Publications Committee of the Preservation and Reformatting Section (PARS) in the Association for Library Collections & Technical Services has created a Preservation Education Directory of ALA-accredited schools in the U.S. and Canada offering courses in preservation. The directory is updated approximately every three years. The 10th Edition was made available on the ALCTS web site in March 2015. Additional preservation education is available to librarians through various professional organizations, such as: American Institute for Conservation; American Library Association; Amigos Library Services Preservation Service; Association for Information and Image Management (AIIM); Association for Recorded Sound Collections (ARSC); Association of Moving Image Archivists (AMIA); Buffalo State College Art Conservation Department, Buffalo, New York; Campbell Center for Historic Preservation Studies, Mount Carroll, Illinois; Conservation Center for Art and Historic Artifacts in Philadelphia, Pennsylvania; George Eastman Museum School of Film & Video Preservation, Rochester, New York; International Federation of Film Archives (FIAF), Bologna, Italy; Kilgarlin Center for Preservation of the Cultural Record; Library Binding Institute; Lyrasis; New York University Institute of Fine Arts Conservation Center, New York, New York; North Bennet Street School, Boston, Massachusetts; Northeast Document Conservation Center (NEDCC); Queen's University Master of Art Conservation Program, Ontario, Canada; Rare Book School (RBS) at the University of Virginia; Society of American Archivists; University of Delaware/Winterthur Museum Art Conservation Program, Newark, Delaware; and The National Archives. == Preservation in non-academic facilities == === Public libraries === Limited, tax-driven funding can often interfere with the ability of public libraries to engage in extensive preservation activities. Materials, particularly books, are often much easier to replace than to repair when damaged or worn. Public libraries usually try to tailor their services to meet the needs and desires of their local communities, which could cause an emphasis on acquiring new materials over preserving old ones. Librarians working in public facilities frequently have to make complicated decisions about how to best serve their patrons. Commonly, public library systems work with each other and sometimes with more academic libraries through interlibrary loan programs. By sharing resources, they are able to expand upon what might be available to their own patrons and share the burdens of preservation across a greater array of systems. === Archival repositories and special collections === Archival facilities focus specifically on rare and fragile materials. With staff trained in appropriate techniques, archives are often available to many public and private library facilities as an alternative to destroying older materials. Items that are unique, such as photographs, or items that are out of print, can be preserved in archival facilities more easily than in many library settings. === Museums === Because so many museum holdings are unique, including print materials, art, and other objects, preservationists are often most active in this setting; however, since most holdings are usually much more fragile, or possibly corrupted, conservation may be more necessary than preservation. This is especially common in art museums. Museums typically hold to the same practices led by archival institutions. == History == === Antecedents === Preservation as a formal profession in libraries and archives dates from the twentieth century, but its philosophy and practice have roots in many earlier traditions. In many ancient societies, appeals to heavenly protectors were used to preserve books, scrolls and manuscripts from insects, fire and decay. To the ancient Egyptians, the scarab or dung beetle (see: Scarab (artifact)) was a protector of written products. In ancient Babylon, Nabu is the heavenly patron of books and protector of clay tablets. Nabu is the Babylonian god of wisdom and writing, and is the patron of scribes, librarians and archivists. In Arabic and other eastern societies, sometimes a traditional method to protect books and scrolls was a metaphysical appeal to "Kabi:Kaj", the "King of the Cockroaches". There are three saints in the Christian church that are closely associated with libraries as patrons: Saint Lawrence, Saint Jerome, and Catherine of Alexandria. In some Christian monasteries, prayers and curses, frequently called "book curses", were placed at the end of books to deter theft or to damn thieves. The ancient Chinese god Wei T'O is the patron god of libraries and books. Many examples of appeals to Wei T'O can be found in Chinese manuscripts dated five hundred or more years ago. Wei T'O is especially invoked for the protection of books and libraries against fire.
Since the modern books are suffering from acid decomposition (slow fires), Wei T'O is especially relevant to modern librarianship. A modern product to de-acidify paper is named in his honor. Sri Lankan symbols or images of the Sinhalese "Fire Demons" are hung in the corners of libraries and other buildings to appease the incendiary demons and to avert fire, lightning and cataclysm, according to Sinhalese mythology. Since fire and acid decomposition (also known as "slow fires") are a special problem for libraries because of the concentration of paper products, the "Fire Demons" are also included when used to assuage these destroyers of libraries and books. The Aztec and Mayan Indians of Latin America also had deities concerned with libraries. The major god, Quetzalcoatl, is credited with the discoveries of the arts, the calendar, and of writing. A single feather or plume at the beginning or at the end of a document or stone carving would indicate a dedication to the "Feathered Serpent". This symbol degenerated over time to a single fringed line. Human record-keeping arguably dates back to the cave painting boom of the Upper Paleolithic, some 32,000–40,000 years ago. More direct antecedents are the writing systems that developed in the 4th millennium BC. Written record keeping and information sharing practices, along with oral tradition, sustain and transmit information from one group to another. This level of preservation has been supplemented over the last century with the professional practice of preservation and conservation in the cultural heritage community. Oral tradition or oral culture, the transmission of information from one generation to the next without a writing system. Antiquarian practices, including scribal practice, burial practice, the libraries at Pergamum, Alexandria and other ancient archives. Medieval practices, including the scriptorium and relic collection Renaissance and the changing conception of artists and works of art Enlightenment and the Encyclopedists Romantic movement's imperative to preserve === Significant events === 1933: William Barrow introduces the field of conservation to paper deacidification when he publishes a paper on the acid paper problem. In later studies, Barrow tested paper from American books made between 1900 and 1949, and learned that after forty years the books had lost on average 96 percent of their original strength; after less than ten years, they had already lost 64 percent. Barrow determined that this rapid deterioration was not the direct result of using wood-pulp fibers, since rag papers of this period were also aging rapidly, but rather due to the residual sulfuric acid produced in both rag and wood pulp papers. Earlier papermaking methods left the final product only mildly alkaline or even neutral and such paper has maintained its strength for 300 to 800 years, despite sulfur dioxide and other air pollutants. The manufacturing methods used after 1870, however, employed sulfuric acid for sizing and bleaching the paper, which would eventually lead to yellowing, brittle paper. Barrow's 1933 article on the fragile state of wood pulp paper predicted the life expectancy, or "LE", of this paper was approximately 40–50 years. At that point the paper would begin to show signs of natural decay, and he concluded that research for a new media on which to write and print was needed. 
1966: The Flood of the River Arno in Florence, Italy, damaged or destroyed millions of rare books and led to the development of restoration laboratories and new methods in conservation. Instrumental in this process was conservationist Peter Waters, who led a group of volunteers, called "mud angels", in restoring thousands of books and papers. This event awakened many historians, librarians, and other professionals to the importance of having a preservation plan. Many consider this flood to be one of the worst disasters since the burning of the Library of Alexandria. It spurred a resurgence in the profession of preservation and conservation worldwide, including the addition of a Preservation Office at the Library of Congress. 1987: Terry Saunders releases the film Slow Fires: On the Preservation of the Human Record which examines paper embrittlement resulting from acid decay 1989: March 7 ["Commitment Day"] Major US print publishers convene at NYPL to endorse a community-wide commitment to utilizing ISO 9706 certified permanent durable paper in order to combat the acid paper epidemic. === Significant people === William Barrow (1904–1967) was an American chemist and paper conservator, and a pioneer of library and archives conservation. He introduced the field of conservation to paper deacidification through alkalization. Paul N. Banks (1934–2000) was Conservator and Head of the Conservation Department at the Newberry Library from 1964 to 1981, and published regularly on bookbinding, book and paper conservation, and problems related to conservation. He designed and implemented a curriculum for the Columbia University School of Library Service that dealt directly with preservation training. Pamela Darling, author and historian, was Preservation Specialist for the Association of Research Libraries. Her works include materials to aid libraries in establishing their own comprehensive preservation programs. Carolyn Harris worked as head of Columbia University Libraries' Preservation Division from 1981 until 1987, where she worked closely with Paul Banks. She published extensive research throughout her career, especially dealing with mass deacidification of wood-pulp paper. Carolyn Price Horton (1909–2001), American conservator-restorer of books at the American Philosophical Society and Yale University. Helped museums and libraries in Florence recover books damaged from the 1966 flood of the Arno and the 1972 flood of the Corning Museum of Glass Peter Waters, former Conservation Officer at the Library of Congress in Washington, DC, worked in the areas of disaster recovery and preparedness, and the salvaging of water-damaged paper goods. Nicholson Baker is a contemporary American novelist and author of Double Fold, a criticism of libraries' destruction of paper-based media. Patricia Battin, as the first president of the Commission on Preservation and Access, worked to organize a national campaign both for the use of alkaline paper in publishing companies and for a national program of preservation microfilming. John F. Dean, Preservation and Conservation Librarian at Cornell University, has made contributions towards improving preservation efforts in developing countries. Specifically, Dean has created online tutorials for library conservation and preservation in Southeast Asia and Iraq and the Middle East. 
The Paul Banks and Carolyn Harris Preservation Award for outstanding preservation specialists in library and archival science, is given annually by the Association for Library Collections & Technical Services, a subdivision of the American Library Association. It is awarded in recognition of professional preservation specialists who have made significant contributions to the field. == Legal and ethical issues == Reformatting, or in any other way copying an item's contents, raises obvious copyright issues. In many cases, a library is allowed to make a limited number of copies of an item for preservation purposes. In the United States, certain exceptions have been made for libraries and archives. Ethics will play an important role in many aspects of the conservator's activities. When choosing which objects are in need of treatment, the conservator should do what is best for the object in question and not yield to pressure or opinion from outside sources. Conservators should refer to the AIC Code of Ethics and Guidelines for Practice, which states that the conservation professional must "strive to attain the highest possible standards in all aspects of conservation." One instance in which these decisions may get tricky is when the conservator is dealing with cultural objects. The AIC Code of Ethics and Guidelines for Practice has addressed such concerns, stating "All actions of the conservation professional must be governed by an informed respect for cultural property, its unique character and significance and the people or person who created it." This can be applied in both the care and long-term storage of objects in archives and institutions. It is important that preservation specialists be respectful of cultural property and the societies that created it, and it is also important for them to be aware of international and national laws pertaining to stolen items. In recent years there has been a rise in nations seeking out artifacts that have been stolen and are now in museums. In many cases museums are working with the nations to find a compromise to balance the need for reliable supervision as well as access for both the public and researchers. Conservators are not just bound by ethics to treat cultural and religious objects with respect, but also in some cases by law. For example, in the United States, conservators must comply with the Native American Graves Protection and Repatriation Act (NAGPRA). The First Archivists Circle, a group of Native American archivists, has also created Protocols for Native American Archival Materials. The non-binding guidelines are suggestions for libraries and archives with Native American archival materials. The care of cultural and sacred objects often affects the physical storage or the object. For example, sacred objects of the native peoples of the Western United States are supposed to be stored with sage to ensure their spiritual well-being. The idea of storing an object with plant material is inherently problematic to an archival collection because of the possibility of insect infestation. When conservators have faced this problem, they have addressed it by using freeze-dried sage, thereby meeting both conservation and cultural needs. Some individuals in the archival community have explored the possible moral responsibility to preserve all cultural phenomena, in regards to the concept of monumental preservation. 
Other advocates argue that such an undertaking is something that the indigenous or native communities that produce such cultural objects are better suited to perform. Currently, however, many indigenous communities are not financially able to support their own archives and museums. Still, indigenous archives are on the rise in the United States. == Criticism and reception == There is a longstanding tension between preservation of and access to library materials, particularly in the area of special collections. Handling materials promotes their progression to an unusable state, especially if they are handled carelessly. On the other hand, materials must be used in order to gain any benefit from them. In a collection with valuable materials, this conflict is often resolved by a number of measures which can include heightened security, requiring the use of gloves for photographs, restricting the materials researchers may bring with them into a reading room, and restricting use of materials to patrons who are not able to satisfy their research needs with less valuable copies of an item. These restrictions can be considered hindrances to researchers who feel that these measures are in place solely to keep materials out of the hands of the public. There is also controversy surrounding preservation methods. A major controversy at the end of the twentieth century centered on the practice of discarding items that had been microfilmed. This was the subject of novelist Nicholson Baker's book Double Fold, which chronicled his efforts to save many old runs of American newspapers (formerly owned by the British Library) from being sold to dealers or pulped. A similar concern persists over the retention of original documents reformatted by any means, analog or digital. Concerns include scholarly needs and legal requirements for authentic or original records as well as questions about the longevity, quality, and completeness of reformatted materials. Retention of originals as a source or fail-safe copy is now a fairly common practice. Another controversy revolving around different preservation methods is that of digitization of original material to maintain the intellectual content of the material while ignoring the physical nature of the book. Further, the Modern Language Association's Committee on the Future of the Print Record structured its "Statement on the Significance of Primary Records" on the inherent theoretical ideology that there is a need to preserve as many copies of a printed edition as is possible as texts and their textual settings are, quite simply, not separable, just as the artifactual characteristics of texts are as relevant and varied as the texts themselves (in the report mentioned herewith, G. Thomas Tanselle suggests that presently existing book stacks need not be abandoned with emerging technologies; rather they serve as vitally important original (primary) sources for future study). Many digitized items, such as back issues of periodicals, are provided by publishers and databases on a subscription basis. If these companies were to cease providing access to their digital information, facilities that elected to discard paper copies of these periodicals could face significant difficulties in providing access to these items. Discussion as to the best ways to utilize digital technologies is therefore ongoing, and the practice continues to evolve. 
The issues surrounding digital objects and their care in libraries and archives continue to expand as more and more of contemporary culture is created, stored, and used digitally. These born-digital materials raise their own new kinds of preservation challenges, and in some cases they may even require new kinds of tools and techniques. === The library as a sacred institution === In her book Sacred Stacks: The Higher Purpose of Libraries and Librarianship, Nancy Kalikow Maxwell discusses how libraries are capable of performing some of the same functions as religion. Many librarians feel that their work is done for some higher purpose. The same can be said for preservation librarians. One instance of the library's role as sacred is to provide a sense of immortality: with the ever-changing world outside, the library will remain stable and dependable. Preservation is a great help in this regard. Through digitization and reformatting, preservation librarians are able to retain material while at the same time adapting to new methods. In this way, libraries can adapt to changes in user needs without changing the quality of the material itself. Through preservation efforts, patrons can rest assured that although materials are constantly deteriorating over time, the library itself will remain a stable, reliable environment for their information needs. Another sacred ability of the library is to provide information and a connection to the past. By working to slow down the processes of deterioration and decay of library materials, preservation practices help keep this link to the past alive. == See also == == Footnotes == == Publications == Cloonan, M. V. (Ed.). (2015). Preserving our heritage: perspectives from antiquity to the digital age. Neal-Schuman. == External links == Media related to Preservation (library and archival science) at Wikimedia Commons; American Institute for Conservation (AIC); First Archivists Circle; Protocols for Native American Archival Materials
Wikipedia/Preservation_(library_and_archival_science)
Degenerate matter occurs when the Pauli exclusion principle significantly alters a state of matter at low temperature. The term is used in astrophysics to refer to dense stellar objects such as white dwarfs and neutron stars, where thermal pressure alone is not enough to prevent gravitational collapse. The term also applies to metals in the Fermi gas approximation. Degenerate matter is usually modelled as an ideal Fermi gas, an ensemble of non-interacting fermions. In a quantum mechanical description, particles limited to a finite volume may take only a discrete set of energies, called quantum states. The Pauli exclusion principle prevents identical fermions from occupying the same quantum state. At lowest total energy (when the thermal energy of the particles is negligible), all the lowest energy quantum states are filled. This state is referred to as full degeneracy. This degeneracy pressure remains non-zero even at absolute zero temperature. Adding particles or reducing the volume forces the particles into higher-energy quantum states. In this situation, a compression force is required, and is made manifest as a resisting pressure. The key feature is that this degeneracy pressure does not depend on the temperature but only on the density of the fermions. Degeneracy pressure keeps dense stars in equilibrium, independent of the thermal structure of the star. A degenerate mass whose fermions have velocities close to the speed of light (particle kinetic energy larger than its rest mass energy) is called relativistic degenerate matter. The concept of degenerate stars, stellar objects composed of degenerate matter, was originally developed in a joint effort between Arthur Eddington, Ralph Fowler and Arthur Milne. == Concept == Quantum mechanics uses the word 'degenerate' in two ways: degenerate energy levels and as the low temperature ground state limit for states of matter.: 437  The electron degeneracy pressure occurs in ground-state systems that are non-degenerate in energy levels. The term "degeneracy" derives from work on the specific heat of gases that pre-dates the use of the term in quantum mechanics. Degenerate matter exhibits quantum mechanical properties when a fermion system temperature approaches absolute zero.: 30  These properties result from a combination of the Pauli exclusion principle and quantum confinement. The Pauli principle allows only one fermion in each quantum state and the confinement ensures that the energy of these states increases as they are filled. The lowest states fill up and fermions are forced to occupy high energy states even at low temperature. While the Pauli principle and the Fermi-Dirac distribution apply to all matter, the interesting cases for degenerate matter involve systems of many fermions. These cases can be understood with the help of the Fermi gas model. Examples include electrons in metals and in white dwarf stars and neutrons in neutron stars.: 436  The electrons are confined by Coulomb attraction to positive ion cores; the neutrons are confined by gravitational attraction. The fermions, forced into higher levels by the Pauli principle, exert pressure preventing further compression. The allocation or distribution of fermions into quantum states ranked by energy is called the Fermi-Dirac distribution.: 30  Degenerate matter exhibits the results of the Fermi-Dirac distribution.
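The filling of states described above is governed by the Fermi-Dirac distribution. As a small illustration (a sketch, not code from any source discussed here), the following computes the mean occupation of a single-particle state and shows how it sharpens toward a step function at the Fermi level as the temperature approaches zero; the chemical potential, energies, and temperatures used are arbitrary.

```python
# Sketch of the Fermi-Dirac occupation described above. Energies are in eV and
# measured against the chemical potential (Fermi level); the values chosen for
# mu, the energies, and the temperatures are illustrative assumptions.
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev: float, mu_ev: float, temperature_k: float) -> float:
    """Mean occupation of a single-particle state at the given energy."""
    if temperature_k == 0.0:
        return 1.0 if energy_ev < mu_ev else 0.0  # step function at T = 0
    x = (energy_ev - mu_ev) / (K_B_EV * temperature_k)
    return 1.0 / (math.exp(x) + 1.0)

# As T falls, states below the Fermi level fill and states above it empty:
for t in (20000.0, 2000.0, 0.0):
    occupations = [fermi_dirac(e, mu_ev=5.0, temperature_k=t) for e in (4.5, 5.0, 5.5)]
    print(t, [f"{o:.3f}" for o in occupations])
```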
== Degeneracy pressure == Unlike a classical ideal gas, whose pressure is proportional to its temperature P = k B N T V , {\displaystyle P=k_{\rm {B}}{\frac {NT}{V}},} where P is pressure, kB is the Boltzmann constant, N is the number of particles (typically atoms or molecules), T is temperature, and V is the volume, the pressure exerted by degenerate matter depends only weakly on its temperature. In particular, the pressure remains nonzero even at absolute zero temperature. At relatively low densities, the pressure of a fully degenerate gas can be derived by treating the system as an ideal Fermi gas, giving P = ( 3 π 2 ) 2 / 3 ℏ 2 5 m ( N V ) 5 / 3 , {\displaystyle P={\frac {(3\pi ^{2})^{2/3}\hbar ^{2}}{5m}}\left({\frac {N}{V}}\right)^{5/3},} where m is the mass of the individual particles making up the gas. At very high densities, where most of the particles are forced into quantum states with relativistic energies, the pressure is given by P = K ( N V ) 4 / 3 , {\displaystyle P=K\left({\frac {N}{V}}\right)^{4/3},} where K is another proportionality constant depending on the properties of the particles making up the gas. All matter experiences both normal thermal pressure and degeneracy pressure, but in commonly encountered gases, thermal pressure dominates so much that degeneracy pressure can be ignored. Likewise, degenerate matter still has normal thermal pressure; the degeneracy pressure dominates to the point that temperature has a negligible effect on the total pressure. In a Fermi gas, the difference between the total pressure and the thermal pressure is the degeneracy pressure; as the temperature falls, the degeneracy pressure comes to contribute most of the total pressure. While degeneracy pressure usually dominates at extremely high densities, it is the ratio between degeneracy pressure and thermal pressure which determines degeneracy. Given a sufficiently drastic increase in temperature (such as during a red giant star's helium flash), matter can become non-degenerate without reducing its density. Degeneracy pressure contributes to the pressure of conventional solids, but these are not usually considered to be degenerate matter because a significant contribution to their pressure is provided by electrical repulsion of atomic nuclei and the screening of nuclei from each other by electrons. The free electron model of metals derives their physical properties by considering the conduction electrons alone as a degenerate gas, while the majority of the electrons are regarded as occupying bound quantum states. This solid state contrasts with degenerate matter that forms the body of a white dwarf, where most of the electrons would be treated as occupying free particle momentum states. Exotic examples of degenerate matter include neutron degenerate matter, strange matter, metallic hydrogen and white dwarf matter. == Degenerate gases == Degenerate gases are gases composed of fermions such as electrons, protons, and neutrons rather than molecules of ordinary matter. The electron gas in ordinary metals and in the interior of white dwarfs are two examples. Following the Pauli exclusion principle, there can be only one fermion occupying each quantum state. In a degenerate gas, all quantum states are filled up to the Fermi energy.
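As a numerical illustration of the two low-density expressions above (a sketch; the electron number density chosen below is an assumed, order-of-magnitude value and is not taken from the text), one can compare the classical thermal pressure with the non-relativistic degeneracy pressure of an electron gas:

```python
# Sketch comparing the classical ideal-gas pressure with the non-relativistic
# degeneracy pressure quoted above, for an electron gas. The number density
# used below is an illustrative value, not a figure taken from the text.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg

def thermal_pressure(n_per_m3: float, temperature_k: float) -> float:
    """Classical ideal-gas pressure P = k_B * (N/V) * T."""
    return K_B * n_per_m3 * temperature_k

def degeneracy_pressure(n_per_m3: float, mass_kg: float = M_E) -> float:
    """Non-relativistic degeneracy pressure P = (3*pi^2)^(2/3) * hbar^2 / (5m) * (N/V)^(5/3)."""
    return (3.0 * math.pi**2) ** (2.0 / 3.0) * HBAR**2 / (5.0 * mass_kg) * n_per_m3 ** (5.0 / 3.0)

# Illustrative electron density, far higher than in an ordinary gas (assumed value):
n = 1e36  # electrons per cubic metre
print(f"thermal pressure at 10^7 K: {thermal_pressure(n, 1e7):.3e} Pa")
print(f"degeneracy pressure:        {degeneracy_pressure(n):.3e} Pa")
```

At this density the degeneracy pressure exceeds the thermal pressure even at ten million kelvin, illustrating why temperature has little effect on the total pressure of such a gas.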
Most stars are supported against their own gravitation by normal thermal gas pressure, while in white dwarf stars the supporting force comes from the degeneracy pressure of the electron gas in their interior. In neutron stars, the degenerate particles are neutrons. A fermion gas in which all quantum states below a given energy level are filled is called a fully degenerate fermion gas. The difference between this energy level and the lowest energy level is known as the Fermi energy. === Electron degeneracy === In an ordinary fermion gas in which thermal effects dominate, most of the available electron energy levels are unfilled and the electrons are free to move to these states. As particle density is increased, electrons progressively fill the lower energy states and additional electrons are forced to occupy states of higher energy even at low temperatures. Degenerate gases strongly resist further compression because the electrons cannot move to already filled lower energy levels due to the Pauli exclusion principle. Since electrons cannot give up energy by moving to lower energy states, no thermal energy can be extracted. The momentum of the fermions in the fermion gas nevertheless generates pressure, termed "degeneracy pressure". Under high densities, matter becomes a degenerate gas when all electrons are stripped from their parent atoms. The core of a star, once hydrogen burning nuclear fusion reactions stops, becomes a collection of positively charged ions, largely helium and carbon nuclei, floating in a sea of electrons, which have been stripped from the nuclei. Degenerate gas is an almost perfect conductor of heat and does not obey ordinary gas laws. White dwarfs are luminous not because they are generating energy but rather because they have trapped a large amount of heat which is gradually radiated away. Normal gas exerts higher pressure when it is heated and expands, but the pressure in a degenerate gas does not depend on the temperature. When gas becomes super-compressed, particles position right up against each other to produce degenerate gas that behaves more like a solid. In degenerate gases the kinetic energies of electrons are quite high and the rate of collision between electrons and other particles is quite low, therefore degenerate electrons can travel great distances at velocities that approach the speed of light. Instead of temperature, the pressure in a degenerate gas depends only on the speed of the degenerate particles; however, adding heat does not increase the speed of most of the electrons, because they are stuck in fully occupied quantum states. Pressure is increased only by the mass of the particles, which increases the gravitational force pulling the particles closer together. Therefore, the phenomenon is the opposite of that normally found in matter where if the mass of the matter is increased, the object becomes bigger. In degenerate gas, when the mass is increased, the particles become spaced closer together due to gravity (and the pressure is increased), so the object becomes smaller. Degenerate gas can be compressed to very high densities, typical values being in the range of 10,000 kilograms per cubic centimeter. There is an upper limit to the mass of an electron-degenerate object, the Chandrasekhar limit, beyond which electron degeneracy pressure cannot support the object against collapse. The limit is approximately 1.44 solar masses for objects with typical compositions expected for white dwarf stars (carbon and oxygen with two baryons per electron). 
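The composition dependence of the limit mentioned above can be illustrated with a short sketch. The scaling used here, with the limit proportional to the inverse square of the number of baryons per electron and anchored to the approximately 1.44 solar-mass figure quoted in the text, is a commonly used approximation rather than a result stated in this passage.

```python
# Sketch of how the electron-degeneracy mass limit quoted above varies with
# composition. The (2/mu_e)^2 scaling is a commonly quoted approximation and
# an assumption of this sketch, anchored to the ~1.44 solar-mass figure above.
def chandrasekhar_limit_msun(mu_e: float) -> float:
    """Approximate mass limit in solar masses.

    mu_e: mean number of baryons per electron; mu_e = 2 corresponds to the
    carbon/oxygen composition mentioned above.
    """
    return 1.44 * (2.0 / mu_e) ** 2

for mu_e in (2.0, 2.15):  # C/O composition vs. an iron-rich composition (illustrative)
    print(f"mu_e = {mu_e}: ~{chandrasekhar_limit_msun(mu_e):.2f} solar masses")
```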
This mass cut-off is appropriate only for a star supported by ideal electron degeneracy pressure under Newtonian gravity; in general relativity and with realistic Coulomb corrections, the corresponding mass limit is around 1.38 solar masses. The limit may also change with the chemical composition of the object, as it affects the ratio of mass to number of electrons present. The object's rotation, which counteracts the gravitational force, also changes the limit for any particular object. Celestial objects below this limit are white dwarf stars, formed by the gradual shrinking of the cores of stars that run out of fuel. During this shrinking, an electron-degenerate gas forms in the core, providing sufficient degeneracy pressure as it is compressed to resist further collapse. Above this mass limit, a neutron star (primarily supported by neutron degeneracy pressure) or a black hole may be formed instead. === Neutron degeneracy === Neutron degeneracy is analogous to electron degeneracy and exists in neutron stars, which are partially supported by the pressure from a degenerate neutron gas. Neutron stars are formed either directly from the supernova of stars with masses between 10 and 25 M☉ (solar masses), or by white dwarfs acquiring a mass in excess of the Chandrasekhar limit of 1.44 M☉, usually either as a result of a merger or by feeding off of a close binary partner. Above the Chandrasekhar limit, the gravitational pressure at the core exceeds the electron degeneracy pressure, and electrons begin to combine with protons to produce neutrons (via inverse beta decay, also termed electron capture). The result is an extremely compact star composed of "nuclear matter", which is predominantly a degenerate neutron gas with a small admixture of degenerate proton and electron gases. Neutrons in a degenerate neutron gas are spaced much more closely than electrons in an electron-degenerate gas because the more massive neutron has a much shorter wavelength at a given energy. This phenomenon is compounded by the fact that the pressures within neutron stars are much higher than those in white dwarfs. The pressure increase is caused by the fact that the compactness of a neutron star causes gravitational forces to be much higher than in a less compact body with similar mass. The result is a star with a diameter on the order of a thousandth that of a white dwarf. The properties of neutron matter set an upper limit to the mass of a neutron star, the Tolman–Oppenheimer–Volkoff limit, which is analogous to the Chandrasekhar limit for white dwarf stars. === Proton degeneracy === Sufficiently dense matter containing protons experiences proton degeneracy pressure, in a manner similar to the electron degeneracy pressure in electron-degenerate matter: protons confined to a sufficiently small volume have a large uncertainty in their momentum due to the Heisenberg uncertainty principle. However, because protons are much more massive than electrons, the same momentum represents a much smaller velocity for protons than for electrons. As a result, in matter with approximately equal numbers of protons and electrons, proton degeneracy pressure is much smaller than electron degeneracy pressure, and proton degeneracy is usually modelled as a correction to the equations of state of electron-degenerate matter. 
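The statement above that a neutron has a much shorter wavelength than an electron at a given energy can be checked with a short sketch using the non-relativistic de Broglie relation; the 1 keV kinetic energy below is an arbitrary illustrative choice, small enough that the non-relativistic formula is reasonable for both particles.

```python
# Sketch of the wavelength comparison made above: at the same kinetic energy,
# the de Broglie wavelength lambda = h / sqrt(2 m E) is much shorter for a
# neutron than for an electron. Non-relativistic formula; energy is illustrative.
import math

H = 6.62607015e-34        # Planck constant, J*s
EV = 1.602176634e-19      # joules per electronvolt
M_E = 9.1093837015e-31    # electron mass, kg
M_N = 1.67492749804e-27   # neutron mass, kg

def de_broglie_m(mass_kg: float, kinetic_energy_ev: float) -> float:
    """Non-relativistic de Broglie wavelength in metres."""
    return H / math.sqrt(2.0 * mass_kg * kinetic_energy_ev * EV)

e_kin = 1000.0  # 1 keV, illustrative
lam_e = de_broglie_m(M_E, e_kin)
lam_n = de_broglie_m(M_N, e_kin)
print(f"electron: {lam_e:.3e} m, neutron: {lam_n:.3e} m, ratio ~ {lam_e / lam_n:.0f}")
```

The ratio is roughly the square root of the neutron-to-electron mass ratio, about 43, consistent with the much closer spacing of neutrons described above.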
=== Quark degeneracy === At densities greater than those supported by neutron degeneracy, quark-degenerate matter may occur in the cores of neutron stars, depending on the equations of state of neutron-degenerate matter. There is no observational evidence to support this conjecture and theoretical models that predict de-confined quark matter are only valid at masses higher than any observed neutron star.: 435  == History == In 1914 Walther Nernst described the reduction of the specific heat of gases at very low temperature as "degeneration"; he attributed this to quantum effects. In subsequent work in various papers on quantum thermodynamics by Albert Einstein, by Max Planck, and by Erwin Schrödinger, the effect at low temperatures came to be called "gas degeneracy". A fully degenerate gas has no volume dependence on pressure when temperature approaches absolute zero. Early in 1927 Enrico Fermi and separately Llewellyn Thomas developed a semi-classical model for electrons in a metal. The model treated the electrons as a gas. Later in 1927, Arnold Sommerfeld applied the Pauli principle via Fermi-Dirac statistics to this electron gas model, computing the specific heat of metals; the result became Fermi gas model for metals. Sommerfeld called the low temperature region with quantum effects a "wholly degenerate gas". The concept of degenerate stars, stellar objects composed of degenerate matter, was originally developed in a joint effort between Arthur Eddington, Ralph Fowler and Arthur Milne. Eddington had suggested that the atoms in Sirius B were almost completely ionised and closely packed. Fowler described white dwarfs as composed of a gas of particles that became degenerate at low temperature; he also pointed out that ordinary atoms are broadly similar in regards to the filling of energy levels by fermions. In 1926, Milne proposed that degenerate matter is found in core of stars, not only in compact stars. In 1927 Ralph H. Fowler applied Fermi's model to the puzzle of the stability of white dwarf stars. This approach was extended to relativistic models by later studies and with the work of Subrahmanyan Chandrasekhar became the accepted model for star stability. == See also == Bose–Einstein condensate – Degenerate bosonic gas Fermi liquid theory – Theoretical model in physics Metallic hydrogen – High-pressure phase of hydrogen == Citations == == References == Cohen-Tanoudji, Claude (2011). Advances in Atomic Physics. World Scientific. p. 791. ISBN 978-981-277-496-5. Archived from the original on 2012-05-11. Retrieved 2012-01-31. == External links == Lecture 17: Stellar Evolution. Discusses degenerate gases in models of stars
Wikipedia/Neutron-degenerate_matter
In nuclear physics, the semi-empirical mass formula (SEMF; sometimes also called the Weizsäcker formula, Bethe–Weizsäcker formula, or Bethe–Weizsäcker mass formula to distinguish it from the Bethe–Weizsäcker process) is used to approximate the mass of an atomic nucleus from its number of protons and neutrons. As the name suggests, it is based partly on theory and partly on empirical measurements. The formula represents the liquid-drop model proposed by George Gamow, which can account for most of the terms in the formula and gives rough estimates for the values of the coefficients. It was first formulated in 1935 by German physicist Carl Friedrich von Weizsäcker, and although refinements have been made to the coefficients over the years, the structure of the formula remains the same today. The formula gives a good approximation for atomic masses and thereby other effects. However, it fails to explain the existence of lines of greater binding energy at certain numbers of protons and neutrons. These numbers, known as magic numbers, are the foundation of the nuclear shell model. == Liquid-drop model == The liquid-drop model was first proposed by George Gamow and further developed by Niels Bohr, John Archibald Wheeler and Lise Meitner. It treats the nucleus as a drop of incompressible fluid of very high density, held together by the nuclear force (a residual effect of the strong force): there is a similarity to the structure of a spherical liquid drop. While a crude model, the liquid-drop model accounts for the spherical shape of most nuclei and makes a rough prediction of binding energy. The corresponding mass formula is defined purely in terms of the numbers of protons and neutrons it contains. The original Weizsäcker formula defines five terms: Volume energy, when an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. So, this nuclear energy is proportional to the volume. Surface energy corrects for the previous assumption made that every nucleon interacts with the same number of other nucleons. This term is negative and proportional to the surface area, and is therefore roughly equivalent to liquid surface tension. Coulomb energy, the potential energy from each pair of protons. As this is a repelling force, the binding energy is reduced. Asymmetry energy (also called Pauli energy), which accounts for the Pauli exclusion principle. Unequal numbers of neutrons and protons imply filling higher energy levels for one type of particle, while leaving lower energy levels vacant for the other type. Pairing energy, which accounts for the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number due to spin coupling. == Formula == The mass of an atomic nucleus, for N {\displaystyle N} neutrons, Z {\displaystyle Z} protons, and therefore A = N + Z {\displaystyle A=N+Z} nucleons, is given by m = N m n + Z m p − E B ( N , Z ) c 2 , {\displaystyle m=Nm_{\text{n}}+Zm_{\text{p}}-{\frac {E_{\text{B}}(N,Z)}{c^{2}}},} where m n {\displaystyle m_{\text{n}}} and m p {\displaystyle m_{\text{p}}} are the rest mass of a neutron and a proton respectively, and E B {\displaystyle E_{\text{B}}} is the binding energy of the nucleus. The semi-empirical mass formula states the binding energy is E B = a V A − a S A 2 / 3 − a C Z ( Z − 1 ) A 1 / 3 − a A ( N − Z ) 2 A ± δ ( N , Z ) . 
{\displaystyle E_{\text{B}}=a_{\text{V}}A-a_{\text{S}}A^{2/3}-a_{\text{C}}{\frac {Z(Z-1)}{A^{1/3}}}-a_{\text{A}}{\frac {(N-Z)^{2}}{A}}\pm \delta (N,Z).} The δ ( N , Z ) {\displaystyle \delta (N,Z)} term is either zero or ± δ 0 {\displaystyle \pm \delta _{0}} , depending on the parity of N {\displaystyle N} and Z {\displaystyle Z} , where δ 0 = a P A k P {\displaystyle \delta _{0}={a_{\text{P}}}{A^{k_{\text{P}}}}} for some exponent k P {\displaystyle k_{\text{P}}} . Note that as A = N + Z {\displaystyle A=N+Z} , the numerator of the a A {\displaystyle a_{\text{A}}} term can be rewritten as ( A − 2 Z ) 2 {\displaystyle (A-2Z)^{2}} . Each of the terms in this formula has a theoretical basis. The coefficients a V {\displaystyle a_{\text{V}}} , a S {\displaystyle a_{\text{S}}} , a C {\displaystyle a_{\text{C}}} , a A {\displaystyle a_{\text{A}}} , and a P {\displaystyle a_{\text{P}}} are determined empirically; while they may be derived from experiment, they are typically derived from least-squares fit to contemporary data. While typically expressed by its basic five terms, further terms exist to explain additional phenomena. Akin to how changing a polynomial fit will change its coefficients, the interplay between these coefficients as new phenomena are introduced is complex; some terms influence each other, whereas the a P {\displaystyle a_{\text{P}}} term is largely independent. === Volume term === The term a V A {\displaystyle a_{\text{V}}A} is known as the volume term. The volume of the nucleus is proportional to A, so this term is proportional to the volume, hence the name. The basis for this term is the strong nuclear force. The strong force affects both protons and neutrons, and as expected, this term is independent of Z. Because the number of pairs that can be taken from A particles is A ( A − 1 ) / 2 {\displaystyle A(A-1)/2} , one might expect a term proportional to A 2 {\displaystyle A^{2}} . However, the strong force has a very limited range, and a given nucleon may only interact strongly with its nearest neighbors and next nearest neighbors. Therefore, the number of pairs of particles that actually interact is roughly proportional to A, giving the volume term its form. The coefficient a V {\displaystyle a_{\text{V}}} is smaller than the binding energy possessed by the nucleons with respect to their neighbors ( E b {\displaystyle E_{\text{b}}} ), which is of order of 40 MeV. This is because the larger the number of nucleons in the nucleus, the larger their kinetic energy is, due to the Pauli exclusion principle. If one treats the nucleus as a Fermi ball of A {\displaystyle A} nucleons, with equal numbers of protons and neutrons, then the total kinetic energy is 3 5 A ε F {\displaystyle {\tfrac {3}{5}}A\varepsilon _{\text{F}}} , with ε F {\displaystyle \varepsilon _{\text{F}}} the Fermi energy, which is estimated as 38 MeV. Thus the expected value of a V {\displaystyle a_{\text{V}}} in this model is E b − 3 5 ε F ∼ 17 M e V , {\displaystyle E_{\text{b}}-{\tfrac {3}{5}}\varepsilon _{\text{F}}\sim 17~\mathrm {MeV} ,} not far from the measured value. === Surface term === The term a S A 2 / 3 {\displaystyle a_{\text{S}}A^{2/3}} is known as the surface term. This term, also based on the strong force, is a correction to the volume term. The volume term suggests that each nucleon interacts with a constant number of nucleons, independent of A. 
While this is very nearly true for nucleons deep within the nucleus, those nucleons on the surface of the nucleus have fewer nearest neighbors, justifying this correction. This can also be thought of as a surface-tension term, and indeed a similar mechanism creates surface tension in liquids. If the volume of the nucleus is proportional to A, then the radius should be proportional to A 1 / 3 {\displaystyle A^{1/3}} and the surface area to A 2 / 3 {\displaystyle A^{2/3}} . This explains why the surface term is proportional to A 2 / 3 {\displaystyle A^{2/3}} . It can also be deduced that a S {\displaystyle a_{\text{S}}} should have a similar order of magnitude to a V {\displaystyle a_{\text{V}}} . === Coulomb term === The term a C Z ( Z − 1 ) A 1 / 3 {\displaystyle a_{\text{C}}{\frac {Z(Z-1)}{A^{1/3}}}} or a C Z 2 A 1 / 3 {\displaystyle a_{\text{C}}{\frac {Z^{2}}{A^{1/3}}}} is known as the Coulomb or electrostatic term. The basis for this term is the electrostatic repulsion between protons. To a very rough approximation, the nucleus can be considered a sphere of uniform charge density. The potential energy of such a charge distribution can be shown to be E = 3 5 1 4 π ε 0 Q 2 R , {\displaystyle E={\frac {3}{5}}{\frac {1}{4\pi \varepsilon _{0}}}{\frac {Q^{2}}{R}},} where Q is the total charge, and R is the radius of the sphere. The value of a C {\displaystyle a_{\text{C}}} can be approximately calculated by using this equation to calculate the potential energy, using an empirical nuclear radius of R ≈ r 0 A 1 3 {\displaystyle R\approx r_{0}A^{\frac {1}{3}}} and Q = Ze. However, because electrostatic repulsion will only exist for more than one proton, Z 2 {\displaystyle Z^{2}} becomes Z ( Z − 1 ) {\displaystyle Z(Z-1)} : E = 3 5 1 4 π ε 0 Q 2 R = 3 5 1 4 π ε 0 ( Z e ) 2 r 0 A 1 / 3 = 3 e 2 Z 2 20 π ε 0 r 0 A 1 / 3 ≈ 3 e 2 Z ( Z − 1 ) 20 π ε 0 r 0 A 1 / 3 = a C Z ( Z − 1 ) A 1 / 3 , {\displaystyle E={\frac {3}{5}}{\frac {1}{4\pi \varepsilon _{0}}}{\frac {Q^{2}}{R}}={\frac {3}{5}}{\frac {1}{4\pi \varepsilon _{0}}}{\frac {(Ze)^{2}}{r_{0}A^{1/3}}}={\frac {3e^{2}Z^{2}}{20\pi \varepsilon _{0}r_{0}A^{1/3}}}\approx {\frac {3e^{2}Z(Z-1)}{20\pi \varepsilon _{0}r_{0}A^{1/3}}}=a_{\text{C}}{\frac {Z(Z-1)}{A^{1/3}}},} where now the electrostatic Coulomb constant a C {\displaystyle a_{\text{C}}} is a C = 3 e 2 20 π ε 0 r 0 . {\displaystyle a_{\text{C}}={\frac {3e^{2}}{20\pi \varepsilon _{0}r_{0}}}.} Using the fine-structure constant, we can rewrite the value of a C {\displaystyle a_{\text{C}}} as a C = 3 5 ℏ c α r 0 = 3 5 R P r 0 α m p c 2 , {\displaystyle a_{\text{C}}={\frac {3}{5}}{\frac {\hbar c\alpha }{r_{0}}}={\frac {3}{5}}{\frac {R_{\text{P}}}{r_{0}}}\alpha m_{\text{p}}c^{2},} where α {\displaystyle \alpha } is the fine-structure constant, and r 0 A 1 / 3 {\displaystyle r_{0}A^{1/3}} is the radius of a nucleus, giving r 0 {\displaystyle r_{0}} to be approximately 1.25 femtometers. R P {\displaystyle R_{\text{P}}} is the proton reduced Compton wavelength, and m p {\displaystyle m_{\text{p}}} is the proton mass. This gives a C {\displaystyle a_{\text{C}}} an approximate theoretical value of 0.691 MeV, not far from the measured value. === Asymmetry term === The term a A ( N − Z ) 2 A {\displaystyle a_{\text{A}}{\frac {(N-Z)^{2}}{A}}} is known as the asymmetry term (or Pauli term). The theoretical justification for this term is more complex. The Pauli exclusion principle states that no two identical fermions can occupy exactly the same quantum state in an atom. 
At a given energy level, there are only finitely many quantum states available for particles. What this means in the nucleus is that as more particles are "added", these particles must occupy higher energy levels, increasing the total energy of the nucleus (and decreasing the binding energy). Note that this effect is not based on any of the fundamental forces (gravitational, electromagnetic, etc.), only the Pauli exclusion principle. Protons and neutrons, being distinct types of particles, occupy different quantum states. One can think of two different "pools" of states – one for protons and one for neutrons. Now, for example, if there are significantly more neutrons than protons in a nucleus, some of the neutrons will be higher in energy than the available states in the proton pool. If we could move some particles from the neutron pool to the proton pool, in other words, change some neutrons into protons, we would significantly decrease the energy. The imbalance between the number of protons and neutrons causes the energy to be higher than it needs to be, for a given number of nucleons. This is the basis for the asymmetry term. The actual form of the asymmetry term can again be derived by modeling the nucleus as a Fermi ball of protons and neutrons. Its total kinetic energy is E k = 3 5 ( Z ε F,p + N ε F,n ) , {\displaystyle E_{\text{k}}={\frac {3}{5}}(Z\varepsilon _{\text{F,p}}+N\varepsilon _{\text{F,n}}),} where ε F,p {\displaystyle \varepsilon _{\text{F,p}}} and ε F,n {\displaystyle \varepsilon _{\text{F,n}}} are the Fermi energies of the protons and neutrons. Since these are proportional to Z 2 / 3 {\displaystyle Z^{2/3}} and N 2 / 3 {\displaystyle N^{2/3}} respectively, one gets E k = C ( Z 5 / 3 + N 5 / 3 ) {\displaystyle E_{\text{k}}=C(Z^{5/3}+N^{5/3})} for some constant C. The leading terms in the expansion in the difference N − Z {\displaystyle N-Z} are then E k = C 2 2 / 3 ( A 5 / 3 + 5 9 ( N − Z ) 2 A 1 / 3 ) + O ( ( N − Z ) 4 ) . {\displaystyle E_{\text{k}}={\frac {C}{2^{2/3}}}\left(A^{5/3}+{\frac {5}{9}}{\frac {(N-Z)^{2}}{A^{1/3}}}\right)+O{\big (}(N-Z)^{4}{\big )}.} At the zeroth order in the expansion the kinetic energy is just the overall Fermi energy ε F ≡ ε F,p = ε F,n {\displaystyle \varepsilon _{\text{F}}\equiv \varepsilon _{\text{F,p}}=\varepsilon _{\text{F,n}}} multiplied by 3 5 A {\displaystyle {\tfrac {3}{5}}A} . Thus we get E k = 3 5 ε F A + 1 3 ε F ( N − Z ) 2 A + O ( ( N − Z ) 4 ) . {\displaystyle E_{\text{k}}={\frac {3}{5}}\varepsilon _{\text{F}}A+{\frac {1}{3}}\varepsilon _{\text{F}}{\frac {(N-Z)^{2}}{A}}+O{\big (}(N-Z)^{4}{\big )}.} The first term contributes to the volume term in the semi-empirical mass formula, and the second term is minus the asymmetry term (remember, the kinetic energy contributes to the total binding energy with a negative sign). ε F {\displaystyle \varepsilon _{\text{F}}} is 38 MeV, so calculating a A {\displaystyle a_{\text{A}}} from the equation above, we get only half the measured value. The discrepancy is explained by our model not being accurate: nucleons in fact interact with each other and are not spread evenly across the nucleus. For example, in the shell model, a proton and a neutron with overlapping wavefunctions will have a greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons and neutrons to have the same quantum numbers (other than isospin), and thus increase the energy cost of asymmetry between them. 
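A quick numerical check of the expansion used above (a sketch; the overall constant C is set to 1 and the Z, N values are arbitrary) confirms that the leading terms reproduce the exact Fermi-ball kinetic energy to within a small relative error for modest neutron excess:

```python
# Numerical check of the expansion used above for the Fermi-ball kinetic energy:
# C*(Z^(5/3) + N^(5/3)) vs. (C/2^(2/3)) * (A^(5/3) + (5/9)*(N-Z)^2 / A^(1/3)).
# The constant C cancels in the comparison, so it is set to 1 here.
def exact(z: float, n: float) -> float:
    return z ** (5.0 / 3.0) + n ** (5.0 / 3.0)

def expanded(z: float, n: float) -> float:
    a = z + n
    return (a ** (5.0 / 3.0) + (5.0 / 9.0) * (n - z) ** 2 / a ** (1.0 / 3.0)) / 2.0 ** (2.0 / 3.0)

for z, n in [(50, 50), (50, 60), (50, 70)]:  # illustrative proton/neutron numbers
    ex, ap = exact(z, n), expanded(z, n)
    print(f"Z={z}, N={n}: exact={ex:.1f}, expansion={ap:.1f}, rel. error={(ap - ex) / ex:.2e}")
```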
One can also understand the asymmetry term intuitively as follows. It should be dependent on the absolute difference | N − Z | {\displaystyle |N-Z|} , and the form ( N − Z ) 2 {\displaystyle (N-Z)^{2}} is simple and differentiable, which is important for certain applications of the formula. In addition, small differences between Z and N do not have a high energy cost. The A in the denominator reflects the fact that a given difference | N − Z | {\displaystyle |N-Z|} is less significant for larger values of A. === Pairing term === The term δ ( A , Z ) {\displaystyle \delta (A,Z)} is known as the pairing term (possibly also known as the pairwise interaction). This term captures the effect of spin coupling. It is given by δ ( A , Z ) = { + δ 0 for even Z , N ( even A ) , 0 for odd A , − δ 0 for odd Z , N ( even A ) , {\displaystyle \delta (A,Z)={\begin{cases}+\delta _{0}&{\text{for even }}Z,N~({\text{even }}A),\\0&{\text{for odd }}A,\\-\delta _{0}&{\text{for odd }}Z,N~({\text{even }}A),\end{cases}}} where δ 0 {\displaystyle \delta _{0}} is found empirically to have a value of about 1000 keV, slowly decreasing with mass number A. Odd-odd nuclei tend to undergo beta decay to an adjacent even-even nucleus by changing a neutron to a proton or vice versa. The pairs have overlapping wave functions and sit very close together with a bond stronger than any other configuration. When the pairing term is substituted into the binding energy equation, for even Z, N, the pairing term adds binding energy, and for odd Z, N the pairing term removes binding energy. The dependence on mass number is commonly parametrized as δ 0 = a P A k P . {\displaystyle \delta _{0}=a_{\text{P}}A^{k_{\text{P}}}.} The value of the exponent kP is determined from experimental binding-energy data. In the past its value was often assumed to be −3/4, but modern experimental data indicate that a value of −1/2 is nearer the mark: δ 0 = a P A − 1 / 2 {\displaystyle \delta _{0}=a_{\text{P}}A^{-1/2}} or δ 0 = a P A − 3 / 4 . {\displaystyle \delta _{0}=a_{\text{P}}A^{-3/4}.} Due to the Pauli exclusion principle the nucleus would have a lower energy if the number of protons with spin up were equal to the number of protons with spin down. This is also true for neutrons. Only if both Z and N are even, can both protons and neutrons have equal numbers of spin-up and spin-down particles. This is a similar effect to the asymmetry term. The factor A k P {\displaystyle A^{k_{\text{P}}}} is not easily explained theoretically. The Fermi-ball calculation we have used above, based on the liquid-drop model but neglecting interactions, will give an A − 1 {\displaystyle A^{-1}} dependence, as in the asymmetry term. This means that the actual effect for large nuclei will be larger than expected by that model. This should be explained by the interactions between nucleons. For example, in the shell model, two protons with the same quantum numbers (other than spin) will have completely overlapping wavefunctions and will thus have greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons to form pairs of opposite spin. The same is true for neutrons. == Calculating coefficients == The coefficients are calculated by fitting to experimentally measured masses of nuclei. Their values can vary depending on how they are fitted to the data and which unit is used to express the mass. Several examples are as shown below. 
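The table of example coefficient fits referred to above did not survive in this text. As a stand-in, the sketch below assembles the binding-energy formula from the terms discussed in the preceding sections using one commonly quoted set of coefficients; the numerical values are assumptions of this sketch, not the source's table.

```python
# Sketch of the semi-empirical mass formula assembled from the terms above.
# The coefficients (in MeV) are one commonly quoted fit and are assumptions of
# this sketch, standing in for the example fits referred to in the text.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18  # MeV (assumed fit)

def pairing(z: int, n: int, k_p: float = -0.5) -> float:
    """delta(N, Z): +delta0 for even-even, 0 for odd A, -delta0 for odd-odd."""
    a = z + n
    delta0 = A_P * a ** k_p
    if z % 2 == 0 and n % 2 == 0:
        return +delta0
    if z % 2 == 1 and n % 2 == 1:
        return -delta0
    return 0.0

def binding_energy_mev(z: int, n: int) -> float:
    """E_B = a_V*A - a_S*A^(2/3) - a_C*Z(Z-1)/A^(1/3) - a_A*(N-Z)^2/A + delta(N, Z)."""
    a = z + n
    return (A_V * a
            - A_S * a ** (2.0 / 3.0)
            - A_C * z * (z - 1) / a ** (1.0 / 3.0)
            - A_A * (n - z) ** 2 / a
            + pairing(z, n))

# Example: iron-56 (Z=26, N=30); the measured value is roughly 8.8 MeV per nucleon.
eb = binding_energy_mev(26, 30)
print(f"Fe-56: E_B ~ {eb:.1f} MeV, {eb / 56:.2f} MeV per nucleon")
```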
The formula does not consider the internal shell structure of the nucleus. The semi-empirical mass formula therefore provides a good fit to heavier nuclei, and a poor fit to very light nuclei, especially 4He. For light nuclei, it is usually better to use a model that takes this shell structure into account. == Examples of consequences of the formula == By maximizing Eb(A, Z) with respect to Z, one would find the best neutron–proton ratio N/Z for a given atomic weight A. We get N / Z ≈ 1 + a C 2 a A A 2 / 3 . {\displaystyle N/Z\approx 1+{\frac {a_{\text{C}}}{2a_{\text{A}}}}A^{2/3}.} This is roughly 1 for light nuclei, but for heavy nuclei the ratio grows in good agreement with experiment. By substituting the above value of Z back into Eb, one obtains the binding energy as a function of the atomic weight, Eb(A). Maximizing Eb(A)/A with respect to A gives the nucleus which is most strongly bound, i.e. most stable. The value we get is A = 63 (copper), close to the measured values of A = 62 (nickel) and A = 58 (iron). The liquid-drop model also allows the computation of fission barriers for nuclei, which determine the stability of a nucleus against spontaneous fission. It was originally speculated that elements beyond atomic number 104 could not exist, as they would undergo fission with very short half-lives, though this formula did not consider stabilizing effects of closed nuclear shells. A modified formula considering shell effects reproduces known data and the predicted island of stability (in which fission barriers and half-lives are expected to increase, reaching a maximum at the shell closures), though also suggests a possible limit to existence of superheavy nuclei beyond Z = 120 and N = 184. == References == == Sources == Freedman, R.; Young, H. (2004). Sears and Zemansky's University Physics with Modern Physics (11th ed.). Pearson Addison Wesley. pp. 1633–1634. ISBN 978-0-8053-8768-1. Liverhant, S. E. (1960). Elementary Introduction to Nuclear Reactor Physics. John Wiley & Sons. pp. 58–62. LCCN 60011725. Choppin, G.; Liljenzin, J.-O.; Rydberg, J. (2002). "Nuclear Mass and Stability" (PDF). Radiochemistry and Nuclear Chemistry (3rd ed.). Butterworth-Heinemann. pp. 41–57. ISBN 978-0-7506-7463-8. == External links == Nuclear liquid drop model in the hyperphysics online reference at Georgia State University. Liquid drop model with parameter fit from First Observations of Excited States in the Neutron Deficient Nuclei 160,161W and 159Ta, Alex Keenan, PhD thesis, University of Liverpool, 1999 (HTML version).
Wikipedia/Liquid-drop_model
The Goddard Space Flight Center (GSFC) is a major NASA space research laboratory located approximately 6.5 miles (10.5 km) northeast of Washington, D.C., in Greenbelt, Maryland, United States. Established on May 1, 1959, as NASA's first space flight center, GSFC employs about 10,000 civil servants and contractors. Named for American rocket propulsion pioneer Robert H. Goddard, it is one of ten major NASA field centers. GSFC is partially within the former Goddard census-designated place; it has a Greenbelt mailing address. GSFC is the largest combined organization of scientists and engineers in the United States dedicated to increasing knowledge of the Earth, the Solar System, and the Universe via observations from space. GSFC is a major US laboratory for developing and operating uncrewed scientific spacecraft. GSFC conducts scientific investigation, development, manufacturing and operation of space systems, and development of related technologies. Goddard scientists can develop and support a mission, and Goddard engineers and technicians can design and build the spacecraft for that mission. Goddard scientist John C. Mather shared the 2006 Nobel Prize in Physics for his work on COBE. GSFC also operates two spaceflight tracking and data acquisition networks (the Space Network and the Near Earth Network), develops and maintains advanced space and Earth science data information systems, and develops satellite systems for the National Oceanic and Atmospheric Administration (NOAA). GSFC manages operations for many NASA and international missions including the James Webb Space Telescope (JWST) and Hubble Space Telescope (HST), the Explorers Program, the Discovery Program, the Earth Observing System (EOS), INTEGRAL, MAVEN, OSIRIS-REx, the Solar and Heliospheric Observatory (SOHO), the Solar Dynamics Observatory (SDO), Tracking and Data Relay Satellite System (TDRS), Fermi, and Swift. Past missions managed by GSFC include the Rossi X-ray Timing Explorer (RXTE), Compton Gamma Ray Observatory, SMM, COBE, IUE, and ROSAT. == History == Founded as Beltsville Space Center, Goddard was NASA's first of four space centers. Its original charter was to perform five major functions on behalf of NASA: technology development and fabrication, planning, scientific research, technical operations, and project management. The center is organized into several directorates, each charged with one of these key functions. On May 1, 1959, the center was renamed the Goddard Space Flight Center (GSFC) for Robert H. Goddard. Its first 157 employees transferred from the United States Navy's Project Vanguard missile program, and continued their work at the Naval Research Laboratory in Washington, D.C., while the center was under construction. Goddard Space Flight Center contributed to Project Mercury, America's first human spaceflight program. The Center assumed a lead role for the project in its early days and managed the first 250 employees involved in the effort, who were stationed at Langley Research Center in Hampton, Virginia. However, the size and scope of Project Mercury soon prompted NASA to build a new Manned Spacecraft Center, now the Johnson Space Center, in Houston, Texas. Project Mercury's personnel and activities were transferred there in 1961. Goddard Space Flight Center remained involved in the crewed space flight program, providing computer support and radar tracking of flights through a worldwide network of ground stations called the Spacecraft Tracking and Data Acquisition Network (STDN). 
However, the Center focused primarily on designing uncrewed satellites and spacecraft for scientific research missions. Goddard pioneered several fields of spacecraft development, including modular spacecraft design, which reduced costs and made it possible to repair satellites in orbit. Goddard's Solar Max satellite, launched in 1980, was repaired by astronauts on the Space Shuttle Challenger in 1984. The Hubble Space Telescope, launched in 1990, remains in service and continues to grow in capability thanks to its modular design and multiple servicing missions by the Space Shuttle. Today, the center remains involved in each of NASA's key programs. Goddard has developed more instruments for planetary exploration than any other organization, among them scientific instruments sent to every planet in the Solar System. The center's contribution to the Earth Science Enterprise includes several spacecraft in the Earth Observing System fleet as well as EOSDIS, a science data collection, processing, and distribution system. For the crewed space flight program, Goddard develops tools for use by astronauts during extra-vehicular activity, and operates the Lunar Reconnaissance Orbiter, a spacecraft designed to study the Moon in preparation for future crewed exploration. == Missions == A fact sheet highlighting many of Goddard's previous missions is recorded on a 40th anniversary webpage. === Past === Goddard has been involved in designing, building, and operating spacecraft since the days of Explorer 1, the nation's first artificial satellite. The list of these missions reflects a diverse set of scientific objectives and goals. The Landsat series of spacecraft has been studying the Earth's resources since the launch of the first mission in 1972. TIROS-1 launched in 1960 as the first success in a long series of weather satellites. The Spartan platform deployed from the space shuttle, allowing simple, low-cost 2–3 day missions. The second of NASA's Great Observatories, the Compton Gamma Ray Observatory, operated for nine years before re-entering the Earth's atmosphere in 2000. Another of Goddard's space science observatories, the Cosmic Background Explorer, provided unique scientific data about the early universe. === Present === Goddard currently supports the operation of dozens of spacecraft collecting scientific data. These missions include Earth science projects like the Earth Observing System (EOS) that includes the Terra, Aqua, and Aura spacecraft flying alongside several projects from other Centers or other countries. Other major Earth science projects that are currently operating include the Tropical Rainfall Measuring Mission (TRMM) and the Global Precipitation Measurement mission (GPM), missions that provide data critical to hurricane predictions. Many Goddard projects support other organizations, such as the US Geological Survey on Landsat-7 and -8, and the National Oceanic and Atmospheric Administration (NOAA) on the Geostationary Operational Environmental Satellite (GOES) system that provide weather predictions. Other Goddard missions support a variety of space science disciplines. Goddard's most famous project is the Hubble Space Telescope, a unique science platform that has been breaking new ground in astronomy since 1990. Other missions such as the Wilkinson Microwave Anisotropy Probe (WMAP) study the structure and evolution of the universe. Other missions such as the Solar and Heliospheric Observatory (SOHO) are currently studying the Sun and how its behavior affects life on the Earth. 
The Lunar Reconnaissance Orbiter (LRO) is mapping out the composition and topography of the Moon and the Solar Dynamics Observatory (SDO) is tracking the Sun's energy and influence on the Earth. The OSIRIS-REx asteroid sample return mission returned a sample from asteroid 101955 Bennu in 2023 and, under the name OSIRIS-APEX, is headed to asteroid 99942 Apophis, which it is due to reach in 2029. Particularly noteworthy operations include the James Webb Space Telescope, which was launched in December 2021 and enables investigations across many fields of astronomy and cosmology, such as observation of the first stars and the formation of the first galaxies. === Future === The Goddard community continually works on numerous operations and projects that have launch dates ranging from the upcoming year to a decade down the road. These operations also vary in what scientists hope they will uncover. == Science == === Addressing scientific questions === NASA's missions (and therefore Goddard's missions) address a broad range of scientific questions generally classified around four key areas: Earth sciences, astrophysics, heliophysics, and the Solar System. To simplify, Goddard studies the Earth and space. Within the Earth sciences area, Goddard plays a major role in research to advance our understanding of the Earth as an environmental system, looking at questions related to how the components of that environmental system have developed, how they interact and how they evolve. This is all important to enable scientists to understand the practical impacts of natural and human activities during the coming decades and centuries. Within Space Sciences, Goddard has distinguished itself with the 2006 Nobel Prize in Physics awarded to John Mather for his work on the COBE mission. Beyond the COBE mission, Goddard studies how the universe formed, what it is made of, how its components interact, and how it evolves. The center also contributes to research seeking to understand how stars and planetary systems form and evolve and studies the nature of the Sun's interaction with its surroundings. === From scientific questions to science missions === Based on existing knowledge accumulated through previous missions, new science questions are articulated. Missions are developed in the same way an experiment would be developed using the scientific method. In this context, Goddard does not work as an independent entity but rather as one of the 10 NASA centers working together to find answers to these scientific questions. Each mission starts with a set of scientific questions to be answered, and a set of scientific requirements for the mission, which build on what has already been discovered by prior missions. Scientific requirements spell out the types of data that will need to be collected. These scientific requirements are then transformed into mission concepts that start to specify the kind of spacecraft and scientific instruments that need to be developed for these scientific questions to be answered. Within Goddard, the Sciences and Exploration Directorate (SED) leads the center's scientific endeavors, including the development of technology related to scientific pursuits. === Collecting data in space – scientific instruments === Some of the most important technological advances developed by Goddard (and NASA in general) come from the need to innovate with new scientific instruments in order to be able to observe or measure phenomena in space that have never been measured or observed before. Instrument names tend to be known by their initials.
In some cases, the mission's name gives an indication of the type of instrument involved. For example, the James Webb Space Telescope is, as its name indicates, a telescope, but it includes a suite of four distinct scientific instruments: Mid-Infrared Instrument (MIRI); Near-Infrared Camera (NIRCam); Near-Infrared Spectrograph (NIRSpec); Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph (FGS-NIRISS). Scientists at Goddard work closely with the engineers to develop these instruments. Typically, a mission consists of a spacecraft with an instrument suite (multiple instruments) on board. In some cases, the scientific requirements dictate the need for multiple spacecraft. For example, the Magnetospheric Multiscale Mission (MMS) studies magnetic reconnection, a 3-D process. In order to capture data about this complex 3-D process, a set of four spacecraft fly in a tetrahedral formation. Each of the four spacecraft carries identical instrument suites. MMS is part of a larger program (Solar Terrestrial Probes) that studies the impact of the Sun on the Solar System. === Scientific collaborations === In many cases, Goddard works with partners (US Government agencies, aerospace industry, university-based research centers, other countries) that are responsible for developing the scientific instruments. In other cases, Goddard develops one or more of the instruments. The individual instruments are then integrated into an instrument suite which is then integrated with the spacecraft. In the case of MMS, for example, Southwest Research Institute (SwRI) was responsible for developing the scientific instruments and Goddard provides overall project management, mission systems engineering, the spacecraft, and mission operations. On the Lunar Reconnaissance Orbiter (LRO), six instruments have been developed by a range of partners. One of the instruments, the Lunar Orbiter Laser Altimeter (LOLA), was developed by Goddard. LOLA measures landing site slopes and lunar surface roughness in order to generate a 3-D map of the Moon. Another mission to be managed by Goddard is MAVEN. MAVEN is the second mission within the Mars Scout Program that is exploring the atmosphere of Mars in support of NASA's broader efforts to go to Mars. MAVEN carries eight instruments to measure characteristics of Mars' atmospheric gases, upper atmosphere, solar wind, and ionosphere. Instrument development partners include the University of Colorado at Boulder, and the University of California, Berkeley. Goddard contributed overall project management as well as two of the instruments, two magnetometers. === Managing scientific data === Once a mission is launched and reaches its destination, its instruments start collecting data. The data is transmitted back to Earth where it needs to be analyzed and stored for future reference. Goddard manages large collections of scientific data resulting from past and ongoing missions. The Earth Science Division hosts the Goddard Earth Science Data and Information Services Center (GES DISC). It offers Earth science data, information, and services to research scientists, applications scientists, applications users, and students. The NASA Space Science Data Coordinated Archive (NSSDCA), created at Goddard in 1966, hosts a permanent archive of space science data, including a large collection of images from space. 
== Spinoff technologies == Section 102(d) of the National Aeronautics and Space Act of 1958 calls for "the establishment of long-range studies of the potential benefits to be gained from, the opportunities for, and the problems involved in the utilization of aeronautical and space activities for peaceful and scientific purposes." Because of this mandate, the Technology Utilization Program was established in 1962, which required technologies to be brought down to Earth and commercialized in order to help the US economy and improve the quality of life. Documentation of these technologies that were spun off started in 1976 with "Spinoff 1976". Since then, NASA has produced a yearly publication of these spinoff technologies through the Innovative Partnerships Program Office. Goddard Space Flight Center has made significant contributions to the US economy and quality of life with the technologies it has spun off. Here are some examples: short-range radios developed for weather balloons have helped firefighters; aluminized Mylar from satellites has improved the insulation of sports equipment; laser optics systems have transformed the camera industry; and life-detection techniques designed for missions to other planets help scientists find bacteria in contaminated food. == Facilities == Goddard's partly wooded campus is 6.5 miles (10.5 km) northeast of Washington, D.C., in Prince George's County. The center is on Greenbelt Road, which is Maryland Route 193. Baltimore, Annapolis, and NASA Headquarters in Washington are 30–45 minutes away by highway. Greenbelt also has a train station with access to the Washington Metro system and the MARC commuter train's Camden line. === Testing chambers and Manufacturing Buildings === The High Bay Cleanroom located in building 29 is the world's largest ISO 7 cleanroom with 1.3 million cubic feet (37,000 m³) of space. Vacuum chambers in adjacent buildings 10 and 7 can be chilled or heated to ±200 °C (−328 to +392 °F). Adjacent building 15 houses the High Capacity Centrifuge, which is capable of generating 30 G on up to a 2.3-tonne (2.5-short-ton) load. Parsons Corporation assisted in the construction of the Class 10,000 cleanroom to support the Hubble Space Telescope as well as other Goddard missions. === High Energy Astrophysics Science Archive Research Center === The High Energy Astrophysics Science Archive Research Center (HEASARC) is NASA's designated center for the archiving and dissemination of high energy astronomy data and information. Information on X-ray and gamma ray astronomy and related NASA mission archives are maintained for public information and science access. === Software Assurance Technology Center === The Software Assurance Technology Center (SATC) is a NASA department founded in 1992 as part of the Systems Reliability and Safety Office at Goddard Space Flight Center. Its purpose was "to become a center of excellence in software assurance, dedicated to making measurable improvement in both the quality and reliability of software developed for NASA at GSFC". The center has been the source of research papers on software metrics, assurance, and risk management. === Near Space Operations Control Center (NSOCC) === During the Gemini mission era, NASA needed a new kind of operations hub, and the Manned Space Flight Network Operations Control Center (MSFNOCC) was created in building 13.
The name has changed over the years, and the facility's capabilities have grown with it; the facility has been the GSFC hub for human space flight and launch vehicle missions for years and has supported every Space Shuttle mission. After the MSFNOCC, the facility was renamed the Network Control Center (NCC). It remained the NCC until the late 1990s, when it became the Network Integration Center (NIC). The NIC supported the beginning of a new era of expanding space communications, which included the International Space Station (ISS). The facility was later renovated from the floor up to become the Near Space Operations Control Center (NSOCC) in 2023. The NSOCC currently provides critical mission support for various launch efforts, including SpaceX crew and cargo flights and science missions such as JWST and PACE, and provides critical data services for the Japan Aerospace Exploration Agency (JAXA) and the European Space Agency (ESA). The NSOCC provides a console-based workspace for various network elements to collaborate and provide the highest possible level of service to NASA and its customers. Some of the network elements included in the NSOCC support structure are the Flight Dynamics Facility (FDF), Human Space Flight (HSF), Launch Vehicles (LV) and Robotics mission support leadership, Search and Rescue (SAR), and the Data Acquisition Processing and Handling Network Environment (DAPHNE+). === Goddard Visitor Center === The Goddard Visitor Center is open to the public Tuesdays through Sundays, free of charge, and features displays of spacecraft and technologies developed there. The Hubble Space Telescope is represented by models and deep space imagery from recent missions. The center also features a Science On a Sphere projection system and an Educator's Resource Center available for use by teachers and education volunteers such as Boy and Girl Scout leaders, and hosts special events during the year. As an example, in September 2008 the Center opened its gates for Goddard LaunchFest. The event, free to the public, included robot competitions, tours of Goddard facilities hosted by NASA employees, and live entertainment on the Goddard grounds. GSFC also has a large ballroom for guest events such as lectures, presentations and dinner parties. === External facilities === GSFC operates three facilities that are not located at the Greenbelt site. These facilities are: The Wallops Flight Facility, located in Wallops Island, Virginia, which was established in 1945 and is one of the oldest launch sites in the world. Wallops manages NASA's sounding rocket program and supports approximately 35 missions each year. The Goddard Institute for Space Studies (GISS), located at Columbia University in New York City, where much of the center's theoretical research is conducted. Operated in close association with Columbia and other area universities, the institute provides supporting research in geophysics, astrophysics, astronomy and meteorology. The Katherine Johnson Independent Verification and Validation Facility (IV&V) in Fairmont, West Virginia, which was established in 1993 to improve the safety, reliability, and quality of software used in NASA missions. GSFC is also responsible for the White Sands Complex, a set of two sites in Las Cruces, New Mexico, but the site is owned by Johnson Space Center as part of the White Sands Test Facility. == Employees == Goddard Space Flight Center has a workforce of over 3,000 civil servant employees, 60% of whom are engineers and scientists.
There are approximately 7,000 supporting contractors on site every day. It is one of the largest concentrations of the world's premier space scientists and engineers. The center is organized into eight directorates, which includes Applied Engineering and Technology, Flight Projects, Science and Exploration, and Safety & Mission Assurance. Co-op students from universities in all 50 States can be found around the campus every season through the Cooperative Education Program. During the summers, programs such as the Summer Institute in Engineering and Computer Applications (SIECA) and Excellence through Challenging Exploration and Leadership (EXCEL) provide internship opportunities to students from the US and territories such as Puerto Rico to learn and partake in challenging scientific and engineering work. == Community == The Goddard Space Flight Center maintains ties with local area communities through external volunteer and educational programs. Employees are encouraged to take part in mentoring programs and take on speaking roles at area schools. On Center, Goddard hosts regular colloquiums in engineering, leadership and science. These events are open to the general public, but attendees must sign up in advance to procure a visitors pass for access to the center's main grounds. Passes can be obtained at the security office main gate on Greenbelt Road. Goddard also hosts several different internship opportunities, including NASA DEVELOP at Goddard Space Flight Center. == List of center directors == == Queen Elizabeth II's visit == Queen Elizabeth II of the United Kingdom and her husband Prince Philip, Duke of Edinburgh visited Goddard Space Flight Center on Tuesday, May 8, 2007. The tour of Goddard was near the end of the Queen's visit to commemorate the 400th anniversary of the founding of Jamestown in Virginia. The Queen spoke with crew aboard the International Space Station from the Network Integration Center (NIC, now NSOCC) located in Building 13. == Panorama == == See also == Goddard Earth Observing System Marshall Space Flight Center Jet Propulsion Laboratory == References == == External links == Official website Latest Goddard News Goddard Employees Welfare Association (GEWA) Goddard Fact Sheets Archived April 6, 2023, at the Wayback Machine Cleanroom webcam Goddard Visitor Center Goddard Scientific Visualization Studio Katherine Johnson Independent Verification and Validation (IV&V) Facility Dreams, Hopes, Realities: NASA's Goddard Space Flight Center, The First Forty Years by Lane E. Wallace, 1999 (full on-line book) Goddard Amateur Radio Club Archived January 24, 2022, at the Wayback Machine WA3NAN is known worldwide for their HF retransmissions of space flight missions. The Goddard Homer E. Newell Memorial Library NASA Goddard Space Flight Center Documentary produced by WETA-TV - aired in 2008
Wikipedia/High_Energy_Astrophysics_Science_Archive_Research_Center
Scientific formalism is a family of approaches to the presentation of science. It is viewed as an important part of the scientific method, especially in the physical sciences. == Levels of formalism == There are multiple levels of scientific formalism possible. At the lowest level, scientific formalism deals with the symbolic manner in which the information is presented. To achieve formalism in a scientific theory at this level, one starts with a well defined set of axioms, and from this follows a formal system. However, at a higher level, scientific formalism also involves consideration of the axioms themselves. These can be viewed as questions of ontology. For example, one can, at the lower level of formalism, define a property called 'existence'. However, at the higher level, the question of whether an electron exists in the same sense that a bacterium exists still needs to be resolved. Some actual formal theories on facts have been proposed. == In modern physics == The scientific climate of the twentieth century revived these questions. From about the time of Isaac Newton to that of James Clerk Maxwell they had been dormant, in the sense that the physical sciences could rely on the status of the real numbers as a description of the continuum, and an agnostic view of atoms and their structure. Quantum mechanics, the dominant physical theory after about 1925, was formulated in a way which raised questions of both types. In the Newtonian framework there was indeed a degree of comfort in the answers one could give. Consider for example the question of whether the Earth really goes round the Sun. In a frame of reference adapted to calculating the Earth's orbit, this is a mathematical but also tautological statement. Newtonian mechanics can answer the question, whether it is not equally the case that the Sun goes round the Earth, as it indeed appears to Earth-based astronomers. In Newton's theory there is a basic, fixed frame of reference that is inertial. The 'correct answer' is that the point of view of an observer in an inertial frame of reference is privileged: other observers see artifacts of their acceleration relative to an inertial frame (the inertial forces). Before Newton, Galileo would draw the consequences, from the Copernican heliocentric model. He was, however, constrained to call his work (in effect) scientific formalism, under the old 'description' saving the phenomena. To avoid going against authority, the elliptic orbits of the heliocentric model could be labelled as a more convenient device for calculations, rather than an actual description of reality. In general relativity, Newton's inertial frames are no longer privileged. In quantum mechanics, Paul Dirac argued that physical models were not there to provide semantic constructs allowing us to understand microscopic physics in language comparable to that we use on the familiar scale of everyday objects. His attitude, adopted by many theoretical physicists, is that a good model is judged by our capacity to use it to calculate physical quantities that can be tested experimentally. Dirac's view is close to what Bas van Fraassen calls constructive empiricism. == Duhem == A physicist who took the issues involved seriously was Pierre Duhem, writing at the beginning of the twentieth century. He wrote an extended analysis of the approach he saw as characteristically British, in requiring field theories of theoretical physics to have a mechanical-physical interpretation. 
That was an accurate characterisation of what Dirac (himself British) would later argue against. The national characteristics specified by Duhem do not need to be taken too seriously, since he also claimed that the use of abstract algebra, namely quaternions, was also characteristically British (as opposed to French or German); as if the use of classical analysis methods alone was important one way or the other. Duhem also wrote on saving the phenomena. Besides the Copernican Revolution debate over "saving the phenomena" (Greek: σῴζειν τὰ φαινόμενα, sozein ta phainomena) versus offering explanations, Duhem was inspired by Thomas Aquinas, who wrote, regarding eccentrics and epicycles, that Reason may be employed in two ways to establish a point: firstly, for the purpose of furnishing sufficient proof of some principle [...]. Reason is employed in another way, not as furnishing a sufficient proof of a principle, but as confirming an already established principle, by showing the congruity of its results, as in astronomy the theory of eccentrics and epicycles is considered as established, because thereby the sensible appearances of the heavenly movements can be explained (possunt salvari apparentia sensibilia); not, however, as if this proof were sufficient, forasmuch as some other theory might explain them. [...] The idea that a physical interpretation (in common language or classical ideas and physical entities, thought of or examined in an ontological or quasi-ontological sense) of a phenomenon in physics is not an ultimate or necessary condition for its understanding or validity also appears in modern structural realist views on science. == Bellarmine == Robert Bellarmine wrote to the heliocentrist Paolo Antonio Foscarini: Nor is it the same to demonstrate that by assuming the sun to be at the center and the earth in heaven one can save the appearances, and to demonstrate that in truth the sun is at the center and the earth in heaven; for I believe the first demonstration may be available, but I have very great doubts about the second… The physicist and historian of science Pierre Duhem "suggests that in one respect, at least, Bellarmine had shown himself a better scientist than Galileo by disallowing the possibility of a 'strict proof of the earth's motion,' on the grounds that an astronomical theory merely 'saves the appearances' without necessarily revealing what 'really happens.'" == See also == Andreas Osiander Scientific community metaphor == Notes ==
Wikipedia/Saving_the_phenomena
The hierarchy of the sciences is a theory formulated by Auguste Comte in the 19th century. This theory states that science develops over time, beginning with the simplest and most general scientific discipline, astronomy, which is the first to reach the "positive stage" (one of three in Comte's law of three stages). The theory further states that, as one moves up the "hierarchy", the sciences become more complex and less general and reach the positive stage later. Disciplines further up the hierarchy are said to depend more on the developments of their predecessors; the highest disciplines in the hierarchy are the social sciences. According to this theory, there are higher levels of consensus and faster rates of advancement in physics and other natural sciences than there are in the social sciences. == Evidence == Research has shown that, after controlling for the number of hypotheses being tested, positive results are 2.3 times more likely in the social sciences than in the physical sciences. It has also been found that the degree of scientific consensus is highest in the physical sciences, lowest in the social sciences, and intermediate in the biological sciences. Dean Simonton argues that a composite measure of the scientific status of disciplines ranks psychology much closer to biology than to sociology. == See also == Unity of science == References ==
Wikipedia/Hierarchy_of_the_sciences
In physics, gravity (from Latin gravitas 'weight'), also known as gravitation or a gravitational interaction, is a fundamental interaction, a mutual attraction between all massive particles. On Earth, gravity takes a slightly different meaning: the observed force between objects and the Earth. This force is dominated by the combined gravitational interactions of particles but also includes the effect of the Earth's rotation. Gravity gives weight to physical objects and is essential to understanding the mechanisms responsible for surface water waves and lunar tides. Gravity also has many important biological functions, helping to guide the growth of plants through the process of gravitropism and influencing the circulation of fluids in multicellular organisms. The gravitational attraction between primordial hydrogen and clumps of dark matter in the early universe caused the hydrogen gas to coalesce, eventually condensing and fusing to form stars. At larger scales this results in galaxies and clusters, so gravity is a primary driver for the large-scale structures in the universe. Gravity has an infinite range, although its effects become weaker as objects get farther away. Gravity is accurately described by the general theory of relativity, proposed by Albert Einstein in 1915, which describes gravity in terms of the curvature of spacetime, caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing, not even light, can escape once past the black hole's event horizon. However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them. Scientists are currently working to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics. Whether gravity is quantum in nature is not yet known with certainty, although experiments designed to test this question are now being conducted. == Definitions == Gravity is the word used to describe both a fundamental physical interaction and the observed consequences of that interaction on macroscopic objects on Earth. Gravity is, by far, the weakest of the four fundamental interactions, approximately 10³⁸ times weaker than the strong interaction, 10³⁶ times weaker than the electromagnetic force, and 10²⁹ times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light. Gravity, as the gravitational attraction at the surface of a planet or other celestial body, may also include the centrifugal force resulting from the planet's rotation (see § Earth's gravity). == History == === Ancient world === The nature and mechanism of gravity were explored by a wide range of ancient scholars. In Greece, Aristotle believed that objects fell towards the Earth because the Earth was the center of the Universe and attracted all of the mass in the Universe towards it.
He also thought that the speed of a falling object should increase with its weight, a conclusion that was later shown to be false. While Aristotle's view was widely accepted throughout Ancient Greece, there were other thinkers such as Plutarch who correctly predicted that the attraction of gravity was not unique to the Earth. Although he did not understand gravity as a force, the ancient Greek philosopher Archimedes discovered the center of gravity of a triangle. He postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity. Two centuries later, the Roman engineer and architect Vitruvius contended in his De architectura that gravity is not dependent on a substance's weight but rather on its "nature". In the 6th century CE, the Byzantine Alexandrian scholar John Philoponus proposed the theory of impetus, which modifies Aristotle's theory that "continuation of motion depends on continued action of a force" by incorporating a causative force that diminishes over time. In 628 CE, the Indian mathematician and astronomer Brahmagupta proposed the idea that gravity is an attractive force that draws objects to the Earth and used the term gurutvākarṣaṇ to describe it.: 105  In the ancient Middle East, gravity was a topic of fierce debate. The Persian intellectual Al-Biruni believed that the force of gravity was not unique to the Earth, and he correctly assumed that other heavenly bodies should exert a gravitational attraction as well. In contrast, Al-Khazini held the same position as Aristotle that all matter in the Universe is attracted to the center of the Earth. === Scientific revolution === In the mid-16th century, various European scientists experimentally disproved the Aristotelian notion that heavier objects fall at a faster rate. In particular, the Spanish Dominican priest Domingo de Soto wrote in 1551 that bodies in free fall uniformly accelerate. De Soto may have been influenced by earlier experiments conducted by other Dominican priests in Italy, including those by Benedetto Varchi, Francesco Beato, Luca Ghini, and Giovan Bellaso which contradicted Aristotle's teachings on the fall of bodies. The mid-16th century Italian physicist Giambattista Benedetti published papers claiming that, due to specific gravity, objects made of the same material but with different masses would fall at the same speed. With the 1586 Delft tower experiment, the Flemish physicist Simon Stevin observed that two cannonballs of differing sizes and weights fell at the same rate when dropped from a tower. In the late 16th century, Galileo Galilei's careful measurements of balls rolling down inclines allowed him to firmly establish that gravitational acceleration is the same for all objects.: 334  Galileo postulated that air resistance is the reason that objects with a low density and high surface area fall more slowly in an atmosphere. In his 1638 work Two New Sciences, Galileo proved that the distance traveled by a falling object is proportional to the square of the time elapsed. His method was a form of graphical numerical integration, since the modern tools of algebra and calculus were not yet available at the time.: 4  This was later confirmed by the Italian Jesuit scientists Grimaldi and Riccioli between 1640 and 1650. They also calculated the magnitude of the Earth's gravity by measuring the oscillations of a pendulum.
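In modern notation, the pendulum method attributed above to Grimaldi and Riccioli amounts to inverting the small-angle period formula. The sketch below is only an illustration with assumed numbers, not their historical data.

```python
import math

# Small-angle pendulum: T = 2*pi*sqrt(L/g), so g = 4*pi^2 * L / T^2.
L = 1.00   # pendulum length in metres (assumed for illustration)
T = 2.006  # measured period in seconds (assumed for illustration)

g = 4 * math.pi ** 2 * L / T ** 2
print(f"g = {g:.2f} m/s^2")  # about 9.8 m/s^2 for these numbers
```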
Galileo also broke with incorrect ideas of Aristotelian philosophy by regarding inertia as persistence of motion, not a tendency to come to rest. By considering that the laws of physics appear identical on a moving ship to those on land, Galileo developed the concepts of reference frame and the principle of relativity.: 5  These concepts would become central to Newton's mechanics, only to be transformed in Einstein's theory of gravity, the general theory of relativity.: 17  Johannes Kepler, in his 1609 book Astronomia nova, described gravity as a mutual attraction, claiming that if the Earth and Moon were not held apart by some force they would come together. He recognized that mechanical forces cause action, creating a kind of celestial machine. On the other hand, Kepler viewed the force of the Sun on the planets as magnetic and acting tangentially to their orbits, and he assumed with Aristotle that inertia meant objects tend to come to rest.: 846  In 1666, Giovanni Alfonso Borelli avoided the key problems that limited Kepler. By Borelli's time the concept of inertia had its modern meaning as the tendency of objects to remain in uniform motion, and he viewed the Sun as just another heavenly body. Borelli developed the idea of mechanical equilibrium, a balance between inertia and gravity. Newton cited Borelli's influence on his theory.: 848  In 1665, Robert Hooke published his Micrographia, in which he hypothesized that the Moon must have its own gravity.: 57  In a communication to the Royal Society in 1666 and his 1674 Gresham lecture, An Attempt to prove the Annual Motion of the Earth, Hooke took the important step of combining related hypotheses and then forming predictions based on them. He wrote: I will explain a system of the world very different from any yet received. It is founded on the following positions. 1. That all the heavenly bodies have not only a gravitation of their parts to their own proper centre, but that they also mutually attract each other within their spheres of action. 2. That all bodies having a simple motion, will continue to move in a straight line, unless continually deflected from it by some extraneous force, causing them to describe a circle, an ellipse, or some other curve. 3. That this attraction is so much the greater as the bodies are nearer. As to the proportion in which those forces diminish by an increase of distance, I own I have not discovered it.... Hooke was an important communicator who helped reformulate the scientific enterprise. He was one of the first professional scientists and worked as the then-new Royal Society's curator of experiments for 40 years. However, his valuable insights remained hypotheses, since he was unable to convert them into a mathematical theory of gravity and work out the consequences.: 853  For this he turned to Newton, writing him a letter in 1679, outlining a model of planetary motion in a void or vacuum due to attractive action at a distance. This letter likely turned Newton's thinking in a new direction leading to his revolutionary work on gravity. When Newton reported his results in 1686, Hooke claimed the inverse square law portion was his "notion". === Newton's theory of gravitation === Before 1684, scientists including Christopher Wren, Robert Hooke and Edmond Halley determined that Kepler's third law, relating to planetary orbital periods, would prove the inverse square law if the orbits were circles. However, the orbits were known to be ellipses.
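The step from Kepler's third law to an inverse-square attraction, which the text says Wren, Hooke and Halley could establish for circular orbits, is short enough to sketch; this is a standard textbook argument, not quoted from the sources above, with v the orbital speed, r the orbital radius, T the period, and k the constant in Kepler's third law.

```latex
\[
  a \;=\; \frac{v^{2}}{r}
    \;=\; \frac{(2\pi r/T)^{2}}{r}
    \;=\; \frac{4\pi^{2}r}{T^{2}},
  \qquad
  T^{2} = k\,r^{3}
  \;\Longrightarrow\;
  a \;=\; \frac{4\pi^{2}}{k\,r^{2}} \;\propto\; \frac{1}{r^{2}}.
\]
```

Extending the argument from circles to the observed ellipses was the step that remained open.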
At Halley's suggestion, Newton tackled the problem and was able to prove that elliptical orbits also imply the inverse-square relation, given Kepler's observations.: 13  In 1684, Isaac Newton sent a manuscript to Edmond Halley titled De motu corporum in gyrum ('On the motion of bodies in an orbit'), which provided a physical justification for Kepler's laws of planetary motion. Halley was impressed by the manuscript and urged Newton to expand on it, and a few years later Newton published a groundbreaking book called Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). The revolutionary aspect of Newton's theory of gravity was the unification of Earth-bound observations of acceleration with celestial mechanics.: 4  In his book, Newton described gravitation as a universal force, and claimed that it operated on objects "according to the quantity of solid matter which they contain and propagates on all sides to immense distances always at the inverse square of the distances".: 546  This formulation had two important parts. First was equating inertial mass and gravitational mass. Newton's second law defines force via F = ma in terms of inertial mass; his law of gravitational force uses the same mass. Newton did experiments with pendulums to verify this concept as best he could.: 11  The second aspect of Newton's formulation was the inverse square of distance. This aspect was not new: the astronomer Ismaël Bullialdus proposed it around 1640. Seeking proof, Newton made a quantitative analysis around 1665, considering the period and distance of the Moon's orbit and the timing of objects falling on Earth. Newton did not publish these results at the time because he could not prove that the Earth's gravity acts as if all its mass were concentrated at its center. That proof took him twenty years.: 13  Newton's Principia was well received by the scientific community, and his law of gravitation quickly spread across the European world. More than a century later, in 1821, his theory of gravitation rose to even greater prominence when it was used to predict the existence of Neptune. In that year, the French astronomer Alexis Bouvard used this theory to create a table modeling the orbit of Uranus, which was shown to differ significantly from the planet's actual trajectory. In order to explain this discrepancy, many astronomers speculated that there might be a large object beyond the orbit of Uranus which was disrupting its orbit. In 1846, the astronomers John Couch Adams and Urbain Le Verrier independently used Newton's law to predict Neptune's location in the night sky, and the planet was discovered there within a day. Newton's formulation was later condensed into the inverse-square law: {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}},} where F is the force, m1 and m2 are the masses of the objects interacting, r is the distance between the centers of the masses and G is the gravitational constant 6.674×10⁻¹¹ m³⋅kg⁻¹⋅s⁻². While G is also called Newton's constant, Newton himself did not use this constant or write this formula; he discussed only proportionality.
But this allowed him to come to an astounding conclusion we take for granted today: the gravity of the Earth on the Moon is the same as the gravity of the Earth on an apple: {\displaystyle M_{\text{earth}}\propto a_{\text{apple}}R_{\text{radius of earth}}^{2}=a_{\text{moon}}R_{\text{lunar orbit}}^{2}} Using the values known at the time, Newton was able to verify this form of his law. The value of G was eventually measured by Henry Cavendish in 1797.: 31  === Einstein's general relativity === Eventually, astronomers noticed an anomaly in the orbit of the planet Mercury which could not be explained by Newton's theory: the perihelion of the orbit was advancing by about 42.98 arcseconds per century more than Newtonian theory predicted. The most obvious explanation for this discrepancy was an as-yet-undiscovered celestial body, such as a planet orbiting the Sun even closer than Mercury, but all efforts to find such a body turned out to be fruitless. In 1915, Albert Einstein developed a theory of general relativity which was able to accurately model Mercury's orbit. Einstein's theory brought two other ideas with independent histories into the physical theories of gravity: the principle of relativity and non-Euclidean geometry. The principle of relativity, introduced by Galileo and used as a foundational principle by Newton, led to a long and fruitless search for a luminiferous aether after Maxwell's equations demonstrated that light propagated at a fixed speed independent of reference frame. In Newton's mechanics, velocities add: a cannon ball shot from a moving ship would travel with a trajectory which included the motion of the ship. Since light speed was fixed, it was assumed to travel in a fixed, absolute medium. Many experiments sought to reveal this medium but failed, and in 1905 Einstein's theory of special relativity showed that the aether was not needed. Special relativity proposed that mechanics be reformulated to use the Lorentz transformation already applicable to light rather than the Galilean transformation adopted by Newton. Special relativity, being restricted to the special case of inertial motion, did not cover gravity.: 4  While relativity was associated with mechanics and thus gravity, the idea of altering geometry only joined the story of gravity once mechanics required the Lorentz transformations. Geometry was an ancient science that gradually broke free of Euclidean limitations when Carl Gauss discovered in the 1800s that surfaces in any number of dimensions could be characterized by a metric, a distance measurement along the shortest path between two points that reduces to Euclidean distance at infinitesimal separation. Gauss' student Bernhard Riemann developed this into a complete geometry by 1854. These geometries are locally flat but have global curvature.: 4  In 1907, Einstein took his first step by using special relativity to create a new form of the equivalence principle. The equivalence of inertial mass and gravitational mass was a known empirical law. The m in Newton's second law, F = ma, has the same value as the m in Newton's law of gravity on Earth, F = GMm/r². In what he later described as "the happiest thought of my life", Einstein realized this meant that in free-fall an accelerated coordinate system exists with no local gravitational field.
Every description of gravity in any other coordinate system must transform to give no field in the free-fall case, a powerful invariance constraint on all theories of gravity.: 20  Einstein's description of gravity was accepted by the majority of physicists for two reasons. First, by 1910 his special relativity was accepted in German physics and was spreading to other countries. Second, his theory explained experimental results like the perihelion of Mercury and the bending of light around the Sun better than Newton's theory. In 1919, the British astrophysicist Arthur Eddington was able to confirm the predicted deflection of light during that year's solar eclipse. Eddington measured starlight deflections twice those predicted by Newtonian corpuscular theory, in accordance with the predictions of general relativity. Although Eddington's analysis was later disputed, this experiment made Einstein famous almost overnight and caused general relativity to become widely accepted in the scientific community. In 1959, American physicists Robert Pound and Glen Rebka performed an experiment in which they used gamma rays to confirm the prediction of gravitational time dilation. By sending the rays down a 74-foot tower and measuring their frequency at the bottom, the scientists confirmed that light is Doppler shifted as it moves towards a source of gravity. The observed shift also supports the idea that time runs more slowly in the presence of a gravitational field (many more wave crests pass in a given interval). If light moves outward from a strong source of gravity it will be observed with a redshift. The time delay of light passing close to a massive object was first identified by Irwin I. Shapiro in 1964 in interplanetary spacecraft signals. In 1971, scientists discovered the first known black hole, in the constellation Cygnus. The black hole was detected because it was emitting bursts of x-rays as it consumed a smaller star, and it came to be known as Cygnus X-1. This discovery confirmed yet another prediction of general relativity, because Einstein's equations implied that light could not escape from a sufficiently large and compact object. Frame dragging, the idea that a rotating massive object should twist spacetime around it, was confirmed by Gravity Probe B results in 2011. In 2015, the LIGO observatory detected faint gravitational waves, the existence of which had been predicted by general relativity. Scientists believe that the waves emanated from a black hole merger that occurred about 1.3 billion light-years away. == On Earth == Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body. The strength of the gravitational field is numerically equal to the acceleration of objects under its influence. The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities. For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI).
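The inverse-square comparison between an apple at the Earth's surface and the Moon, described in the history section above, can be checked numerically against the surface acceleration just discussed. The rounded modern values below are assumptions for illustration, not figures from this article.

```python
import math

G       = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24       # mass of the Earth, kg (assumed rounded value)
R_earth = 6.371e6        # mean radius of the Earth, m (assumed rounded value)
r_moon  = 3.844e8        # mean Earth-Moon distance, m (assumed rounded value)
T_moon  = 27.32 * 86400  # sidereal month, s (assumed rounded value)

g_surface      = G * M_earth / R_earth ** 2               # ~9.8 m/s^2 at the surface
a_moon_predict = g_surface * (R_earth / r_moon) ** 2      # inverse-square prediction at the Moon's distance
a_moon_orbit   = 4 * math.pi ** 2 * r_moon / T_moon ** 2  # centripetal acceleration of the Moon's orbit

print(g_surface, a_moon_predict, a_moon_orbit)
# The last two values agree to within about a percent, which is the content of Newton's "Moon test".
```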
The force of gravity experienced by objects on Earth's surface is the vector sum of two forces: (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are farthest from the center of the Earth. The force of gravity varies with latitude, and the resultant acceleration increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles. === Gravity wave === Waves on oceans, lakes, and other bodies of water occur when the gravitational equilibrium at the surface of the water is disturbed by for example wind. Similar effects occur in the atmosphere where equilibrium is disturbed by thermal weather fronts or mountain ranges. == Astrophysics == === Stars and black holes === During star formation, gravitational attraction in a cloud of hydrogen gas competes with thermal gas pressure. As the gas density increases, the temperature rises, then the gas radiates energy, allowing additional gravitational condensation. If the mass of gas in the region is low, the process continues until a brown dwarf or gas-giant planet is produced. If more mass is available, the additional gravitational energy allows the central region to reach pressures sufficient for nuclear fusion, forming a star. In a star, again the gravitational attraction competes, with thermal and radiation pressure in hydrostatic equilibrium until the star's atomic fuel runs out. The next phase depends upon the total mass of the star. Very low mass stars slowly cool as white dwarf stars with a small core balancing gravitational attraction with electron degeneracy pressure. Stars with masses similar to the Sun go through a red giant phase before becoming white dwarf stars. Higher mass stars have complex core structures that burn helium and high atomic number elements ultimately producing an iron core. As their fuel runs out, these stars become unstable producing a supernova. The result can be a neutron star where gravitational attraction balances neutron degeneracy pressure or, for even higher masses, a black hole where gravity operates alone with such intensity that even light cannot escape.: 121  === Gravitational radiation === General relativity predicts that energy can be transported out of a system through gravitational radiation also known as gravitational waves. The first indirect evidence for gravitational radiation was through measurements of the Hulse–Taylor binary in 1973. This system consists of a pulsar and neutron star in orbit around one another. Its orbital period has decreased since its initial discovery due to a loss of energy, which is consistent for the amount of energy loss due to gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993. The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors. The gravitational waves emitted during the collision of two black holes 1.3 billion light years from Earth were measured. This observation confirms the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe including the Big Bang. Neutron star and black hole formation also create detectable amounts of gravitational radiation. 
This research was awarded the Nobel Prize in Physics in 2017. === Dark matter === At the cosmological scale, gravity is a dominant player. About 5/6 of the total mass in the universe consists of dark matter, which interacts through gravity but not through electromagnetic interactions. The gravitation of clumps of dark matter, known as dark matter halos, attracts hydrogen gas, leading to the formation of stars and galaxies. === Gravitational lensing === Gravity acts on light and matter equally, meaning that a sufficiently massive object could warp light around it and create a gravitational lens. This phenomenon was first confirmed by observation in 1979 using the 2.1 meter telescope at Kitt Peak National Observatory in Arizona, which saw two mirror images of the same quasar whose light had been bent around the galaxy YGKOW G1. Many subsequent observations of gravitational lensing provide additional evidence for substantial amounts of dark matter around galaxies. Gravitational lenses do not focus like eyeglass lenses, but rather lead to annular shapes called Einstein rings.: 370  === Speed of gravity === In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which suggest that the speed of gravity is equal to the speed of light. This means that if the Sun suddenly disappeared, the Earth would keep orbiting the vacant point normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in Science Bulletin in February 2013. In October 2017, the LIGO and Virgo interferometer detectors received gravitational wave signals within 2 seconds of gamma-ray satellites detecting a burst from the same direction; optical telescopes later identified a source in the same region. This confirmed that the speed of gravitational waves was the same as the speed of light. === Anomalies and discrepancies === There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways. Galaxy rotation curves: Stars in galaxies follow a distribution of velocities where stars on the outskirts are moving faster than they should according to the observed distributions of luminous matter. Galaxies within galaxy clusters show a similar pattern. The pattern is considered strong evidence for dark matter, which would interact through gravitation but not electromagnetically; various modifications to Newtonian dynamics have also been proposed. Accelerated expansion: The expansion of the universe seems to be accelerating. Dark energy has been proposed to explain this. Flyby anomaly: Various spacecraft have experienced greater acceleration than expected during gravity assist maneuvers. The Pioneer anomaly, by contrast, has been shown to be explained by thermal recoil from heat radiated unevenly by the spacecraft itself. == General relativity == In modern physics, general relativity is considered the most successful theory of gravitation. Physicists continue to work to find solutions to the Einstein field equations that form the basis of general relativity and continue to test the theory, finding excellent agreement in all cases.: p.9  === Constraints === Any theory of gravity must conform to the requirements of special relativity and experimental observations. Newton's theory of gravity assumes action at a distance and therefore cannot be reconciled with special relativity.
The simplest generalization of Newton's approach would be a scalar theory, with the gravitational potential represented by a single number in a four-dimensional spacetime. However, this type of theory fails to predict gravitational redshift or the deflection of light by matter, and gives incorrect values for the precession of Mercury's perihelion. A vector field theory predicts negative-energy gravitational waves, so it also fails. Furthermore, no theory without curvature in spacetime can be consistent with special relativity. The simplest theory consistent with special relativity and the well-studied observations is general relativity. === General characteristics === Unlike Newton's formula, which has the single parameter G, gravity in general relativity is described in terms of 10 numbers formed into a metric tensor.: 70  In general relativity the effects of gravitation are described in different ways in different frames of reference. In a free-falling or co-moving coordinate system, an object travels in a straight line. In other coordinate systems, the object accelerates and thus is seen to move under a force. The path in spacetime (not 3D space) taken by a free-falling object is called a geodesic, and the length of that path, as measured by time in the object's frame, is the shortest (or, rarely, the longest) one. Consequently, the effect of gravity can be described as curving spacetime. In a weak stationary gravitational field, general relativity reduces to Newton's equations. The corrections introduced by general relativity on Earth are on the order of 1 part in a billion.: 77  === Einstein field equations === The Einstein field equations are a system of 10 partial differential equations which describe how matter affects the curvature of spacetime. The system may be expressed in the form G μ ν + Λ g μ ν = κ T μ ν , {\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },} where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant, G {\displaystyle G} is the Newtonian constant of gravitation and c {\displaystyle c} is the speed of light. The constant κ = 8 π G c 4 {\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}} is referred to as the Einstein gravitational constant. === Solutions === The non-linear second-order Einstein field equations are extremely complex and have been solved in only a few special cases. These cases, however, have been transformational in our understanding of the cosmos. Several solutions are the basis for understanding black holes and for our modern model of the evolution of the universe since the Big Bang.: 227  === Tests of general relativity === Testing the predictions of general relativity has historically been difficult, because they are almost identical to the predictions of Newtonian gravity for small energies and masses. A wide range of experiments has provided support for general relativity.: p.1–9  Today, Einstein's theory of relativity is used for all gravitational calculations where absolute precision is desired, although Newton's inverse-square law is accurate enough for virtually all ordinary calculations.: 79  === Gravity and quantum mechanics === Despite its success in predicting the effects of gravity at large scales, general relativity is ultimately incompatible with quantum mechanics. This is because general relativity describes gravity as a smooth, continuous distortion of spacetime, while quantum mechanics holds that all forces arise from the exchange of discrete particles known as quanta.
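As a rough numerical sketch of two magnitudes referred to above, the Einstein gravitational constant κ and the size of general-relativistic corrections at the Earth's surface, the following assumes standard reference values for G, c and the Earth's mass and radius, and estimates the correction by the dimensionless potential GM/(R c^2); that estimator is an assumption of the sketch, not a statement from the article.

```python
# Minimal sketch: the Einstein gravitational constant kappa = 8*pi*G/c^4 and the rough
# size of the weak-field correction at Earth's surface, estimated here by the
# dimensionless potential GM/(R c^2). All constants are standard reference values.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

kappa = 8 * math.pi * G / c**4
print(f"kappa = {kappa:.2e} s^2 m^-1 kg^-1")   # ~2.1e-43: enormous stress-energy is needed to curve spacetime

phi_over_c2 = G * M_earth / (R_earth * c**2)
print(f"GM/(R c^2) at Earth's surface = {phi_over_c2:.1e}")   # ~7e-10, i.e. of order 1 part in a billion
```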
This incompatibility between general relativity and quantum mechanics is especially vexing to physicists because the other three fundamental forces (the strong force, the weak force and electromagnetism) were reconciled with a quantum framework decades ago. As a result, researchers have begun to search for a theory that could unite both gravity and quantum mechanics under a more general framework. One path is to describe gravity in the framework of quantum field theory (QFT), which has successfully and accurately described the other fundamental interactions. The electromagnetic force arises from an exchange of virtual photons; in the QFT description of gravity, there is an analogous exchange of virtual gravitons. This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required. === Alternative theories === General relativity has withstood many tests over a large range of mass and size scales. When applied to interpret astronomical observations, cosmological models based on general relativity introduce two components to the universe, dark matter and dark energy, the nature of which is currently an unsolved problem in physics. The many successful, high-precision predictions of the standard model of cosmology have led astrophysicists to conclude that it, and thus general relativity, will be the basis for future progress. However, dark matter is not supported by the standard model of particle physics, physical models for dark energy do not match cosmological data, and some cosmological observations are inconsistent with the model. These issues have led to the study of alternative theories of gravity. == See also == == References == == Further reading == I. Bernard Cohen (1999) [1687]. "A Guide to Newton's Principia". The Principia: Mathematical Principles of Natural Philosophy. By Newton, Isaac. Translated by Cohen, I. Bernard. University of California Press. ISBN 9780520088160. OCLC 313895715. Halliday, David; Resnick, Robert; Krane, Kenneth S. (2001). Physics v. 1. New York: John Wiley & Sons. ISBN 978-0-471-32057-9. Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8. Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W.H. Freeman. ISBN 978-0-7167-0809-4. Thorne, Kip S.; Misner, Charles W.; Wheeler, John Archibald (1973). Gravitation. W.H. Freeman. ISBN 978-0-7167-0344-0. Panek, Richard (2 August 2019). "Everything you thought you knew about gravity is wrong". The Washington Post. == External links == The Feynman Lectures on Physics Vol. I Ch. 7: The Theory of Gravitation "Gravitation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Gravitation, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Theory_of_gravitation
Alternative models to the Standard Higgs Model are models which are considered by many particle physicists to solve some of the Higgs boson's existing problems. Two of the most currently researched models are quantum triviality, and Higgs hierarchy problem. == Overview == In particle physics, elementary particles and forces give rise to the world around us. Physicists explain the behaviors of these particles and how they interact using the Standard Model—a widely accepted framework believed to explain most of the world we see around us. Initially, when these models were being developed and tested, it seemed that the mathematics behind those models, which were satisfactory in areas already tested, would also forbid elementary particles from having any mass, which showed clearly that these initial models were incomplete. In 1964 three groups of physicists almost simultaneously released papers describing how masses could be given to these particles, using approaches known as symmetry breaking. This approach allowed the particles to obtain a mass, without breaking other parts of particle physics theory that were already believed reasonably correct. This idea became known as the Higgs mechanism, and later experiments confirmed that such a mechanism does exist—but they could not show exactly how it happens. The simplest theory for how this effect takes place in nature, and the theory that became incorporated into the Standard Model, was that if one or more of a particular kind of "field" (known as a Higgs field) happened to permeate space, and if it could interact with elementary particles in a particular way, then this would give rise to a Higgs mechanism in nature. In the basic Standard Model there is one field and one related Higgs boson; in some extensions to the Standard Model there are multiple fields and multiple Higgs bosons. In the years since the Higgs field and boson were proposed as a way to explain the origins of symmetry breaking, several alternatives have been proposed that suggest how a symmetry breaking mechanism could occur without requiring a Higgs field to exist. Models which do not include a Higgs field or a Higgs boson are known as Higgsless models. In these models, strongly interacting dynamics rather than an additional (Higgs) field produce the non-zero vacuum expectation value that breaks electroweak symmetry. == List of alternative models == A partial list of proposed alternatives to a Higgs field as a source for symmetry breaking includes: Technicolor models break electroweak symmetry through new gauge interactions, which were originally modeled on quantum chromodynamics. Extra-dimensional Higgsless models use the fifth component of the gauge fields to play the role of the Higgs fields. It is possible to produce electroweak symmetry breaking by imposing certain boundary conditions on the extra dimensional fields, increasing the unitarity breakdown scale up to the energy scale of the extra dimension. Through the AdS/QCD correspondence this model can be related to technicolor models and to "UnHiggs" models in which the Higgs field is of unparticle nature. Models of composite W and Z vector bosons. Top quark condensate. "Unitary Weyl gauge". By adding a suitable gravitational term to the standard model action in curved spacetime, the theory develops a local conformal (Weyl) invariance. The conformal gauge is fixed by choosing a reference mass scale based on the gravitational coupling constant. 
This approach generates masses for the vector bosons and matter fields in a manner similar to the Higgs mechanism, but without traditional spontaneous symmetry breaking. Asymptotically safe weak interactions based on some nonlinear sigma models. Preon models and models inspired by preons, such as the Ribbon model of Standard Model particles by Sundance Bilson-Thompson, based on braid theory and compatible with loop quantum gravity and similar theories. This model not only explains mass but leads to an interpretation of electric charge as a topological quantity (twists carried on the individual ribbons) and colour charge as modes of twisting. Symmetry breaking driven by non-equilibrium dynamics of quantum fields above the electroweak scale. Unparticle physics and the Unhiggs. These are models that posit that the Higgs sector and Higgs boson are scale invariant, also known as unparticle physics. In the theory of superfluid vacuum, masses of elementary particles can arise as a result of interaction with the physical vacuum, similar to the gap generation mechanism in superconductors. UV-completion by classicalization, in which the unitarization of WW scattering happens through the creation of classical configurations. == See also == Composite Higgs models == References == == External links == Higgsless model on arxiv.org
Wikipedia/Alternatives_to_the_Standard_Higgs_Model
In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was developed by Arnold Sommerfeld after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski.: 22  The tensor allows related physical laws to be written concisely, and allows for the quantization of the electromagnetic field by the Lagrangian formulation described below. == Definition == The electromagnetic tensor, conventionally labelled F, is defined as the exterior derivative of the electromagnetic four-potential, A, a differential 1-form: F = d e f d A . {\displaystyle F\ {\stackrel {\mathrm {def} }{=}}\ \mathrm {d} A.} Therefore, F is a differential 2-form— an antisymmetric rank-2 tensor field—on Minkowski space. In component form, F μ ν = ∂ μ A ν − ∂ ν A μ . {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }.} where ∂ {\displaystyle \partial } is the four-gradient and A {\displaystyle A} is the four-potential. SI units for Maxwell's equations and the particle physicist's sign convention for the signature of Minkowski space (+ − − −), will be used throughout this article. === Relationship with the classical fields === The Faraday differential 2-form is given by F = ( E x / c ) d x ∧ d t + ( E y / c ) d y ∧ d t + ( E z / c ) d z ∧ d t + B x d y ∧ d z + B y d z ∧ d x + B z d x ∧ d y , {\displaystyle F=(E_{x}/c)\ dx\wedge dt+(E_{y}/c)\ dy\wedge dt+(E_{z}/c)\ dz\wedge dt+B_{x}\ dy\wedge dz+B_{y}\ dz\wedge dx+B_{z}\ dx\wedge dy,} where d t {\displaystyle dt} is the time element times the speed of light c {\displaystyle c} . This is the exterior derivative of its 1-form antiderivative A = A x d x + A y d y + A z d z − ( ϕ / c ) d t {\displaystyle A=A_{x}\ dx+A_{y}\ dy+A_{z}\ dz-(\phi /c)\ dt} , where ϕ ( x → , t ) {\displaystyle \phi ({\vec {x}},t)} has − ∇ → ϕ = E → {\displaystyle -{\vec {\nabla }}\phi ={\vec {E}}} ( ϕ {\displaystyle \phi } is a scalar potential for the irrotational/conservative vector field E → {\displaystyle {\vec {E}}} ) and A → ( x → , t ) {\displaystyle {\vec {A}}({\vec {x}},t)} has ∇ → × A → = B → {\displaystyle {\vec {\nabla }}\times {\vec {A}}={\vec {B}}} ( A → {\displaystyle {\vec {A}}} is a vector potential for the solenoidal vector field B → {\displaystyle {\vec {B}}} ). Note that { d F = 0 ⋆ d ⋆ F = J {\displaystyle {\begin{cases}dF=0\\{\star }d{\star }F=J\end{cases}}} where d {\displaystyle d} is the exterior derivative, ⋆ {\displaystyle {\star }} is the Hodge star, J = − J x d x − J y d y − J z d z + ρ d t {\displaystyle J=-J_{x}\ dx-J_{y}\ dy-J_{z}\ dz+\rho \ dt} (where J → {\displaystyle {\vec {J}}} is the electric current density, and ρ {\displaystyle \rho } is the electric charge density) is the 4-current density 1-form, is the differential forms version of Maxwell's equations. The electric and magnetic fields can be obtained from the components of the electromagnetic tensor. The relationship is simplest in Cartesian coordinates: E i = c F 0 i , {\displaystyle E_{i}=cF_{0i},} where c is the speed of light, and B i = − 1 / 2 ϵ i j k F j k , {\displaystyle B_{i}=-1/2\epsilon _{ijk}F^{jk},} where ϵ i j k {\displaystyle \epsilon _{ijk}} is the Levi-Civita tensor. 
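The definition F_{μν} = ∂_μA_ν − ∂_νA_μ and the stated relations between the tensor components and the E and B fields can be checked symbolically. The following is a minimal SymPy sketch, written under the same conventions as above (signature (+ − − −), x^0 = ct, covariant potential components (φ/c, −A)); the helper names are illustrative, not standard notation.

```python
# Minimal SymPy sketch: build F_{mu nu} = d_mu A_nu - d_nu A_mu from a four-potential and
# check that it reproduces E = -grad(phi) - dA/dt and B = curl(A).
# Assumed conventions: x^0 = c t, signature (+,-,-,-), covariant A_mu = (phi/c, -A_x, -A_y, -A_z).
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', real=True, positive=True)
phi = sp.Function('phi')(t, x, y, z)
Ax, Ay, Az = [sp.Function(n)(t, x, y, z) for n in ('A_x', 'A_y', 'A_z')]

A_cov = [phi / c, -Ax, -Ay, -Az]                  # covariant components A_mu

def d(mu, f):                                     # partial derivative with respect to x^mu
    return sp.diff(f, t) / c if mu == 0 else sp.diff(f, (x, y, z)[mu - 1])

F = sp.Matrix(4, 4, lambda mu, nu: d(mu, A_cov[nu]) - d(nu, A_cov[mu]))

E = [-sp.diff(phi, xi) - sp.diff(Ai, t) for xi, Ai in zip((x, y, z), (Ax, Ay, Az))]
B = [sp.diff(Az, y) - sp.diff(Ay, z),             # curl of A
     sp.diff(Ax, z) - sp.diff(Az, x),
     sp.diff(Ay, x) - sp.diff(Ax, y)]

# E_i = c F_{0i}, and the spatial components satisfy F_{23} = -B_x, F_{31} = -B_y, F_{12} = -B_z
assert all(sp.simplify(c * F[0, i + 1] - E[i]) == 0 for i in range(3))
assert all(sp.simplify(F[j, k] + B[i]) == 0 for i, (j, k) in enumerate([(2, 3), (3, 1), (1, 2)]))
print("E and B recovered from F as expected")
```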
This gives the fields in a particular reference frame; if the reference frame is changed, the components of the electromagnetic tensor will transform covariantly, and the fields in the new frame will be given by the new components. In contravariant matrix form with metric signature (+,-,-,-), F μ ν = [ 0 − E x / c − E y / c − E z / c E x / c 0 − B z B y E y / c B z 0 − B x E z / c − B y B x 0 ] . {\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The covariant form is given by index lowering, F μ ν = η α ν F β α η μ β = [ 0 E x / c E y / c E z / c − E x / c 0 − B z B y − E y / c B z 0 − B x − E z / c − B y B x 0 ] . {\displaystyle F_{\mu \nu }=\eta _{\alpha \nu }F^{\beta \alpha }\eta _{\mu \beta }={\begin{bmatrix}0&E_{x}/c&E_{y}/c&E_{z}/c\\-E_{x}/c&0&-B_{z}&B_{y}\\-E_{y}/c&B_{z}&0&-B_{x}\\-E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The Faraday tensor's Hodge dual is G α β = 1 2 ϵ α β γ δ F γ δ = [ 0 − B x − B y − B z B x 0 E z / c − E y / c B y − E z / c 0 E x / c B z E y / c − E x / c 0 ] {\displaystyle {G^{\alpha \beta }={\frac {1}{2}}\epsilon ^{\alpha \beta \gamma \delta }F_{\gamma \delta }={\begin{bmatrix}0&-B_{x}&-B_{y}&-B_{z}\\B_{x}&0&E_{z}/c&-E_{y}/c\\B_{y}&-E_{z}/c&0&E_{x}/c\\B_{z}&E_{y}/c&-E_{x}/c&0\end{bmatrix}}}} From now on in this article, when the electric or magnetic fields are mentioned, a Cartesian coordinate system is assumed, and the electric and magnetic fields are with respect to the coordinate system's reference frame, as in the equations above. === Properties === The matrix form of the field tensor yields the following properties: Antisymmetry: F μ ν = − F ν μ {\displaystyle F^{\mu \nu }=-F^{\nu \mu }} Six independent components: In Cartesian coordinates, these are simply the three spatial components of the electric field (Ex, Ey, Ez) and magnetic field (Bx, By, Bz). Inner product: If one forms an inner product of the field strength tensor a Lorentz invariant is formed F μ ν F μ ν = 2 ( B 2 − E 2 c 2 ) {\displaystyle F_{\mu \nu }F^{\mu \nu }=2\left(B^{2}-{\frac {E^{2}}{c^{2}}}\right)} meaning this number does not change from one frame of reference to another. Pseudoscalar invariant: The product of the tensor F μ ν {\displaystyle F^{\mu \nu }} with its Hodge dual G μ ν {\displaystyle G^{\mu \nu }} gives a Lorentz invariant: G γ δ F γ δ = 1 2 ϵ α β γ δ F α β F γ δ = − 4 c B ⋅ E {\displaystyle G_{\gamma \delta }F^{\gamma \delta }={\frac {1}{2}}\epsilon _{\alpha \beta \gamma \delta }F^{\alpha \beta }F^{\gamma \delta }=-{\frac {4}{c}}\mathbf {B} \cdot \mathbf {E} \,} where ϵ α β γ δ {\displaystyle \epsilon _{\alpha \beta \gamma \delta }} is the rank-4 Levi-Civita symbol. The sign for the above depends on the convention used for the Levi-Civita symbol. The convention used here is ϵ 0123 = − 1 {\displaystyle \epsilon _{0123}=-1} . Determinant: det ( F ) = 1 c 2 ( B ⋅ E ) 2 {\displaystyle \det \left(F\right)={\frac {1}{c^{2}}}\left(\mathbf {B} \cdot \mathbf {E} \right)^{2}} which is proportional to the square of the above invariant. Trace: F = F μ μ = 0 {\displaystyle F={{F}^{\mu }}_{\mu }=0} which is equal to zero. === Significance === This tensor simplifies and reduces Maxwell's equations as four vector calculus equations into two tensor field equations. 
In electrostatics and electrodynamics, Gauss's law and Ampère's circuital law are respectively: ∇ ⋅ E = ρ ϵ 0 , ∇ × B − 1 c 2 ∂ E ∂ t = μ 0 J {\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho }{\epsilon _{0}}},\quad \nabla \times \mathbf {B} -{\frac {1}{c^{2}}}{\frac {\partial \mathbf {E} }{\partial t}}=\mu _{0}\mathbf {J} } and reduce to the inhomogeneous Maxwell equation: ∂ α F β α = − μ 0 J β {\displaystyle \partial _{\alpha }F^{\beta \alpha }=-\mu _{0}J^{\beta }} , where J α = ( c ρ , J ) {\displaystyle J^{\alpha }=(c\rho ,\mathbf {J} )} is the four-current. In magnetostatics and magnetodynamics, Gauss's law for magnetism and Maxwell–Faraday equation are respectively: ∇ ⋅ B = 0 , ∂ B ∂ t + ∇ × E = 0 {\displaystyle \nabla \cdot \mathbf {B} =0,\quad {\frac {\partial \mathbf {B} }{\partial t}}+\nabla \times \mathbf {E} =\mathbf {0} } which reduce to the Bianchi identity: ∂ γ F α β + ∂ α F β γ + ∂ β F γ α = 0 {\displaystyle \partial _{\gamma }F_{\alpha \beta }+\partial _{\alpha }F_{\beta \gamma }+\partial _{\beta }F_{\gamma \alpha }=0} or using the index notation with square brackets[note 1] for the antisymmetric part of the tensor: ∂ [ α F β γ ] = 0 {\displaystyle \partial _{[\alpha }F_{\beta \gamma ]}=0} Using the expression relating the Faraday tensor to the four-potential, one can prove that the above antisymmetric quantity turns to zero identically ( ≡ 0 {\displaystyle \equiv 0} ). This tensor equation reproduces the homogeneous Maxwell's equations. == Relativity == The field tensor derives its name from the fact that the electromagnetic field is found to obey the tensor transformation law, this general property of physical laws being recognised after the advent of special relativity. This theory stipulated that all the laws of physics should take the same form in all coordinate systems – this led to the introduction of tensors. The tensor formalism also leads to a mathematically simpler presentation of physical laws. The inhomogeneous Maxwell equation leads to the continuity equation: ∂ α J α = J α , α = 0 {\displaystyle \partial _{\alpha }J^{\alpha }=J^{\alpha }{}_{,\alpha }=0} implying conservation of charge. Maxwell's laws above can be generalised to curved spacetime by simply replacing partial derivatives with covariant derivatives: F [ α β ; γ ] = 0 {\displaystyle F_{[\alpha \beta ;\gamma ]}=0} and F α β ; α = μ 0 J β {\displaystyle F^{\alpha \beta }{}_{;\alpha }=\mu _{0}J^{\beta }} where the semicolon notation represents a covariant derivative, as opposed to a partial derivative. These equations are sometimes referred to as the curved space Maxwell equations. Again, the second equation implies charge conservation (in curved spacetime): J α ; α = 0 {\displaystyle J^{\alpha }{}_{;\alpha }\,=0} The stress-energy tensor of electromagnetism T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] , {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,,} satisfies T α β , β + F α β J β = 0 . 
{\displaystyle {T^{\alpha \beta }}_{,\beta }+F^{\alpha \beta }J_{\beta }=0\,.} == Lagrangian formulation of classical electromagnetism == Classical electromagnetism and Maxwell's equations can be derived from the action: S = ∫ ( − 1 4 μ 0 F μ ν F μ ν − J μ A μ ) d 4 x {\displaystyle {\mathcal {S}}=\int \left(-{\begin{matrix}{\frac {1}{4\mu _{0}}}\end{matrix}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\right)\mathrm {d} ^{4}x\,} where d 4 x {\displaystyle \mathrm {d} ^{4}x} is over space and time. This means the Lagrangian density is L = − 1 4 μ 0 F μ ν F μ ν − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν − ∂ ν A μ ) ( ∂ μ A ν − ∂ ν A μ ) − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν − ∂ μ A ν ∂ ν A μ + ∂ ν A μ ∂ ν A μ ) − J μ A μ {\displaystyle {\begin{aligned}{\mathcal {L}}&=-{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\right)\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }-\partial _{\mu }A_{\nu }\partial ^{\nu }A^{\mu }+\partial _{\nu }A_{\mu }\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\\end{aligned}}} The two middle terms in the parentheses are the same, as are the two outer terms, so the Lagrangian density is L = − 1 2 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν ) − J μ A μ . {\displaystyle {\mathcal {L}}=-{\frac {1}{2\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }\right)-J^{\mu }A_{\mu }.} Substituting this into the Euler–Lagrange equation of motion for a field: ∂ μ ( ∂ L ∂ ( ∂ μ A ν ) ) − ∂ L ∂ A ν = 0 {\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }A_{\nu })}}\right)-{\frac {\partial {\mathcal {L}}}{\partial A_{\nu }}}=0} So the Euler–Lagrange equation becomes: − ∂ μ 1 μ 0 ( ∂ μ A ν − ∂ ν A μ ) + J ν = 0. {\displaystyle -\partial _{\mu }{\frac {1}{\mu _{0}}}\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)+J^{\nu }=0.\,} The quantity in parentheses above is just the field tensor, so this finally simplifies to ∂ μ F μ ν = μ 0 J ν {\displaystyle \partial _{\mu }F^{\mu \nu }=\mu _{0}J^{\nu }} That equation is another way of writing the two inhomogeneous Maxwell's equations (namely, Gauss's law and Ampère's circuital law) using the substitutions: 1 c E i = − F 0 i ϵ i j k B k = − F i j {\displaystyle {\begin{aligned}{\frac {1}{c}}E^{i}&=-F^{0i}\\\epsilon ^{ijk}B_{k}&=-F^{ij}\end{aligned}}} where i, j, k take the values 1, 2, and 3. === Hamiltonian form === The Hamiltonian density can be obtained with the usual relation, H ( ϕ i , π i ) = π i ϕ ˙ i ( ϕ i , π i ) − L . {\displaystyle {\mathcal {H}}(\phi ^{i},\pi _{i})=\pi _{i}{\dot {\phi }}^{i}(\phi ^{i},\pi _{i})-{\mathcal {L}}\,.} Here ϕ i = A i {\displaystyle \phi ^{i}=A^{i}} are the fields and the momentum density of the EM field is π i = T 0 i = 1 μ 0 F 0 α F i α = 1 μ 0 c E × B . {\displaystyle \pi _{i}=T_{0i}={\frac {1}{\mu _{0}}}F_{0}{}^{\alpha }F_{i\alpha }={\frac {1}{\mu _{0}c}}\mathbf {E} \times \mathbf {B} \,.} such that the conserved quantity associated with translation from Noether's theorem is the total momentum P = ∑ α m α x ˙ α + 1 μ 0 c ∫ V d 3 x E × B . 
{\displaystyle \mathbf {P} =\sum _{\alpha }m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {1}{\mu _{0}c}}\int _{\mathcal {V}}\mathrm {d} ^{3}x\,\mathbf {E} \times \mathbf {B} \,.} The Hamiltonian density for the electromagnetic field is related to the electromagnetic stress-energy tensor T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] . {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,.} as H = T 00 = 1 2 ( ϵ 0 E 2 + 1 μ 0 B 2 ) = 1 8 π ( E 2 + B 2 ) . {\displaystyle {\mathcal {H}}=T_{00}={\frac {1}{2}}\left(\epsilon _{0}\mathbf {E} ^{2}+{\frac {1}{\mu _{0}}}\mathbf {B} ^{2}\right)={\frac {1}{8\pi }}\left(\mathbf {E} ^{2}+\mathbf {B} ^{2}\right)\,.} where we have neglected the energy density of matter, assuming only the EM field, and the last equality assumes the CGS system. The momentum of nonrelativistic charges interarcting with the EM field in the Coulomb gauge ( ∇ ⋅ A = ∇ i A i = 0 {\displaystyle \nabla \cdot \mathbf {A} =\nabla _{i}A^{i}=0} ) is p α = m α x ˙ α + q α c A ( x α ) . {\displaystyle \mathbf {p} _{\alpha }=m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\,.} The total Hamiltonian of the matter + EM field system is H = ∫ V d 3 x T 00 = H m a t + H e m . {\displaystyle H=\int _{\mathcal {V}}d^{3}x\,T_{00}=H_{\rm {mat}}+H_{\rm {em}}\,.} where for nonrelativistic point particles in the Coulomb gauge H m a t = ∑ α m α | x ˙ α | 2 + ∑ α < β q α q β | x α − x β | = ∑ α 1 2 m α [ p α − q α c A ( x α ) ] 2 + ∑ α < β q α q β | x α − x β | . {\displaystyle H_{\rm {mat}}=\sum _{\alpha }m_{\alpha }|{\dot {\mathbf {x} }}_{\alpha }|^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}=\sum _{\alpha }{\frac {1}{2m_{\alpha }}}\left[\mathbf {p} _{\alpha }-{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\right]^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}\,.} where the last term is identically 1 8 π ∫ V d 3 x E ∥ 2 {\displaystyle {\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\mathbf {E} _{\parallel }^{2}} where E ∥ i = ∇ i A 0 {\displaystyle {E}_{\parallel i}={\nabla _{i}}A_{0}} and H e m = 1 8 π ∫ V d 3 x ( E ⊥ 2 + B 2 ) . {\displaystyle H_{\rm {em}}={\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\left(\mathbf {E} _{\perp }^{2}+\mathbf {B} ^{2}\right)\,.} where and E ⊥ i = − 1 c ∂ 0 A i {\displaystyle {E}_{\perp i}=-{\frac {1}{c}}\partial _{0}A_{i}} . === Quantum electrodynamics and field theory === The Lagrangian of quantum electrodynamics extends beyond the classical Lagrangian established in relativity to incorporate the creation and annihilation of photons (and electrons): L = ψ ¯ ( i ℏ c γ α D α − m c 2 ) ψ − 1 4 μ 0 F α β F α β , {\displaystyle {\mathcal {L}}={\bar {\psi }}\left(i\hbar c\,\gamma ^{\alpha }D_{\alpha }-mc^{2}\right)\psi -{\frac {1}{4\mu _{0}}}F_{\alpha \beta }F^{\alpha \beta },} where the first part in the right hand side, containing the Dirac spinor ψ {\displaystyle \psi } , represents the Dirac field. In quantum field theory it is used as the template for the gauge field strength tensor. By being employed in addition to the local interaction Lagrangian it reprises its usual role in QED. 
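As a numerical check of the frame independence of the two field invariants quoted earlier (F_{μν}F^{μν} = 2(B^2 − E^2/c^2) and the pseudoscalar proportional to B·E), the following minimal NumPy sketch boosts the contravariant matrix F^{μν} given above and verifies that both quantities are unchanged. The field values and boost speed are arbitrary test numbers, not data from the article.

```python
# Minimal NumPy sketch: apply a Lorentz boost to the contravariant matrix F^{mu nu} and
# check that 2(B^2 - E^2/c^2) and B.E are frame independent. Test values are arbitrary.
import numpy as np

c = 2.998e8
E = np.array([1.0e8, 2.0e8, -3.0e8])    # V/m, arbitrary
B = np.array([0.4, -0.5, 0.6])          # T, arbitrary

def F_upper(E, B):
    Ex, Ey, Ez = E / c
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [Ex,  0.0, -Bz,  By],
                     [Ey,  Bz,  0.0, -Bx],
                     [Ez, -By,  Bx,  0.0]])

def fields_from_F(F):
    E = c * np.array([F[1, 0], F[2, 0], F[3, 0]])
    B = np.array([F[3, 2], F[1, 3], F[2, 1]])
    return E, B

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0, 0],       # boost along x
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

F = F_upper(E, B)
F_boosted = L @ F @ L.T                           # F'^{mu nu} = L^mu_a L^nu_b F^{ab}
E2, B2 = fields_from_F(F_boosted)

inv1 = lambda E, B: 2 * (B @ B - (E @ E) / c**2)  # equals F_{mu nu} F^{mu nu}
inv2 = lambda E, B: B @ E                         # proportional to the pseudoscalar invariant
print(np.isclose(inv1(E, B), inv1(E2, B2)), np.isclose(inv2(E, B), inv2(E2, B2)))  # True True
```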
== See also == Classification of electromagnetic fields Covariant formulation of classical electromagnetism Electromagnetic stress–energy tensor Gluon field strength tensor Ricci calculus Riemann–Silberstein vector == Notes == == References == Brau, Charles A. (2004). Modern Problems in Classical Electrodynamics. Oxford University Press. ISBN 0-19-514665-4. Jackson, John D. (1999). Classical Electrodynamics. John Wiley & Sons, Inc. ISBN 0-471-30932-X. Peskin, Michael E.; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory. Perseus Publishing. ISBN 0-201-50397-2.
Wikipedia/Field_strength_tensor
Nuclear Physics A, Nuclear Physics B, Nuclear Physics B: Proceedings Supplements and the discontinued Nuclear Physics are peer-reviewed scientific journals published by Elsevier. The scope of Nuclear Physics A is nuclear and hadronic physics, and that of Nuclear Physics B is high energy physics, quantum field theory, statistical systems, and mathematical physics. Nuclear Physics was established in 1956, and then split into Nuclear Physics A and Nuclear Physics B in 1967. A supplement series to Nuclear Physics B, called Nuclear Physics B: Proceedings Supplements, was published from 1987 until 2015 and continues as Nuclear and Particle Physics Proceedings. Nuclear Physics B is part of the SCOAP3 initiative. == Abstracting and indexing == === Nuclear Physics A === Current Contents/Physics, Chemical, & Earth Sciences === Nuclear Physics B === Current Contents/Physics, Chemical, & Earth Sciences == References == == External links == Nuclear Physics Nuclear Physics A Nuclear Physics B Nuclear Physics B: Proceedings Supplements
Wikipedia/Nuclear_Physics_(journal)
A Grand Unified Theory (GUT) is any model in particle physics that merges the electromagnetic, weak, and strong forces (the three gauge interactions of the Standard Model) into a single force at high energies. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct. Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction. GUT models predict that at even higher energy, the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE. The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of 10^16 GeV/c2 (only three orders of magnitude below the Planck scale of 10^19 GeV/c2)—and so are well beyond the reach of any foreseen particle collider experiments. Therefore, the particles predicted by GUT models cannot be observed directly; instead, the effects of grand unification might be detected through indirect observations such as proton decay, electric dipole moments of elementary particles, or the properties of neutrinos. Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles. While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model. Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well. == History == Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the Pati–Salam model of Abdus Salam and Jogesh Pati, based on a semisimple Lie algebra and also proposed in 1974, which pioneered the idea of unifying gauge interactions. The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Nanopoulos later that year was the first to use the acronym in a paper.
== Motivation == The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2) which allow only discrete charges, the remaining component, the weak hypercharge interaction is described by an abelian symmetry U(1) which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular, the weak mixing angle, grand unification ideally reduces the number of independent input parameters but is also constrained by observations. Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different. == Unification of matter particles == === SU(5) === SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model, and upon which the first Grand Unified Theory was based, is S U ( 5 ) ⊃ S U ( 3 ) × S U ( 2 ) × U ( 1 ) . {\displaystyle {\rm {SU(5)\supset SU(3)\times SU(2)\times U(1).}}} Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges, is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature. The two smallest irreducible representations of SU(5) are 5 (the defining representation) and 10. (These bold numbers indicate the dimension of the representation.) In the standard assignment, the 5 contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content. The hypothetical right-handed neutrinos are a singlet of SU(5), which means its mass is not forbidden by any symmetry; it doesn't need a spontaneous electroweak symmetry breaking which explains why its mass would be heavy (see seesaw mechanism). 
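A minimal numerical sketch of the charge-quantization argument above: the electric charge operator is a traceless generator of SU(5), so the electric charges within each irreducible multiplet must sum to zero. The multiplet content below follows the assignment just described, with three colours and with charge conjugates carrying the opposite charge; whether the five-dimensional multiplet is written as 5 or 5-bar depends on convention.

```python
# Minimal sketch: in SU(5) the electric charge operator is a traceless generator, so the
# charges in each irreducible multiplet must sum to zero. This fixes the down-quark charge
# at exactly one third of the electron charge once three colours are assumed.
from fractions import Fraction as Fr

# Five-dimensional multiplet: conjugate down quarks (3 colours) plus the lepton doublet.
five = 3 * [Fr(1, 3)] + [Fr(0), Fr(-1)]
# Ten-dimensional multiplet: left-handed quark doublet (3 colours each), conjugate up quarks, positron.
ten = 3 * [Fr(2, 3)] + 3 * [Fr(-1, 3)] + 3 * [Fr(-2, 3)] + [Fr(1)]

assert sum(five) == 0 and sum(ten) == 0   # tracelessness of the charge generator

# Using the five-dimensional condition alone: 3*Q(dbar) + Q(nu) + Q(e) = 0
Q_e, Q_nu, n_colours = Fr(-1), Fr(0), 3
Q_d = (Q_e + Q_nu) / n_colours            # = -Q(dbar)
print("Q(d) =", Q_d)                      # -1/3
```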
=== SO(10) === The next simple Lie group which contains the standard model is S O ( 10 ) ⊃ S U ( 5 ) ⊃ S U ( 3 ) × S U ( 2 ) × U ( 1 ) . {\displaystyle {\rm {SO(10)\supset SU(5)\supset SU(3)\times SU(2)\times U(1).}}} Here, the unification of matter is even more complete, since the irreducible spinor representation 16 contains both the 5 and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses. This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector). Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for SU(5) and SO(10). Some of these mass relations hold approximately, but most don't (see Georgi-Jarlskog mass relation). The boson matrix for SO(10) is found by taking the 15 × 15 matrix from the 10 + 5 representation of SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10). === E6 === In some forms of string theory, including E8 × E8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) can't be the gauge group of a GUT. === Extended Grand Unified Theories === Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra which naturally appear in the higher SU(N) GUTs considerably modify the desert physics and lead to the realistic (string-scale) grand unification for conventional three quark-lepton families even without using supersymmetry (see below). On the other hand, due to a new missing VEV mechanism emerging in the supersymmetric SU(8) GUT the simultaneous solution to the gauge hierarchy (doublet-triplet splitting) problem and problem of unification of flavor can be argued. GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into 64 = 8 + 56 representations of SU(8). This can be divided into SU(5) × SU(3)F × U(1) which is the SU(5) theory together with some heavy bosons which act on the generation number. GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16). === Symplectic groups and quaternion representations === Symplectic gauge groups could also be considered. 
For example, Sp(8) (which is called Sp(4) in the article symplectic group) has a representation in terms of 4 × 4 quaternion unitary matrices which has a 16 dimensional real representation and so might be considered as a candidate for a gauge group. Sp(8) has 32 charged bosons and 4 neutral bosons. Its subgroups include SU(4) so can at least contain the gluons and photon of SU(3) × U(1). Although it's probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be: [ e + i e ¯ + j v + k v ¯ u r + i u ¯ r ¯ + j d r + k d ¯ r ¯ u g + i u ¯ g ¯ + j d g + k d ¯ g ¯ u b + i u ¯ b ¯ + j d b + k d ¯ b ¯ ] L {\displaystyle {\begin{bmatrix}e+i\ {\overline {e}}+j\ v+k\ {\overline {v}}\\u_{r}+i\ {\overline {u}}_{\mathrm {\overline {r}} }+j\ d_{\mathrm {r} }+k\ {\overline {d}}_{\mathrm {\overline {r}} }\\u_{g}+i\ {\overline {u}}_{\mathrm {\overline {g}} }+j\ d_{\mathrm {g} }+k\ {\overline {d}}_{\mathrm {\overline {g}} }\\u_{b}+i\ {\overline {u}}_{\mathrm {\overline {b}} }+j\ d_{\mathrm {b} }+k\ {\overline {d}}_{\mathrm {\overline {b}} }\\\end{bmatrix}}_{\mathrm {L} }} A further complication with quaternion representations of fermions is that there are two types of multiplication: left multiplication and right multiplication which must be taken into account. It turns out that including left and right-handed 4 × 4 quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. Thus the group of left- and right-handed 4 × 4 quaternion matrices is Sp(8) × SU(2) which does include the standard model bosons: S U ( 4 , H ) L × H R = S p ( 8 ) × S U ( 2 ) ⊃ S U ( 4 ) × S U ( 2 ) ⊃ S U ( 3 ) × S U ( 2 ) × U ( 1 ) {\displaystyle \mathrm {SU(4,\mathbb {H} )_{L}\times \mathbb {H} _{R}=Sp(8)\times SU(2)\supset SU(4)\times SU(2)\supset SU(3)\times SU(2)\times U(1)} } If ψ {\displaystyle \psi } is a quaternion valued spinor, A μ a b {\displaystyle A_{\mu }^{ab}} is quaternion hermitian 4 × 4 matrix coming from Sp(8) and B μ {\displaystyle B_{\mu }} is a pure vector quaternion (both of which are 4-vector bosons) then the interaction term is: ψ a ¯ γ μ ( A μ a b ψ b + ψ a B μ ) {\displaystyle \ {\overline {\psi ^{a}}}\gamma _{\mu }\left(A_{\mu }^{ab}\psi ^{b}+\psi ^{a}B_{\mu }\right)\ } === Octonion representations === It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3x3 hermitian matrix with certain additions for the diagonal elements then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (F4, E6, E7, or E8) depending on the details. ψ = [ a e μ e ¯ b τ μ ¯ τ ¯ c ] {\displaystyle \psi ={\begin{bmatrix}a&e&\mu \\{\overline {e}}&b&\tau \\{\overline {\mu }}&{\overline {\tau }}&c\end{bmatrix}}} [ ψ A , ψ B ] ⊂ J 3 ( O ) {\displaystyle \ [\psi _{A},\psi _{B}]\subset \mathrm {J} _{3}(\mathbb {O} )\ } Because they are fermions the anti-commutators of the Jordan algebra become commutators. It is known that E6 has subgroup O(10) and so is big enough to include the Standard Model. An E8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. 
To account for the 248 fermions in the lowest multiplet of E8, these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles spin direction. Each of these possesses theoretical problems. === Beyond Lie groups === Other structures have been suggested including Lie 3-algebras and Lie superalgebras. Neither of these fit with Yang–Mills theory. In particular Lie superalgebras would introduce bosons with incorrect statistics. Supersymmetry, however, does fit with Yang–Mills. == Unification of forces and the role of supersymmetry == The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running", which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale. The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale: Λ GUT ≈ 10 16 GeV . {\displaystyle \Lambda _{\text{GUT}}\approx 10^{16}\,{\text{GeV}}.} It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem—i.e., it stabilizes the electroweak Higgs mass against radiative corrections. == Neutrino masses == Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios. == Proposed theories == Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation, is termed a theory of everything. Some common mainstream GUT models are: Pati–Salam model – SU(4) × SU(2) × SU(2) Georgi–Glashow model – SU(5); and Flipped SU(5) – SU(5) × U(1) SO(10) model; and Flipped SO(10) – SO(10) × U(1) E6 model; and Trinification – SU(3) × SU(3) × SU(3) minimal left-right model – SU(3)C × SU(2)L × SU(2)R × U(1)B−L 331 model – SU(3)C × SU(3)L × U(1)X chiral color Not quite GUTs: Note: These models refer to Lie algebras not to Lie groups. 
The Lie group could be [ S U ( 4 ) × S U ( 2 ) × S U ( 2 ) ] / Z 2 , {\displaystyle [\mathrm {SU} (4)\times \mathrm {SU} (2)\times \mathrm {SU} (2)]/\mathbb {Z} _{2},} just to take a random example. The most promising candidate is SO(10). (Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from E8 × E8 heterotic string theory. GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others. But none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model. As of now, proton decay has never been experimentally observed. The minimal experimental limit on the proton's lifetime pretty much rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models. Some GUT theories like SU(5) and SO(10) suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale here). In theory, unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group. Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations. == Ingredients == A GUT model consists of a gauge group which is a compact Lie group, a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group and chiral Weyl fermions taking on values within a complex rep of the Lie group. The Lie group contains the Standard Model group and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking to the Standard Model. The Weyl fermions represent matter. == Current evidence == The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have led to renewed interest toward certain GUT such as SO(10). One of the few possible experimental tests of certain GUT is proton decay and also fermion masses. There are a few more special tests for supersymmetric GUT. However, minimum proton lifetimes from research (at or exceeding the 1034~1035 year range) have ruled out simpler GUTs and most non-SUSY models. 
The maximum upper limit on proton lifetime (if unstable), is calculated at 6×1039 years for SUSY models and 1.4×1036 years for minimal non-SUSY GUTs. The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common length scale called the GUT scale and equal approximately to 1016 GeV (slightly less than the Planck energy of 1019 GeV), which is somewhat suggestive. This interesting numerical observation is called the gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non supersymmetric) SO(10) models break with an intermediate gauge scale, such as the one of Pati–Salam group. == See also == B − L quantum number Classical unified field theories Paradigm shift Physics beyond the Standard Model Theory of everything X and Y bosons == Notes == == References == == Further reading == Stephen Hawking, A Brief History of Time, includes a brief popular overview. Langacker, Paul (2012). "Grand unification". Scholarpedia. 7 (10): 11419. Bibcode:2012SchpJ...711419L. doi:10.4249/scholarpedia.11419. == External links == The Algebra of Grand Unified Theories
Wikipedia/Grand_unified_theory
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental dimensionless physical constants of the standard model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with that of general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons. Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics. == Problems with the Standard Model == Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics proposals that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed. === Phenomena not explained === The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain: Dimensionless physical constants. The standard model does not explain the masses of the elementary particles (as fractions of the Planck mass), their mixing angles and phases, the coupling constants, the cosmological constant (multiplied with the Planck length), and the number of spatial dimensions. Gravity. The standard model does not explain gravity. The approach of simply adding a graviton to the Standard Model does not recreate what is observed experimentally without other modifications, as yet undiscovered, to the Standard Model. Moreover, the Standard Model is widely considered to be incompatible with the most successful theory of gravity to date, general relativity. Dark matter. Assuming that general relativity and Lambda CDM are true, cosmological observations tell us the standard model explains about 5% of the mass-energy present in the universe. About 26% should be dark matter (the remaining 69% being dark energy) which would behave just like other matter, but which only interacts weakly (if at all) with the Standard Model fields. Yet, the Standard Model does not supply any fundamental particles that are good dark matter candidates. Dark energy. As mentioned, the remaining 69% of the universe's energy should consist of the so-called dark energy, a constant energy density for the vacuum. Attempts to explain dark energy in terms of vacuum energy of the standard model lead to a mismatch of 120 orders of magnitude. Neutrino oscillations. According to the Standard Model, neutrinos do not oscillate. 
However, experiments and astronomical observations have shown that neutrino oscillation does occur. These are typically explained by postulating that neutrinos have mass. Neutrinos do not have mass in the Standard Model, and mass terms for the neutrinos can be added to the Standard Model by hand, but these lead to new theoretical problems. For example, the mass terms need to be extraordinarily small and it is not clear if the neutrino masses would arise in the same way that the masses of other fundamental particles do in the Standard Model. There are also other extensions of the Standard Model for neutrino oscillations which do not assume massive neutrinos, such as Lorentz-violating neutrino oscillations. Matter–antimatter asymmetry. The universe is made out of mostly matter. However, the standard model predicts that matter and antimatter should have been created in (almost) equal amounts if the initial conditions of the universe did not involve disproportionate matter relative to antimatter. Yet, there is no mechanism in the Standard Model to sufficiently explain this asymmetry. ==== Experimental results not explained ==== No experimental result is accepted as definitively contradicting the Standard Model at the 5 σ level, widely considered to be the threshold of a discovery in particle physics. Because every experiment contains some degree of statistical and systemic uncertainty, and the theoretical predictions themselves are also almost never calculated exactly and are subject to uncertainties in measurements of the fundamental constants of the Standard Model (some of which are tiny and others of which are substantial), it is to be expected that some of the hundreds of experimental tests of the Standard Model will deviate from it to some extent, even if there were no new physics to be discovered. At any given moment there are several experimental results standing that significantly differ from a Standard Model-based prediction. In the past, many of these discrepancies have been found to be statistical flukes or experimental errors that vanish as more data has been collected, or when the same experiments were conducted more carefully. On the other hand, any physics beyond the Standard Model would necessarily first appear in experiments as a statistically significant difference between an experiment and the theoretical prediction. The task is to determine which is the case. In each case, physicists seek to determine if a result is merely a statistical fluke or experimental error on the one hand, or a sign of new physics on the other. More statistically significant results cannot be mere statistical flukes but can still result from experimental error or inaccurate estimates of experimental precision. Frequently, experiments are tailored to be more sensitive to experimental results that would distinguish the Standard Model from theoretical alternatives. Some of the most notable examples include the following: B meson decay etc. – results from a BaBar experiment may suggest a surplus over Standard Model predictions of a type of particle decay ( B → D(*) τ− ντ ). In this, an electron and positron collide, resulting in a B meson and an antimatter B meson, which then decays into a D meson and a tau lepton as well as a tau antineutrino. 
While the level of certainty of the excess (3.4 σ in statistical jargon) is not enough to declare a break from the Standard Model, the results are a potential sign of something amiss and are likely to affect existing theories, including those attempting to deduce the properties of Higgs bosons. In 2015, LHCb reported observing a 2.1 σ excess in the same ratio of branching fractions. The Belle experiment also reported an excess. In 2017, a meta-analysis of all available data reported a cumulative 5 σ deviation from the Standard Model. Neutron lifetime puzzle – Free neutrons are not stable but decay after some time. Currently there are two methods used to measure this lifetime ("bottle" versus "beam") that give different values not within each other's error margin. The lifetime from the bottle method is currently {\displaystyle \tau _{n}=877.75\ \mathrm {s} }, about 10 seconds below the beam method value of {\displaystyle \tau _{n}=887.7\ \mathrm {s} }. This problem may be solved by taking into account neutron scattering, which decreases the lifetime of the involved neutrons. This error occurs in the bottle method and the effect depends on the shape of the bottle – thus this might be a systematic error specific to the bottle method. === Theoretical predictions not observed === Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism, which describes how the weak SU(2) gauge symmetry is broken and how fundamental particles obtain mass; it was the last particle predicted by the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about 126 GeV/c². A Higgs boson was confirmed to exist on March 14, 2013, although efforts to confirm that it has all of the properties predicted by the Standard Model are ongoing. A few hadrons (i.e. composite particles made of quarks) whose existence is predicted by the Standard Model, and which can be produced only at very high energies and only very rarely, have not yet been definitively observed, and "glueballs" (i.e. composite particles made of gluons) have also not yet been definitively observed. Some very rare particle decays predicted by the Standard Model have also not yet been definitively observed because insufficient data is available to make a statistically significant observation. === Unexplained relations === Koide formula – an unexplained empirical equation remarked upon by Yoshio Koide in 1981, and later by others. It relates the masses of the three charged leptons: {\displaystyle Q={\frac {m_{e}+m_{\mu }+m_{\tau }}{{\big (}{\sqrt {m_{e}}}+{\sqrt {m_{\mu }}}+{\sqrt {m_{\tau }}}{\big )}^{2}}}=0.666661(7)\approx {\frac {2}{3}}}. The Standard Model does not predict lepton masses (they are free parameters of the theory). However, the value of the Koide formula being equal to 2/3 within experimental errors of the measured lepton masses suggests the existence of a theory which is able to predict lepton masses. 
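The Koide relation is easy to check numerically. Below is a minimal sketch (added as an illustration, not part of the article) using approximate PDG-style values for the charged-lepton masses; since the expression is dimensionless, the choice of units does not matter.

```python
import math

# Approximate charged-lepton masses in MeV/c^2 (PDG-style central values; illustrative only)
m_e, m_mu, m_tau = 0.510999, 105.658, 1776.86

Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(f"Koide Q = {Q:.6f}  (compare 2/3 = {2/3:.6f})")
# Prints Q ≈ 0.6667, matching 2/3 to within the quoted experimental uncertainty.
```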
The CKM matrix, if interpreted as a rotation matrix in a 3-dimensional vector space, "rotates" a vector composed of square roots of down-type quark masses {\displaystyle {\big (}{\sqrt {m_{d}}},{\sqrt {m_{s}}},{\sqrt {m_{b}}}{\big )}} into a vector of square roots of up-type quark masses {\displaystyle {\big (}{\sqrt {m_{u}}},{\sqrt {m_{c}}},{\sqrt {m_{t}}}{\big )}}, up to vector lengths, a result due to Kohzo Nishida. The sum of squares of the Yukawa couplings of all Standard Model fermions is approximately 0.984, which is very close to 1. To put it another way, the sum of squares of fermion masses is very close to half of the squared Higgs vacuum expectation value. This sum is dominated by the top quark. The sum of squares of boson masses (that is, the W, Z, and Higgs bosons) is also very close to half of the squared Higgs vacuum expectation value; the ratio is approximately 1.004. Consequently, the sum of squared masses of all Standard Model particles is very close to the squared Higgs vacuum expectation value; the ratio is approximately 0.994. It is unclear if these empirical relationships represent any underlying physics; according to Koide, the rule he discovered "may be an accidental coincidence". === Theoretical problems === Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc insertions), but they imply a lack of understanding. These contrived features have motivated theorists to look for more fundamental theories with fewer parameters. Some of the contrivances are: Hierarchy problem – the standard model introduces particle masses through a process known as spontaneous symmetry breaking caused by the Higgs field. Within the standard model, the mass of the Higgs particle gets some very large quantum corrections due to the presence of virtual particles (mostly virtual top quarks). These corrections are much larger than the actual mass of the Higgs. This means that the bare mass parameter of the Higgs in the standard model must be fine-tuned in such a way that it almost completely cancels the quantum corrections. This level of fine-tuning is deemed unnatural by many theorists. Number of parameters – the standard model depends on 19 numerical parameters. Their values are known from experiment, but the origin of the values is unknown. Some theorists have tried to find relations between different parameters, for example, between the masses of particles in different generations, or ways of calculating particle masses, such as in asymptotic safety scenarios. Quantum triviality – suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar Higgs particles. This is sometimes called the Landau pole problem. A possible solution is that the renormalized value could go to zero as the cut-off is removed, meaning that the bare value is completely screened by quantum fluctuations. Strong CP problem – it can be argued theoretically that the standard model should contain a term in the strong interaction that breaks CP symmetry, causing slightly different interaction rates for matter vs. antimatter. Experimentally, however, no such violation has been found, implying that the coefficient of this term – if any – would be suspiciously close to zero. 
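The mass-sum relations quoted under "Unexplained relations" above can be verified with a few lines of arithmetic. The sketch below is illustrative only; the masses and the vacuum expectation value v ≈ 246.22 GeV are approximate, assumed values rather than numbers taken from the text.

```python
# Rough numerical check of the empirical mass-sum relations (values in GeV/c^2, approximate).
v = 246.22                                               # Higgs vacuum expectation value
fermions = [172.7, 4.18, 1.27, 1.777, 0.106, 0.000511]   # t, b, c, tau, mu, e (lighter ones negligible)
bosons = [80.38, 91.19, 125.25]                          # W, Z, Higgs

sum_f = sum(m**2 for m in fermions)
sum_b = sum(m**2 for m in bosons)

print(sum_f / (v**2 / 2))      # ~0.98: fermion masses squared vs. v^2/2, dominated by the top quark
print(sum_b / (v**2 / 2))      # ~1.00: W, Z and Higgs masses squared vs. v^2/2
print((sum_f + sum_b) / v**2)  # ~0.99: all Standard Model particles vs. the squared VEV
```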
== Additional experimental results == Research from experimental data on the cosmological constant, LIGO noise, and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or probed at the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at the TeV scale. == Grand unified theories == The standard model has three gauge symmetries: the colour SU(3), the weak isospin SU(2), and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization, the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 10^16 GeV these couplings become approximately equal. This has led to speculation that above this energy the three gauge symmetries of the standard model are unified in one single gauge symmetry with a simple gauge group, and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10). Theories that unify the standard model symmetries in this way are called Grand Unified Theories (or GUTs), and the energy scale at which the unified symmetry is broken is called the GUT scale. Generically, grand unified theories predict the creation of magnetic monopoles in the early universe, and instability of the proton. Neither of these has been observed, and this absence of observation puts limits on the possible GUTs. == Supersymmetry == Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them. == Neutrinos == In the standard model, neutrinos cannot spontaneously change flavor. Measurements, however, indicate that neutrinos do spontaneously change flavor, in what is called neutrino oscillations. Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model. These measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit of 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model. This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small. 
One approach to add masses to the neutrinos, the so-called seesaw mechanism, is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile, meaning that they do not participate in any of the standard model interactions. Because they have no charges, the right-handed neutrinos can act as their own anti-particles, and have a Majorana mass term. Like the other Dirac masses in the standard model, the neutrino Dirac mass is expected to be generated through the Higgs mechanism, and is therefore unpredictable. The standard model fermion masses differ by many orders of magnitude; the Dirac neutrino mass has at least the same uncertainty. On the other hand, the Majorana mass for the right-handed neutrinos does not arise from the Higgs mechanism, and is therefore expected to be tied to some energy scale of new physics beyond the standard model, for example the Planck scale. Therefore, any process involving right-handed neutrinos will be suppressed at low energies. The correction due to these suppressed processes effectively gives the left-handed neutrinos a mass that is inversely proportional to the right-handed Majorana mass, a mechanism known as the see-saw. The presence of heavy right-handed neutrinos thereby explains both the small mass of the left-handed neutrinos and the absence of the right-handed neutrinos in observations. However, due to the uncertainty in the Dirac neutrino masses, the right-handed neutrino masses can lie anywhere. For example, they could be as light as keV and be dark matter, they can have a mass in the LHC energy range and lead to observable lepton number violation, or they can be near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory. The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix, which is the neutrino analogue of the CKM quark mixing matrix. Unlike the quark mixing, which is almost minimal, the mixing of the neutrinos appears to be almost maximal. This has led to various speculations of symmetries between the various generations that could explain the mixing patterns. The mixing matrix could also contain several complex phases that break CP invariance, although there has been no experimental probe of these. These phases could potentially create a surplus of leptons over anti-leptons in the early universe, a process known as leptogenesis. This asymmetry could then at a later stage be converted in an excess of baryons over anti-baryons, and explain the matter-antimatter asymmetry in the universe. The light neutrinos are disfavored as an explanation for the observation of dark matter, based on considerations of large-scale structure formation in the early universe. Simulations of structure formation show that they are too hot – that is, their kinetic energy is large compared to their mass – while formation of structures similar to the galaxies in our universe requires cold dark matter. The simulations show that neutrinos can at best explain a few percent of the missing mass in dark matter. However, the heavy, sterile, right-handed neutrinos are a possible candidate for a dark matter WIMP. There are however other explanations for neutrino oscillations which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations. 
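As a rough numerical illustration of the seesaw mechanism described at the start of this section (a sketch added here, with scales chosen as assumptions rather than taken from the text): a Dirac mass of the order of the electroweak scale combined with a heavy right-handed Majorana mass gives light-neutrino masses of the observed sub-eV size via m_ν ≈ m_D²/M_R.

```python
# Seesaw estimate: m_nu ~ m_D**2 / M_R  (all masses in GeV; the chosen scales are illustrative only)
m_D = 100.0                          # Dirac mass of order the electroweak scale (assumption)
for M_R in (1e12, 1e14, 1e15):       # candidate right-handed Majorana masses (assumptions)
    m_nu_eV = (m_D**2 / M_R) * 1e9   # convert GeV to eV
    print(f"M_R = {M_R:.0e} GeV  ->  m_nu ~ {m_nu_eV:g} eV")
# Gives roughly 10 eV, 0.1 eV and 0.01 eV respectively: the heavier the right-handed
# neutrino, the lighter the left-handed one, which is the point of the see-saw.
```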
== Preon models == Several preon models have been proposed to address the unsolved problem concerning the fact that there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are further postulated to be able to combine to form the quarks and leptons of the standard model. One of the earliest preon models was the Rishon model. To date, no preon model is widely accepted or fully verified. == Theories of everything == Theoretical physics continues to strive toward a theory of everything, a theory that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. In practical terms the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity. Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired. The challenges in putting together such a theory are not just conceptual - they include the experimental aspects of the very high energies needed to probe exotic realms. Several notable attempts in this direction are supersymmetry, loop quantum gravity, and String theory. === Supersymmetry === === Loop quantum gravity === Theories of quantum gravity such as loop quantum gravity and others are thought by some to be promising candidates to the mathematical unification of quantum field theory and general relativity, requiring less drastic changes to existing theories. However recent work places stringent limits on the putative effects of quantum gravity on the speed of light, and disfavours some current models of quantum gravity. === String theory === Extensions, revisions, replacements, and reorganizations of the Standard Model exist in attempt to correct for these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything. Among the numerous variants of string theory, M-theory, whose mathematical existence was first proposed at a String Conference in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. Recent works have also proposed alternate string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.) including works by well-published physicists such as Lisa Randall. == See also == == Footnotes == == References == == Further reading == Lisa Randall (2005). Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions. HarperCollins. ISBN 978-0-06-053108-9. == External resources == Standard Model Theory @ SLAC Scientific American Apr 2006 LHC. Nature July 2007 Les Houches Conference, Summer 2005
Wikipedia/Physics_beyond_the_standard_model
Vehicle emissions control is the study of reducing the emissions produced by motor vehicles, especially internal combustion engines. The primary emissions studied include hydrocarbons, volatile organic compounds, carbon monoxide, carbon dioxide, nitrogen oxides, particulate matter, and sulfur oxides. Starting in the 1950s and 1960s, various regulatory agencies were formed with a primary focus on studying vehicle emissions and their effects on human health and the environment. As the world's understanding of vehicle emissions improved, so did the devices used to mitigate their impacts. The regulatory requirements of the Clean Air Act, which was amended many times, greatly restricted acceptable vehicle emissions. With the restrictions, vehicles started being designed more efficiently by utilizing various emission control systems and devices, which became more common in vehicles over time. == Types of emissions == Emissions of many air pollutants have been shown to have a variety of negative effects on public health and the natural environment. Emissions that are principal pollutants of concern include: Hydrocarbons (HC) – A class of unburned or partially burned fuel, hydrocarbons are toxins. Hydrocarbons are a major contributor to smog, which can be a major problem in urban areas. Prolonged exposure to hydrocarbons contributes to asthma, liver disease, lung disease, and cancer. Regulations governing hydrocarbons vary according to type of engine and jurisdiction; in some cases, "non-methane hydrocarbons" are regulated, while in other cases, "total hydrocarbons" are regulated. Technology for one application (to meet a non-methane hydrocarbon standard) may not be suitable for use in an application that has to meet a total hydrocarbon standard. Methane is not directly toxic, but is more difficult to break down catalytically than other hydrocarbons. A charcoal canister connected to the fuel vent lines is meant to collect and contain fuel vapors and route them either back to the fuel tank or, after the engine is started and warmed up, into the air intake to be burned in the engine. Volatile organic compounds (VOCs) – Organic compounds which typically have a boiling point less than or equal to 250 °C; for example chlorofluorocarbons (CFCs) and formaldehyde. Carbon monoxide (CO) – A product of incomplete combustion, inhaled carbon monoxide reduces the blood's ability to carry oxygen; overexposure (carbon monoxide poisoning) may be fatal. (Carbon monoxide persistently binds to hemoglobin, the oxygen-carrying chemical in red blood cells, where oxygen (O2) would temporarily bind. The bonding of CO excludes O2 and also reduces the ability of the hemoglobin to release already-bound oxygen, on both counts rendering the red blood cells ineffective. Recovery is by the slow release of bound CO and the body's production of new hemoglobin – a healing process – so full recovery from moderate to severe [but nonfatal] CO poisoning takes hours or days. Removing a person from a CO-poisoned atmosphere to fresh air stops the injury but does not yield prompt recovery, unlike the case where a person is removed from an asphyxiating atmosphere [i.e. one deficient in oxygen]. Toxic effects delayed by days are also common.) Nitrogen oxides (NOx) – Generated when nitrogen in the air reacts with oxygen at the high temperature and pressure inside the engine. NOx is a precursor to smog and acid rain. NOx includes NO and NO2. NO2 is extremely reactive. NOx production is increased when an engine runs at its most efficient (i.e. 
hottest) operating point, so there tends to be a natural tradeoff between efficiency and control of NOx emissions. It is expected to be reduced drastically by use of emulsion fuels. Particulate matter – Soot or smoke made up of particles in the micrometre size range: Particulate matter causes negative health effects, including but not limited to respiratory disease and cancer. Very fine particulate matter has been linked to cardiovascular disease. Sulfur oxide (SOx) – A general term for oxides of sulfur, which are emitted from motor vehicles burning fuel containing sulfur. Reducing the level of fuel sulfur reduces the level of sulfur oxides emitted from the tailpipe. == History == Throughout the 1950s and 1960s, various federal, state and local governments in the United States conducted studies into the numerous sources of air pollution. These studies ultimately attributed a significant portion of air pollution to the automobile, and concluded air pollution is not bounded by local political boundaries. At that time, such minimal emission control regulations as existed in the U.S. were promulgated at the municipal or, occasionally, the state level. The ineffective local regulations were gradually supplanted by more comprehensive state and federal regulations. By 1967 the State of California created the California Air Resources Board, and in 1970, the federal United States Environmental Protection Agency (EPA) was established. Both agencies, as well as other state agencies, now create and enforce emission regulations for automobiles in the United States. Similar agencies and regulations were contemporaneously developed and implemented in Canada, Western Europe, Australia, and Japan. The first effort at controlling pollution from automobiles was the PCV (positive crankcase ventilation) system. This draws crankcase fumes heavy in unburned hydrocarbons – a precursor to photochemical smog – into the engine's intake tract so they are burned rather than released unburned from the crankcase into the atmosphere. Positive crankcase ventilation was first installed on a widespread basis by law on all new 1961-model cars first sold in California. The following year, New York required it. By 1964, most new cars sold in the U.S. were so equipped, and PCV quickly became standard equipment on all vehicles worldwide. The first legislated exhaust (tailpipe) emission standards were promulgated by the State of California for 1966 model year for cars sold in that state, followed by the United States as a whole in model year 1968. Also in 1966, the first emission test cycle was enacted in the State of California measuring tailpipe emissions in PPM (parts per million). The standards were progressively tightened year by year, as mandated by the EPA. By the 1974 model year, the United States emission standards had tightened such that the de-tuning techniques used to meet them were seriously reducing engine efficiency and thus increasing fuel usage. The new emission standards for 1975 model year, as well as the increase in fuel usage, forced the invention of the catalytic converter for after-treatment of the exhaust gas. This was not possible with existing leaded gasoline, because the lead residue contaminated the platinum catalyst. In 1972, General Motors proposed to the American Petroleum Institute the elimination of leaded fuels for 1975 and later model year cars. The production and distribution of unleaded fuel was a major challenge, but it was completed successfully in time for the 1975 model year cars. 
All modern cars are now equipped with catalytic converters to further reduce vehicle emissions. Leading up to the 1981 model year in the United States, passenger vehicle manufacturers were faced with one of the biggest challenges in their history of meeting emissions regulations: how to meet the much more restrictive requirements of the Clean Air Act (United States) per its 1977 amendment. For example, to meet this challenge, General Motors created a new "Emissions Control Systems Project Center" (ECS), first located at the AC Spark Plug Engineering Building in Flint, Michigan. Its purpose was to "Have overall responsibility for the design and development of the carbureted and fuel injected closed loop 3-way catalyst system including related electronic controls, fuel metering, spark control, idle speed control, EGR, etc. currently planned through 1981." In 1990, the Clean Air Act (CAA) was amended to help further regulate harmful vehicle emissions. In the amendment, vehicle fuel regulations became more stringent by limiting how much sulfur was allowed in diesel fuel. The amendments also required a procedural change for the creation of gasoline to ensure there are fewer emissions of hydrocarbons (HC), carbon monoxide (CO), nitrogen oxides (NOx), particulate matter (PM), and volatile organic compounds (VOCs). Changes made to the CAA also required the use of oxygenated gasoline to reduce CO emissions. Throughout the years, the Environmental Protection Agency (EPA) continued to implement new regulations to reduce harmful emissions from vehicles. Some of the more important updated standards are as follows. 1983: For areas with major pollution problems, Inspection and Maintenance programs were created, meaning vehicles would need to get tested for emissions. 1985: Changed the allowable amount of lead in gasoline to 0.1 grams per gallon. 1991: Lowered the allowable emissions of HC and NOx from vehicle tailpipes. 1993: Began developing new vehicle technology to help triple the fuel economy in family sedans, thus reducing harmful emissions. 1996: Lead in gasoline officially banned. New regulations created with intentions of innovating vehicle design to be cleaner for the environment and improving engine performance. 1998: Diesel engine standards further increased in efforts to reduce ozone and PM emissions for various vehicles including industrial equipment. 1999: Tailpipe emission standards are finalized, sulfur contents in gasoline are reduced, and various boats/other marine vehicles using diesel had reduced emission limits for NOx and PM. === History of lead in gasoline === In 1922, lead was added to gasoline as an antiknock agent. It was not until 1969, nearly five decades later, that research began to show the negative health effects related to lead as a pollutant. Despite the plethora of negative health impacts discovered, no regulatory requirements were implemented to reduce lead levels in gasoline until 1983. Slowly, countries banned the use of lead in gasoline entirely over the years 1986 to 2021. Japan was first to ban lead in gasoline in 1986, with North and South America following, nearly every country in the two continents banning lead by 1998. Africa was the last to ban lead in gasoline, with most countries banning it in 2004 and 2005; the last, Algeria, did not ban it until 2021. == Regulatory agencies == The agencies charged with implementing exhaust emission standards vary from jurisdiction to jurisdiction, even in the same country. 
For example, in the United States, overall responsibility belongs to the EPA, but due to special requirements of the State of California, emissions in California are regulated by the Air Resources Board. In Texas, the Texas Railroad Commission is responsible for regulating emissions from LPG-fueled rich burn engines (but not gasoline-fueled rich burn engines). === North America === California Air Resources Board – California, United States (most sources) Environment Canada – Canada (most sources) Environmental Protection Agency – United States (most sources) Texas Railroad Commission – Texas, United States (LPG-fueled engines only) Transport Canada – Canada (trains and ships) === Japan === Ministry of Land, Infrastructure, Transport and Tourism / Road Transport Bureau / Environmental Policy Division === Europe === The European Union has control over regulation of emissions in EU member states; however, many member states have their own government bodies to enforce and implement these regulations in their respective countries. In short, the EU forms the policy (by setting limits such as the European emission standard) and the member states decide how to best implement it in their own country. ==== United Kingdom ==== In the United Kingdom, matters concerning environmental policy are "devolved powers" so that some of the constituent countries deal with it separately through their own government bodies set up to deal with environmental issues: Environment Agency – England and Wales Scottish Environment Protection Agency (SEPA) – Scotland Department of the Environment – Northern Ireland However, many UK-wide policies are handled by the Department for Environment, Food and Rural Affairs (DEFRA) and they are still subject to EU regulations. Emissions tests on diesel cars have not been carried out during MOTs in Northern Ireland for 12 years, despite being legally required. === China === Ministry of Ecology and Environment – Primary regulatory authority responsible for environmental protection, formulates policies, standards, and regulations which encompass vehicle emissions, and environmental impact assessments. Ministry of Industry and Information Technology – Creates and establishes goals for new energy vehicles (NEV), and commercial vehicles. Also plays a role in creating national emissions standards for cars. State Administration for Market Regulation – Responsible for market supervision and standardization in China. The State Administration for Market Regulation oversees the enforcement of vehicle emissions standards and ensures compliance by conducting inspections, testing, and quality control measures. National Development and Reform Commission - Responsible for macroeconomic planning and formulating energy-related policies in China. The National Development and Reform Commission plays a role on fuel efficiency standards, promoting alternative fuels, and implementing energy-saving measures to reduce emissions from vehicles. China Automotive Technology & Research Center - An independent research institution commissioned by the Ministry of Industry and Information Technology, to research, develop and draft the standards for fuel consumption limits of motor vehicles. Ministry of Transport of the People's Republic of China - While it is unclear whether this ministry has legal authority on whether they can enforce these standards, the Ministry of Transport will not issue commercial licenses to any heavy-duty vehicles that don't meet fuel consumption requirements they have set. 
Provincial and Municipal Environmental Protection Bureaus – At the provincial and municipal level, these bureaus are responsible for enforcing regulations such as those related to vehicle emissions. These bureaus monitor compliance, conduct inspections, and impose penalties for non-compliance. == Emission control system design == It was very important to system designers to meet the emission requirements using a minimum quantity of catalyst material (platinum and/or palladium) due to cost and supply issues. The General Motors "Emissions Control Systems Project Center" was "to follow the operational plans established by previous (GM) Project Centers. Items unique to the "Emissions Control Systems Project Center" (were): No Designers - all design work to be done at home divisions. Planning activity which will provide the official timing charts, component costs, allocations, etc. The ("Emissions Control Systems Project Center") (had) seven tasks to perform, such that an emission system, which passes all existing Federal Emission and Fuel Economy legislation, is put into production. These are to work with the car divisions to: Define hardware and system requirements. Develop design specifications for all hardware required. Review alternative designs and systems. Arrange to test and validate systems, which best suit the needs of all concerned. Monitor component design and release. Follow progress of divisional certification work. Keep management and divisions apprised of progress status. The system implementation (was to) be phased in over three years. In the 1979 model year, California vehicles with 2.5, 2.8 and 3.5 liter engines will have a CLCC system. In the 1980 model year, vehicles sold in California and 3.8 and 4.3 liter engines sold federally will have CLCC, and finally in the 1981 model year all passenger cars will have the system. California light and medium duty trucks may also use the C-4 system. While 1979 and 1980 systems are very similar, the 1981 system (2nd generation) will differ in that it may include additional engine control systems (i.e., electronic spark timing, idle speed control, etc.) The Emission Control System under development has been designated C-4. This stands for Computer Controlled Catalytic Converter. The C-4 System encompasses Closed Loop Carburetor Control (CLCC) and Throttle Body Injection (TBI) systems." == Emissions control == Engine efficiency has been steadily improved with improved engine design, more precise ignition timing and electronic ignition, more precise fuel metering, and computerized engine management. Advances in engine and vehicle technology continually reduce the toxicity of exhaust leaving the engine, but these alone have generally been proved insufficient to meet emissions goals. Therefore, technologies to detoxify the exhaust are an essential part of emissions control. === Air injection === One of the first-developed exhaust emission control systems is secondary air injection. Originally, this system was used to inject air into the engine's exhaust ports to provide oxygen so unburned and partially burned hydrocarbons in the exhaust would finish burning. Air injection is now used to support the catalytic converter's oxidation reaction, and to reduce emissions when an engine is started from cold. After a cold start, an engine needs an air-fuel mixture richer than what it needs at operating temperature, and the catalytic converter does not function efficiently until it has reached its own operating temperature. 
The air injected upstream of the converter supports combustion in the exhaust headpipe, which speeds catalyst warmup and reduces the amount of unburned hydrocarbon emitted from the tailpipe. === Exhaust gas recirculation === In the United States and Canada, many engines in 1973 and newer vehicles (1972 and newer in California) have a system that routes a metered amount of exhaust into the intake tract under particular operating conditions. Exhaust neither burns nor supports combustion, so it dilutes the air/fuel charge to reduce peak combustion chamber temperatures. This, in turn, reduces the formation of NOx. === Catalytic converter === The catalytic converter is a device placed in the exhaust pipe, which converts hydrocarbons, carbon monoxide, and NOx into less harmful gases by using a combination of platinum, palladium and rhodium as catalysts. There are two types of catalytic converter, a two-way and a three-way converter. Two-way converters were common until the 1980s, when three-way converters replaced them on most automobile engines. See the catalytic converter article for further details. == Evaporative emissions control == Evaporative emissions are the result of gasoline vapors escaping from the vehicle's fuel system. Since 1971, all U.S. vehicles have had fully sealed fuel systems that do not vent directly to the atmosphere; mandates for systems of this type appeared contemporaneously in other jurisdictions. In a typical system, vapors from the fuel tank and carburetor bowl vent (on carbureted vehicles) are ducted to canisters containing activated carbon. The vapors are adsorbed within the canister, and during certain engine operational modes fresh air is drawn through the canister, pulling the vapor into the engine, where it burns. == Remote sensing emission testing == Some US states are also using a technology which uses infrared and ultraviolet light to detect emissions while vehicles pass by on public roads, thus eliminating the need for owners to go to a test center. Invisible light flash detection of exhaust gases is commonly used in metropolitan areas, and becoming more broadly known in Europe. === Use of emission test data === Emission test results from individual vehicles are in many cases compiled to evaluate the emissions performance of various classes of vehicles, the efficacy of the testing program and of various other emission-related regulations (such as changes to fuel formulations) and to model the effects of auto emissions on public health and the environment. == Alternative fuel vehicles == Exhaust emissions can be reduced by making use of clean vehicle propulsion. The most popular modes include hybrid and electric vehicles. As of December 2020, China had the world's largest stock of highway legal plug-in electric passenger cars with 4.5 million units, representing 42% of the world's stock of plug-in cars. == See also == AP 42 Compilation of Air Pollutant Emission Factors Low carbon economy On-board diagnostics#OBD-I Ontario's Drive Clean Portable Emissions Measurement System Roadway air dispersion modeling Vehicle inspection Phase-out of fossil fuel vehicles Non-exhaust emissions == References == == External links == Manufacturers of Emission Controls Association (MECA) Diesel Information Hub Association for Emissions Control by Catalyst (AECC) National Vehicle and Fuel Emissions Laboratory of the United States Environmental Protection Agency Vehicle emissions and testing
Wikipedia/Vehicle_emissions_control
The latent internal energy of a system is the internal energy a system requires to undergo a phase transition. Its value is specific to the substance or mix of substances in question, and it can also vary with temperature and pressure. Generally speaking, the value differs according to the type of phase change being accomplished. Examples include the latent internal energy of vaporization (liquid to vapor), the latent internal energy of crystallization (liquid to solid), and the latent internal energy of sublimation (solid to vapor). These values are usually expressed in units of energy per mole or per unit mass, such as J/mol or BTU/lb. Often a negative sign is used to represent energy being withdrawn from the system, while a positive value represents energy being added to the system. For every type of latent internal energy there is an opposite: for example, the latent internal energy of freezing (liquid to solid) is equal to the negative of the latent internal energy of melting (solid to liquid). == References ==
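As a rough worked illustration of the quantity defined above (a sketch added here, using approximate handbook values for water that are not part of the article): for vaporization at constant pressure, the latent internal energy is the latent heat (enthalpy) of vaporization minus the expansion work, so per mole Δu ≈ Δh − RT if the vapor is treated as an ideal gas and the liquid volume is neglected.

```python
# Sketch: latent internal energy of vaporization of water at its normal boiling point.
# Assumes ideal-gas vapor and negligible liquid volume, so delta_u ~= delta_h - R*T per mole.
R = 8.314                # J/(mol*K), gas constant
T = 373.15               # K, normal boiling point of water
delta_h_vap = 40.66e3    # J/mol, approximate latent heat (enthalpy) of vaporization

delta_u_vap = delta_h_vap - R * T
print(f"latent internal energy of vaporization ~ {delta_u_vap / 1e3:.1f} kJ/mol")  # ~37.6 kJ/mol
# With the sign convention above, the reverse change (condensation) would be ~ -37.6 kJ/mol.
```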
Wikipedia/Latent_internal_energy
Space physics, also known as space plasma physics, is the study of naturally occurring plasmas within Earth's upper atmosphere and the rest of the Solar System. It includes the topics of aeronomy, aurorae, planetary ionospheres and magnetospheres, radiation belts, and space weather (collectively known as solar-terrestrial physics). It also encompasses the discipline of heliophysics, which studies the physics of the Sun, its solar wind, the coronal heating problem, solar energetic particles, and the heliosphere. Space physics is both a pure science and an applied science, with applications in radio transmission, spacecraft operations (particularly communications and weather satellites), and meteorology. Important physical processes in space physics include magnetic reconnection, synchrotron radiation, ring currents, Alfvén waves and plasma instabilities. It is studied using direct in situ measurements by sounding rockets and spacecraft, indirect remote sensing of electromagnetic radiation produced by the plasmas, and theoretical magnetohydrodynamics. Closely related fields include plasma physics, which studies more fundamental physics and artificial plasmas; atmospheric physics, which investigates lower levels of Earth's atmosphere; and astrophysical plasmas, which are natural plasmas beyond the Solar System. == History == Space physics can be traced to the Chinese, who discovered the principle of the compass but did not understand how it worked. During the 16th century, in De Magnete, William Gilbert gave the first description of the Earth's magnetic field, showing that the Earth itself is a great magnet, which explained why a compass needle points north. Deviations of the compass needle (the magnetic declination) were recorded on navigation charts, and a detailed study of the declination near London by watchmaker George Graham resulted in the discovery of irregular magnetic fluctuations that we now call magnetic storms, so named by Alexander von Humboldt. Gauss and William Weber made very careful measurements of Earth's magnetic field which showed systematic variations and random fluctuations. This suggested that the Earth was not an isolated body, but was influenced by external forces – especially from the Sun and the appearance of sunspots. A relationship between individual aurora and accompanying geomagnetic disturbances was noticed by Anders Celsius and Olof Peter Hiorter in 1747. In 1860, Elias Loomis (1811–1889) showed that the highest incidence of aurora is seen inside an oval of 20–25 degrees around the magnetic pole. In 1881, Hermann Fritz published a map of the "isochasms", or lines of equal auroral frequency. In the late 1870s, Henri Becquerel offered the first physical explanation for the statistical correlations that had been recorded: sunspots must be a source of fast protons. They are guided to the poles by the Earth's magnetic field. In the early twentieth century, these ideas led Kristian Birkeland to build a terrella, a laboratory device which simulates the Earth's magnetic field in a vacuum chamber and which uses a cathode ray tube to simulate the energetic particles which compose the solar wind. A theory began to be formulated about the interaction between the Earth's magnetic field and the solar wind. Space physics began in earnest with the first in situ measurements in the early 1950s, when a team led by Van Allen launched the first rockets to a height of around 110 km. 
Geiger counters on board the second Soviet satellite, Sputnik 2, and the first US satellite, Explorer 1, detected the Earth's radiation belts, later named the Van Allen belts. The boundary between the Earth's magnetic field and interplanetary space was studied by Explorer 10. Later spacecraft would travel outside Earth orbit and study the composition and structure of the solar wind in much greater detail. These include WIND (1994), the Advanced Composition Explorer (ACE), Ulysses, the Interstellar Boundary Explorer (IBEX) in 2008, and the Parker Solar Probe. Other spacecraft study the Sun, such as STEREO and the Solar and Heliospheric Observatory (SOHO). == See also == Effects of spaceflight on the human body Space environment Space science Weightlessness == References == == Further reading == Kallenrode, May-Britt (2004). Space Physics: An Introduction to Plasmas and Particles in the Heliosphere and Magnetospheres. Springer. ISBN 978-3-540-20617-0. Gombosi, Tamas (1998). Physics of the Space Environment. New York: Cambridge University Press. ISBN 978-0-521-59264-2. == External links == Media related to Space physics at Wikimedia Commons
Wikipedia/Space_plasma_physics
In molecular kinetic theory in physics, a system's distribution function is a function of seven variables, {\displaystyle f(t,x,y,z,v_{x},v_{y},v_{z})}, which gives the number of particles per unit volume in single-particle phase space. It is the number of particles per unit volume having approximately the velocity {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})} near the position {\displaystyle \mathbf {r} =(x,y,z)} and time {\displaystyle t}. The usual normalization of the distribution function is {\displaystyle {\begin{aligned}n(\mathbf {r} ,t)&=\int f(\mathbf {r} ,\mathbf {v} ,t)\,dv_{x}\,dv_{y}\,dv_{z},\\N(t)&=\int n(\mathbf {r} ,t)\,dx\,dy\,dz,\end{aligned}}} where N is the total number of particles and n is the number density of particles – the number of particles per unit volume, or the density divided by the mass of individual particles. A distribution function may be specialised with respect to a particular set of dimensions. E.g. take the quantum mechanical six-dimensional phase space, {\displaystyle f(x,y,z;p_{x},p_{y},p_{z})}, and multiply by the total space volume, to give the momentum distribution, i.e. the number of particles in the momentum phase space having approximately the momentum {\displaystyle (p_{x},p_{y},p_{z})}. Particle distribution functions are often used in plasma physics to describe wave–particle interactions and velocity-space instabilities. Distribution functions are also used in fluid mechanics, statistical mechanics and nuclear physics. The basic distribution function uses the Boltzmann constant {\displaystyle k} and temperature {\displaystyle T} with the number density to modify the normal distribution: {\displaystyle {\begin{aligned}f&=n\left({\frac {m}{2\pi kT}}\right)^{3/2}\exp \left(-{\frac {mv^{2}}{2kT}}\right)\\[2pt]&=n\left({\frac {m}{2\pi kT}}\right)^{3/2}\exp \left(-{\frac {m(v_{x}^{2}+v_{y}^{2}+v_{z}^{2})}{2kT}}\right).\end{aligned}}} Related distribution functions may allow bulk fluid flow, in which case the velocity origin is shifted, so that the exponent's numerator is {\displaystyle m((v_{x}-u_{x})^{2}+(v_{y}-u_{y})^{2}+(v_{z}-u_{z})^{2})}, where {\displaystyle (u_{x},u_{y},u_{z})} is the bulk velocity of the fluid. Distribution functions may also feature non-isotropic temperatures, in which each term in the exponent is divided by a different temperature. Plasma theories such as magnetohydrodynamics may assume the particles to be in thermodynamic equilibrium. In this case, the distribution function is Maxwellian. This distribution function allows fluid flow and different temperatures in the directions parallel to, and perpendicular to, the local magnetic field. More complex distribution functions may also be used, since plasmas are rarely in thermal equilibrium. The mathematical analogue of a distribution is a measure; the time evolution of a measure on a phase space is the topic of study in dynamical systems. == References ==
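As a concrete check of the Maxwellian form given above, the short sketch below (added as an illustration; it is not part of the article) verifies numerically that integrating f over all velocities returns the number density n, and that the mean kinetic energy per particle is 3kT/2. The particle species, temperature and density are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.constants import k, m_p

# Arbitrary illustrative parameters: protons at T = 1e6 K with number density n = 1e16 m^-3.
n, T, m = 1e16, 1e6, m_p

def g(v):
    """One-dimensional factor of the Maxwellian written above; f = n * g(vx) * g(vy) * g(vz)."""
    return np.sqrt(m / (2 * np.pi * k * T)) * np.exp(-m * v**2 / (2 * k * T))

vth = np.sqrt(k * T / m)                    # thermal speed scale
one_d, _ = quad(g, -8 * vth, 8 * vth)       # practically the full velocity range
print(n * one_d**3)                         # ~1e16: integrating f over all velocities returns n

# Mean kinetic energy per particle: <m v^2 / 2> = 3 * (m/2) * <vx^2>, which should equal 3kT/2.
second_moment, _ = quad(lambda v: v**2 * g(v), -8 * vth, 8 * vth)
print(1.5 * m * second_moment / (1.5 * k * T))   # ~1.0
```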
Wikipedia/Distribution_function_(physics)
In physics, the Saha ionization equation is an expression that relates the ionization state of a gas in thermal equilibrium to the temperature and pressure. The equation is a result of combining ideas of quantum mechanics and statistical mechanics and is used to explain the spectral classification of stars. The expression was developed by physicist Meghnad Saha in 1920. It is discussed in many textbooks on statistical physics and plasma physics. == Description == For a gas at a high enough temperature (here measured in energy units, i.e. keV or J) and/or density, the thermal collisions of the atoms will ionize some of the atoms, making an ionized gas. When some of the electrons that are normally bound to the atom in orbits around the atomic nucleus are freed, they form an independent electron gas cloud co-existing with the surrounding gas of atomic ions and neutral atoms. With sufficient ionization, the gas can become the state of matter called plasma. The Saha equation describes the degree of ionization for any gas in thermal equilibrium as a function of the temperature, density, and ionization energies of the atoms. For a gas composed of a single atomic species, the Saha equation is written: {\displaystyle {\frac {n_{i+1}n_{\text{e}}}{n_{i}}}={\frac {2}{\lambda _{\text{th}}^{3}}}{\frac {g_{i+1}}{g_{i}}}\exp \left[-{\frac {\varepsilon _{i+1}-\varepsilon _{i}}{k_{\text{B}}T}}\right]} where: {\displaystyle n_{i}} is the number density of atoms in the i-th state of ionization, that is with i electrons removed; {\displaystyle g_{i}} is the degeneracy of states for the i-ions; {\displaystyle \varepsilon _{i}} is the energy required to remove i electrons from a neutral atom, creating an i-level ion; {\displaystyle n_{\text{e}}} is the electron density; {\displaystyle k_{\text{B}}} is the Boltzmann constant; {\displaystyle \lambda _{\text{th}}} is the thermal de Broglie wavelength of an electron, {\displaystyle \lambda _{\text{th}}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {h}{\sqrt {2\pi m_{\text{e}}k_{\text{B}}T}}}}; {\displaystyle m_{\text{e}}} is the mass of an electron; {\displaystyle T} is the temperature of the gas; and {\displaystyle h} is the Planck constant. The expression {\textstyle (\varepsilon _{i+1}-\varepsilon _{i})} is the energy required to ionize the species from state {\displaystyle i} to state {\displaystyle i+1}. In the case where only one level of ionization is important, we have {\textstyle n_{1}=n_{\text{e}}} (the ion and electron densities are equal, as for hydrogen H/H+); defining the total density as {\textstyle n=n_{0}+n_{1}}, the Saha equation simplifies to: {\displaystyle {\frac {n_{\text{e}}^{2}}{n-n_{\text{e}}}}={\frac {2}{\lambda _{\text{th}}^{3}}}{\frac {g_{1}}{g_{0}}}\exp \left[{\frac {-\varepsilon }{k_{\text{B}}T}}\right]} where {\displaystyle \varepsilon } is the energy of ionization. 
We can define the degree of ionization {\textstyle x=n_{1}/n} and find {\displaystyle {\frac {x^{2}}{1-x}}=A={\frac {2}{n\lambda _{\text{th}}^{3}}}{\frac {g_{1}}{g_{0}}}\exp \left[{\frac {-\varepsilon }{k_{\text{B}}T}}\right]} This gives a quadratic equation that can be solved in closed form: {\displaystyle x^{2}+Ax-A=0,\quad x={\tfrac {1}{2}}\left(A{\sqrt {1+{\tfrac {4}{A}}}}-A\right)} For small {\textstyle A(T)} (low temperature), {\textstyle x\approx A^{1/2}\propto n^{-1/2}}, so that the ionization decreases with higher number density. Note that except for weakly ionized plasmas, the plasma environment affects the atomic structure, with the subsequent lowering of the ionization potentials and the "cutoff" of the partition function. Therefore, {\displaystyle \varepsilon _{i}} and {\displaystyle g_{i}} depend, in general, on {\displaystyle T} and {\displaystyle n_{\text{e}}}, and solving the Saha equation is only possible iteratively. As a simple example, imagine a gas of monatomic hydrogen, set {\displaystyle g_{0}=g_{1}} and let {\displaystyle \varepsilon } = 13.6 eV (158000 K), the ionization energy of hydrogen from its ground state. Let {\displaystyle n} = 2.69×10^25 m^−3, which is the Loschmidt constant, the particle density of Earth's atmosphere at standard pressure and temperature. At {\displaystyle T} = 300 K, the ionization is essentially none: {\displaystyle x} = 5×10^−115, and there would almost certainly be no ionized atoms in the volume of Earth's atmosphere. But {\displaystyle x} increases rapidly with {\displaystyle T}, reaching 0.35 for {\displaystyle T} = 20000 K. There is substantial ionization even though this {\textstyle k_{B}T} is much less than the ionization energy (although this depends somewhat on density). This is a common occurrence. Physically, it stems from the fact that at a given temperature, the particles have a distribution of energies, including some with several times {\textstyle k_{B}T}. These high energy particles are much more effective at ionizing atoms. In Earth's atmosphere, ionization is actually governed not by the Saha equation but by very energetic cosmic rays, largely muons. These particles are not in thermal equilibrium with the atmosphere, so they are not at its temperature and the Saha logic does not apply. Rigorously, the Saha equation is only valid for dilute gases, due to the underlying ideal gas assumption used in its derivation. For dense gases this assumption is no longer valid, because particle interactions become significant and modify the chemical potential of the species, as well as the compressibility of the ionized gas or plasma. Hence, the Saha ionization framework has been extended to deal with systems that are denser than the ideal gas limit p/RT [mol/m³], by incorporating corrections for these non-ideal interactions into the thermodynamic potential. This correction leads to improved estimates for the degree of ionization in the corona of the Sun. == Particle densities == The Saha equation is useful for determining the ratio of particle densities for two different ionization levels. The most useful form of the Saha equation for this purpose is {\displaystyle {\frac {Z_{i}}{N_{i}}}={\frac {Z_{i+1}Z_{e}}{N_{i+1}N_{e}}},} where Z denotes the partition function of the atom/ion and of the electron, respectively. 
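The monatomic-hydrogen estimate above is easy to reproduce numerically. The following sketch (an illustration added here, not part of the original text) solves the single-level form x²/(1−x) = A with g1/g0 = 1, ε = 13.6 eV and n equal to the Loschmidt constant; it gives an ionization fraction that is essentially zero at 300 K and roughly 0.35 near 20000 K, as quoted above.

```python
import numpy as np
from scipy.constants import h, k, m_e, e

def ionization_fraction(T, n=2.69e25, eps_eV=13.6, g_ratio=1.0):
    """Degree of ionization x from the single-level Saha equation x^2/(1-x) = A."""
    lam = h / np.sqrt(2 * np.pi * m_e * k * T)                 # thermal de Broglie wavelength
    A = (2.0 / (n * lam**3)) * g_ratio * np.exp(-eps_eV * e / (k * T))
    return 0.5 * (np.sqrt(A**2 + 4 * A) - A)                   # positive root of x^2 + A x - A = 0

for T in (300.0, 20000.0, 30000.0):
    print(T, ionization_fraction(T))
# x is vanishingly small at 300 K, of order 0.3-0.4 around 20000 K, and close to 1 by 30000 K.
```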
The Saha equation can be seen as a restatement of the equilibrium condition for the chemical potentials: {\displaystyle \mu _{i}=\mu _{i+1}+\mu _{e}\,} This equation simply states that the potential for an atom of ionization state i to ionize is the same as the potential for an electron and an atom of ionization state i + 1. The potentials are equal, therefore the system is in equilibrium and no net change of ionization will occur. == Stellar atmospheres == In the early 1920s, Ralph H. Fowler (in collaboration with Charles Galton Darwin) developed a new method in statistical mechanics permitting a systematic calculation of the equilibrium properties of matter. He used this to provide a rigorous derivation of the ionization formula which Saha had obtained, by extending to the ionization of atoms the theorem of Jacobus Henricus van 't Hoff, used in physical chemistry for its application to molecular dissociation. Also, a significant improvement in the Saha equation introduced by Fowler was to include the effect of the excited states of atoms and ions. A further important step forward came in 1923, when Edward Arthur Milne and R.H. Fowler published a paper in the Monthly Notices of the Royal Astronomical Society, showing that the criterion of the maximum intensity of absorption lines (belonging to subordinate series of a neutral atom) was much more fruitful in giving information about physical parameters of stellar atmospheres than the criterion employed by Saha, which consisted of the marginal appearance or disappearance of absorption lines. The latter criterion requires some knowledge of the relevant pressures in the stellar atmospheres, and Saha, following the generally accepted view at the time, assumed a value of the order of 1 to 0.1 atmosphere. Milne wrote: Saha had concentrated on the marginal appearances and disappearances of absorption lines in the stellar sequence, assuming an order of magnitude for the pressure in a stellar atmosphere and calculating the temperature where increasing ionization, for example, inhibited further absorption of the line in question owing to the loss of the series electron. As Fowler and I were one day stamping round my rooms in Trinity and discussing this, it suddenly occurred to me that the maximum intensity of the Balmer lines of hydrogen, for example, was readily explained by the consideration that at the lower temperatures there were too few excited atoms to give appreciable absorption, whilst at the higher temperatures there are too few neutral atoms left to give any absorption. ... That evening I did a hasty order of magnitude calculation of the effect and found that to agree with a temperature of 10000° [K] for the stars of type A0, where the Balmer lines have their maximum, a pressure of the order of 10^−4 atmosphere was required. This was very exciting, because standard determinations of pressures in stellar atmospheres from line shifts and line widths had been supposed to indicate a pressure of the order of one atmosphere or more, and I had begun on other grounds to disbelieve this. The generally accepted view at the time assumed that the composition of stars was similar to that of Earth. However, in 1925 Cecilia Payne used Saha's ionization theory to calculate that the composition of stellar atmospheres is as we now know it: mostly hydrogen and helium, expanding the knowledge of stars. 
== Stellar coronae == Saha equilibrium prevails when the plasma is in local thermodynamic equilibrium, which is not the case in the optically thin corona. Here the equilibrium ionization states must be estimated by detailed statistical calculation of collision and recombination rates. == Early universe == Equilibrium ionization, described by the Saha equation, explains the evolution of the ionization state of matter in the early universe. After the Big Bang, all atoms were ionized, leaving mostly protons and electrons. According to Saha's approach, when the universe had expanded and cooled such that the temperature reached about 3000 K, electrons (re)combined with protons (10 fm) to form hydrogen atoms (0.1 nm). At this point, roughly 700 millennia after the universe had been at 100 million K, it became transparent to most electromagnetic radiation. That 3000 K surface, red-shifted since then by a factor of about 1,000, is observed today as the 2.7 K cosmic microwave background radiation, which pervades the universe. == References == == External links == Derivation & Discussion by Hale Bradt www.cambridge.org A detailed derivation from the University of Utah Physics Department Lecture notes from the University of Maryland Department of Astronomy
Wikipedia/Saha_equation
Flow control is a field of fluid dynamics. It involves a small configuration change that serves an ideally large engineering benefit, like drag reduction, lift increase, mixing enhancement or noise reduction. This change may be accomplished by passive or active devices. == Passive vs active == Passive devices by definition require no energy. Passive techniques include turbulators or roughness elements, geometric shaping, the use of vortex generators, and the placement of longitudinal grooves or riblets on airfoil surfaces. Active control requires actuators, which consume energy and may operate in a time-dependent manner. Active flow control includes steady or unsteady suction or blowing, the use of synthetic jets, valves and plasma actuators. Actuation may be pre-determined (open-loop control) or be dependent on monitoring sensors (closed-loop control). == Aircraft wings == Airplane wing performance has a substantial effect not only on runway length, approach speed, climb rate, cargo capacity, and operating range but also on noise and emissions. Wing performance can be degraded by flow separation, which depends on the aerodynamic characteristics of the airfoil. Aerodynamic and non-aerodynamic constraints often conflict. Flow control is required to overcome such difficulties. Techniques developed to manipulate the boundary layer, whether to increase lift, decrease drag, or delay separation, come under the general heading of flow control. Aurora Flight Sciences is a DARPA CRANE (Control of Revolutionary Aircraft with Novel Effectors) grantee. The program initially involved testing a small-scale plane that uses compressed air bursts instead of external moving parts such as flaps. The program seeks to eliminate the weight, drag, and mechanical complexity involved in moving control surfaces. The air bursts modify the air pressure and flow, and change the boundaries between streams of air moving at different speeds. The company built a 25% scale prototype with 11 conventional control surfaces, as well as 14 banks fed by eight air channels. In 2023, the aircraft received its official designation as X-65. In January 2024, DARPA and Aurora started CRANE Phase 3, building the first full-scale X-65 aircraft using active flow control actuators for primary flight control. The 7,000-pound X-65 will be rolled out in early 2025 with the first flight planned for summer of 2025. == References ==
Wikipedia/Flow_control_(fluid)
Photonic molecules are a form of matter in which photons bind together to form "molecules". They were first predicted in 2007. Photonic molecules are formed when individual (massless) photons "interact with each other so strongly that they act as though they have mass". In an alternative definition (which is not equivalent), photons confined to two or more coupled optical cavities also reproduce the physics of interacting atomic energy levels, and have been termed as photonic molecules. Researchers drew analogies between the phenomenon and the fictional "lightsaber" from Star Wars. == Construction == Gaseous rubidium atoms were pumped into a vacuum chamber. The cloud was cooled using lasers to just a few degrees above absolute zero. Using weak laser pulses, small numbers of photons were fired into the cloud. As the photons entered the cloud, their energy excited atoms along their path, causing them to lose speed. Inside the cloud medium, the photons dispersively coupled to strongly interacting atoms in highly excited Rydberg states. This caused the photons to behave as massive particles with strong mutual attraction (photon molecules). Eventually the photons exited the cloud together as normal photons (often entangled in pairs). The effect is caused by a so-called Rydberg blockade, which, in the presence of one excited atom, prevents nearby atoms from being excited to the same degree. In this case, as two photons enter the atomic cloud, the first excites an atom, annihilating itself in the interaction, but the transmitted energy must move forward inside the excited atom before the second photon can excite nearby atoms. In effect the two photons push and pull each other through the cloud as their energy is passed from one atom to the next, forcing them to interact. This photonic interaction is mediated by the electromagnetic interaction between photons and atoms. == Possible applications == The interaction of the photons suggests that the effect could be employed to build a system that can preserve quantum information, and process it using quantum logic operations. The system could also be useful in classical computing, given the much-lower power required to manipulate photons than electrons. It may be possible to arrange the photonic molecules in such a way within the medium that they form larger two-dimensional structures (similar to drawings). == Interacting optical cavities as photonic molecules == The term photonic molecule has been also used since 1998 for an unrelated phenomenon involving electromagnetically interacting optical microcavities. The properties of quantized confined photon states in optical micro- and nanocavities are very similar to those of confined electron states in atoms. Owing to this similarity, optical microcavities can be termed 'photonic atoms'. Taking this analogy even further, a cluster of several mutually-coupled photonic atoms forms a photonic molecule. When individual photonic atoms are brought into close proximity, their optical modes interact and give rise to a spectrum of hybridized super-modes of photonic molecules. This is very similar to what happens when two isolated systems are coupled, like two hydrogen atomic orbitals coming together to form the bonding and antibonding orbitals of the hydrogen molecule, which are hybridized super-modes of the total coupled system. "A micrometer-sized piece of semiconductor can trap photons inside it in such a way that they act like electrons in an atom. 
Now the 21 September PRL describes a way to link two of these "photonic atoms" together. The result of such a close relationship is a "photonic molecule," whose optical modes bear a strong resemblance to the electronic states of a diatomic molecule like hydrogen." "Photonic molecules, named by analogy with chemical molecules, are clusters of closely located electromagnetically interacting microcavities or "photonic atoms"." "Optically coupled microcavities have emerged as photonic structures with promising properties for investigation of fundamental science as well as for applications." The first photonic realization of the two-level system of a photonic molecule was by Spreeuw et al., who used optical fibers to realize a ring resonator, although they did not use the term "photonic molecule". The two modes forming the molecule could then be the polarization modes of the ring or the clockwise and counterclockwise modes of the ring. This was followed by the demonstration of a lithographically fabricated photonic molecule, inspired by an analogy with a simple diatomic molecule. However, other nature-inspired PM structures (such as ‘photonic benzene’) have been proposed and shown to support confined optical modes closely analogous to the ground-state molecular orbitals of their chemical counterparts. Photonic molecules offer advantages over isolated photonic atoms in a variety of applications, including bio(chemical) sensing, cavity optomechanics, and microlasers. Photonic molecules can also be used as quantum simulators of many-body physics and as building blocks of future optical quantum information processing networks. In complete analogy, clusters of metal nanoparticles – which support confined surface plasmon states – have been termed ‘plasmonic molecules’. Finally, hybrid photonic-plasmonic (or opto-plasmonic) and elastic molecules have also been proposed and demonstrated. == See also == Luminiferous aether Photoluminescence == References ==
Wikipedia/Photonic_molecule
In physics, the terms order and disorder designate the presence or absence of some symmetry or correlation in a many-particle system. In condensed matter physics, systems typically are ordered at low temperatures; upon heating, they undergo one or several phase transitions into less ordered states. Examples of such an order-disorder transition are: the melting of ice: solid–liquid transition, loss of crystalline order; the demagnetization of iron by heating above the Curie temperature: ferromagnetic–paramagnetic transition, loss of magnetic order. The degree of freedom that is ordered or disordered can be translational (crystalline ordering), rotational (ferroelectric ordering), or a spin state (magnetic ordering). The order can consist either in a full crystalline space group symmetry, or in a correlation. Depending on how the correlations decay with distance, one speaks of long range order or short range order. If a disordered state is not in thermodynamic equilibrium, one speaks of quenched disorder. For instance, a glass is obtained by quenching (supercooling) a liquid. By extension, other quenched states are called spin glass or orientational glass. In some contexts, the opposite of quenched disorder is annealed disorder. == Characterizing order == === Lattice periodicity and X-ray crystallinity === The strictest form of order in a solid is lattice periodicity: a certain pattern (the arrangement of atoms in a unit cell) is repeated again and again to form a translationally invariant tiling of space. This is the defining property of a crystal. Possible symmetries have been classified in 14 Bravais lattices and 230 space groups. Lattice periodicity implies long-range order: if only one unit cell is known, then by virtue of the translational symmetry it is possible to accurately predict all atomic positions at arbitrary distances. During much of the 20th century, the converse was also taken for granted – until the discovery of quasicrystals in 1982 showed that there are perfectly deterministic tilings that do not possess lattice periodicity. Besides structural order, one may consider charge ordering, spin ordering, magnetic ordering, and compositional ordering. Magnetic ordering is observable in neutron diffraction. It is a thermodynamic entropy concept often displayed by a second-order phase transition. Generally speaking, high thermal energy is associated with disorder and low thermal energy with ordering, although there have been violations of this. Ordering peaks become apparent in diffraction experiments at low energy. === Long-range order === Long-range order characterizes physical systems in which remote portions of the same sample exhibit correlated behavior. This can be expressed as a correlation function, namely the spin-spin correlation function: {\displaystyle G(x,x')=\langle s(x),s(x')\rangle ,} where s is the spin quantum number and x is the distance function within the particular system. This function is equal to unity when {\displaystyle x=x'} and decreases as the distance {\displaystyle |x-x'|} increases. Typically, it decays exponentially to zero at large distances, and the system is considered to be disordered. But if the correlation function decays to a constant value at large {\displaystyle |x-x'|}, then the system is said to possess long-range order. If it decays to zero as a power of the distance, then it is called quasi-long-range order (for details see Chapter 11 in the textbook cited below; see also the Berezinskii–Kosterlitz–Thouless transition).
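As a purely illustrative aside (not part of the original article), the decay of such a correlation function can be estimated directly from a spin configuration. The sketch below generates a hypothetical one-dimensional chain of ±1 spins with an assumed correlation length ξ and averages products of spins at separation r; the exponential decay it exhibits corresponds to the short-range-ordered (disordered) case described above.

```python
import numpy as np

# Minimal sketch: estimate G(r) = <s(x) s(x+r)> from a sample 1D configuration
# of +/-1 spins. The chain is generated artificially with an assumed correlation
# length xi: each site flips relative to its neighbor with probability 1/(2*xi),
# giving correlations decaying approximately as (1 - 1/xi)^r ~ exp(-r/xi).

rng = np.random.default_rng(0)
N, xi = 100_000, 10.0                      # chain length and assumed correlation length

flips = rng.random(N) < 1.0 / (2.0 * xi)   # whether each site flips relative to the previous one
spins = np.where(np.cumsum(flips) % 2 == 0, 1, -1)

def correlation(s, r):
    """Average of s(x) * s(x + r) over the chain (boundary effects ignored)."""
    return 1.0 if r == 0 else float(np.mean(s[:-r] * s[r:]))

for r in (0, 1, 5, 10, 20, 40):
    print(f"r = {r:3d}   G(r) ~ {correlation(spins, r):+.3f}   exp(-r/xi) = {np.exp(-r / xi):.3f}")
```

A system with true long-range order would instead show G(r) approaching a nonzero constant at large separations, and quasi-long-range order a power-law decay.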
Note that what constitutes a large value of {\displaystyle |x-x'|} is understood in the sense of asymptotics. == Quenched disorder == In statistical physics, a system is said to present quenched disorder when some parameters defining its behavior are random variables which do not evolve with time. These parameters are said to be quenched or frozen. Spin glasses are a typical example. Quenched disorder is contrasted with annealed disorder, in which the parameters are allowed to evolve themselves. Mathematically, quenched disorder is more difficult to analyze than its annealed counterpart, as averages over thermal noise and quenched disorder play distinct roles. Few techniques to approach each are known, most of which rely on approximations. Common techniques used to analyze systems with quenched disorder include the replica trick, based on analytic continuation, and the cavity method, where a system's response to the perturbation due to an added constituent is analyzed. While these methods yield results agreeing with experiments in many systems, the procedures have not been formally mathematically justified. Recently, rigorous methods have shown that in the Sherrington-Kirkpatrick model, an archetypal spin glass model, the replica-based solution is exact. The generating functional formalism, which relies on the computation of path integrals, is a fully exact method but is more difficult to apply than the replica or cavity procedures in practice. == Annealed disorder == A system is said to present annealed disorder when some parameters entering its definition are random variables, but whose evolution is related to that of the degrees of freedom defining the system. It is defined in opposition to quenched disorder, where the random variables may not change their values. Systems with annealed disorder are usually considered to be easier to deal with mathematically, since the average on the disorder and the thermal average may be treated on the same footing. == See also == In high energy physics, the formation of the chiral condensate in quantum chromodynamics is an ordering transition; it is discussed in terms of superselection. Entropy Topological order Impurity superstructure (physics) == Further reading == H Kleinert: Gauge Fields in Condensed Matter (ISBN 9971-5-0210-0, 2 volumes) Singapore: World Scientific (1989). Bürgi, H. B. (2000). "Motion and Disorder in Crystal Structure Analysis: Measuring and Distinguishing them". Annual Review of Physical Chemistry. 51: 275–296. Bibcode:2000ARPC...51..275B. doi:10.1146/annurev.physchem.51.1.275. PMID 11031283. Müller, Peter (2009). "5.067 Crystal Structure Refinement" (PDF). Cambridge: MIT OpenCourseWare. Retrieved 13 October 2013. == References ==
Wikipedia/Order_and_disorder_(physics)
Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as fusion reactors. Research into fusion reactors began in the 1940s, but as of 2025, no device has reached net power. Fusion processes require fuel, in a state of plasma, and a confined environment with sufficient temperature, pressure, and confinement time. The combination of these parameters that results in a power-producing system is known as the Lawson criterion. In stellar cores the most common fuel is the lightest isotope of hydrogen (protium), and gravity provides the conditions needed for fusion energy production. Proposed fusion reactors would use the heavy hydrogen isotopes of deuterium and tritium for DT fusion, for which the Lawson criterion is the easiest to achieve. This produces a helium nucleus and an energetic neutron. Most designs aim to heat their fuel to around 100 million kelvins. The necessary combination of pressure and confinement time has proven very difficult to produce. Reactors must achieve levels of breakeven well beyond net plasma power and net electricity production to be economically viable. Fusion fuel is 10 million times more energy dense than coal, but tritium is extremely rare on Earth, having a half life of only ~12.3 years. Consequently, during the operation of envisioned fusion reactors, lithium breeding blankets are to be subjected to neutron fluxes to generate tritium to complete the fuel cycle. As a source of power, nuclear fusion has a number of potential advantages compared to fission. These include little high-level waste, and increased safety. One issue that affects common reactions is managing resulting neutron radiation, which over time degrade the reaction chamber, especially the first wall. Fusion research is dominated by magnetic confinement (MCF) and inertial confinement (ICF) approaches. MCF systems have been researched since the 1940s, initially focusing on the z-pinch, stellarator, and magnetic mirror. The tokamak has dominated MCF designs since Soviet experiments were verified in the late 1960s. ICF was developed from the 1970s, focusing on laser driving of fusion implosions. Both designs are under research at very large scales, most notably the ITER tokamak in France and the National Ignition Facility (NIF) laser in the United States. Researchers and private companies are also studying other designs that may offer less expensive approaches. Among these alternatives, there is increasing interest in magnetized target fusion, and new variations of the stellarator. == Terminology == The terms "fusion experiment" and "fusion device" refer to the collection of technologies used for scientific investigation of plasma, and technical advancement. Not all are capable of, or routinely used for, producing thermonuclear reactions i.e. fusion. The term "fusion reactor" is used interchangeably to mean the above experiments, or to mean a hypothetical power-producing version, at the center of a commercial power plant, requiring additions such as a breeding blanket and heat engine. == Background == === Mechanism === Fusion reactions occur when two or more atomic nuclei come close enough for long enough that the nuclear force pulling them together exceeds the electrostatic force pushing them apart, fusing them into heavier nuclei. 
For nuclei heavier than iron-56, the reaction is endothermic, requiring an input of energy. Nuclei heavier than iron have many more protons, resulting in a greater repulsive force. For nuclei lighter than iron-56, the reaction is exothermic, releasing energy when they fuse. Since hydrogen has a single proton in its nucleus, it requires the least effort to attain fusion, and yields the most net energy output. Also, since it has one electron, hydrogen is the easiest fuel to fully ionize. The repulsive electrostatic interaction between nuclei operates across larger distances than the strong force, which has a range of roughly one femtometer—the diameter of a proton or neutron. The fuel atoms must be supplied enough kinetic energy to approach one another closely enough for the strong force to overcome the electrostatic repulsion in order to initiate fusion. The "Coulomb barrier" is the quantity of kinetic energy required to move the fuel atoms near enough. Atoms can be heated to extremely high temperatures or accelerated in a particle accelerator to produce this energy. An atom loses its electrons once it is heated past its ionization energy. The resultant bare nucleus is a type of ion. The result of this ionization is plasma, which is a heated cloud of bare nuclei and free electrons that were formerly bound to them. Plasmas are electrically conducting and magnetically controllable because the charges are separated. This is used by several fusion devices to confine the hot particles. === Cross section === A reaction's cross section, denoted σ, measures the probability that a fusion reaction will happen. This depends on the relative velocity of the two nuclei. Higher relative velocities generally increase the probability, but the probability begins to decrease again at very high energies. In a plasma, particle velocity can be characterized using a probability distribution. If the plasma is thermalized, the distribution looks like a Gaussian curve, or Maxwell–Boltzmann distribution. In this case, it is useful to average the cross section over the velocity distribution. This is entered into the volumetric fusion rate: {\displaystyle P_{\text{fusion}}=n_{A}n_{B}\langle \sigma v_{A,B}\rangle E_{\text{fusion}}} where {\displaystyle P_{\text{fusion}}} is the energy produced by fusion per unit time and volume, n is the number density of species A or B within the volume, {\displaystyle \langle \sigma v_{A,B}\rangle } is the product of cross section and relative velocity for that reaction, averaged over the velocity distributions of the two species, and {\displaystyle E_{\text{fusion}}} is the energy released by a single fusion reaction. === Lawson criterion === The Lawson criterion considers the energy balance between the energy produced in fusion reactions and the energy being lost to the environment. In order to generate usable energy, a system must produce more energy than it loses. Lawson assumed an energy balance, shown below.
{\displaystyle P_{\text{out}}=\eta _{\text{capture}}\left(P_{\text{fusion}}-P_{\text{conduction}}-P_{\text{radiation}}\right)} where {\displaystyle P_{\text{out}}} is the net power from fusion, {\displaystyle \eta _{\text{capture}}} is the efficiency of capturing the output of the fusion, {\displaystyle P_{\text{fusion}}} is the rate of energy generated by the fusion reactions, {\displaystyle P_{\text{conduction}}} is the conduction losses as energetic mass leaves the plasma, and {\displaystyle P_{\text{radiation}}} is the radiation losses as energy leaves as light and neutron flux. The rate of fusion, and thus Pfusion, depends on the temperature and density of the plasma. The plasma loses energy through conduction and radiation. Conduction occurs when ions, electrons, or neutrals impact other substances, typically a surface of the device, and transfer a portion of their kinetic energy to the other atoms. The rate of conduction is also based on the temperature and density. Radiation is energy that leaves the cloud as light. Radiation also increases with temperature as well as the mass of the ions. Fusion power systems must operate in a region where the rate of fusion is higher than the losses. === Triple product: density, temperature, time === The Lawson criterion argues that a machine holding a thermalized and quasi-neutral plasma has to generate enough energy to overcome its energy losses. The amount of energy released in a given volume is a function of the temperature, and thus the reaction rate on a per-particle basis, the density of particles within that volume, and finally the confinement time, the length of time that energy stays within the volume. This is known as the "triple product": the plasma density, temperature, and confinement time. In magnetic confinement, the density is low, on the order of a "good vacuum". For instance, in the ITER device the fuel density is about 1.0 × 10¹⁹ m⁻³, which is about one-millionth atmospheric density. This means that the temperature and/or confinement time must increase. Fusion-relevant temperatures have been achieved using a variety of heating methods that were developed in the early 1970s. In modern machines, as of 2019, the major remaining issue was the confinement time. Plasmas in strong magnetic fields are subject to a number of inherent instabilities, which must be suppressed to reach useful durations. One way to do this is to simply make the reactor volume larger, which reduces the rate of leakage due to classical diffusion. This is why ITER is so large. In contrast, inertial confinement systems approach useful triple product values via higher density, and have short confinement intervals. In NIF, the initial frozen hydrogen fuel load has a density less than water that is increased to about 100 times the density of lead. In these conditions, the rate of fusion is so high that the fuel fuses in the microseconds it takes for the heat generated by the reactions to blow the fuel apart. Although NIF is also large, this is a function of its "driver" design, not inherent to the fusion process. === Energy capture === Multiple approaches have been proposed to capture the energy that fusion produces. The simplest is to heat a fluid. The commonly targeted D-T reaction releases much of its energy as fast-moving neutrons. Electrically neutral, the neutron is unaffected by the confinement scheme.
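To make the volumetric fusion rate and the triple product concrete, the following sketch (not from the original article) evaluates them for assumed, order-of-magnitude D-T parameters: a 50/50 fuel mix at a total density of 10²⁰ m⁻³, a reactivity ⟨σv⟩ of about 10⁻²² m³/s (roughly the D-T value near 10–15 keV), the 17.6 MeV per reaction implied by the 3.5 MeV + 14.1 MeV split quoted later in the article, and a commonly quoted D-T ignition threshold of roughly 3×10²¹ keV·s/m³ for the triple product. None of these numbers describe a specific device.

```python
# Minimal sketch: volumetric D-T fusion power  P = n_D * n_T * <sigma v> * E_fusion
# and the fraction of that power carried by 14.1 MeV neutrons.
# All parameter values are assumed, order-of-magnitude illustrations only.

MeV = 1.602176634e-13            # J

n_D = n_T = 0.5e20               # m^-3, assumed 50/50 D-T mix, total density 1e20 m^-3
sigma_v = 1.1e-22                # m^3/s, assumed D-T reactivity near ~10-15 keV
E_per_reaction = 17.6 * MeV      # J per D-T reaction (3.5 MeV alpha + 14.1 MeV neutron)
E_neutron = 14.1 * MeV           # J carried by the neutron

rate = n_D * n_T * sigma_v                      # reactions per m^3 per second
P_fusion = rate * E_per_reaction                # W per m^3
print(f"Fusion power density        : {P_fusion / 1e6:.2f} MW/m^3")
print(f"Fraction carried by neutrons: {E_neutron / E_per_reaction:.0%}")

# Crude triple-product check against an assumed ignition threshold of ~3e21 keV s / m^3.
n, T_keV, tau = 1e20, 15.0, 3.0                 # assumed density, temperature, confinement time
print(f"n * T * tau = {n * T_keV * tau:.1e} keV s/m^3  (threshold ~ 3e21)")
```

The roughly 80% share of the energy carried by neutrons is why most designs extract power thermally from a neutron-absorbing blanket, as described next.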
In most designs, the neutron is captured in a thick "blanket" of lithium surrounding the reactor core. When struck by a high-energy neutron, the blanket heats up. It is then actively cooled with a working fluid that drives a turbine to produce power. Another design proposed to use the neutrons to breed fission fuel in a blanket of nuclear waste, a concept known as a fission-fusion hybrid. In these systems, the power output is enhanced by the fission events, and power is extracted using systems like those in conventional fission reactors. Designs that use other fuels, notably the proton-boron aneutronic fusion reaction, release much more of their energy in the form of charged particles. In these cases, power extraction systems based on the movement of these charges are possible. Direct energy conversion was developed at Lawrence Livermore National Laboratory (LLNL) in the 1980s as a method to maintain a voltage directly using fusion reaction products. This has demonstrated energy capture efficiency of 48 percent. == Plasma behavior == Plasma is an ionized gas that conducts electricity. In bulk, it is modeled using magnetohydrodynamics, which is a combination of the Navier–Stokes equations governing fluids and Maxwell's equations governing how magnetic and electric fields behave. Fusion exploits several plasma properties, including: Self-organizing plasma conducts electric and magnetic fields. Its motions generate fields that can in turn contain it. Diamagnetic plasma can generate its own internal magnetic field. This can reject an externally applied magnetic field, making it diamagnetic. Magnetic mirrors can reflect plasma when it moves from a low to high density field.:24 == Methods == === Magnetic confinement === Tokamak: the most well-developed and well-funded approach. This method drives hot plasma around in a magnetically confined torus, with an internal current. When completed, ITER will become the world's largest tokamak. As of September 2018 an estimated 226 experimental tokamaks were either planned, decommissioned or operating (50 of them operating) worldwide. Spherical tokamak: also known as spherical torus. A variation on the tokamak with a spherical shape. Stellarator: Twisted rings of hot plasma. The stellarator attempts to create a natural twisted plasma path, using external magnets. Stellarators were developed by Lyman Spitzer in 1950 and evolved into four designs: Torsatron, Heliotron, Heliac and Helias. One example is Wendelstein 7-X, a German device. It is the world's largest stellarator. Internal rings: Stellarators create a twisted plasma using external magnets, while tokamaks do so using a current induced in the plasma. Several classes of designs provide this twist using conductors inside the plasma. Early calculations showed that collisions between the plasma and the supports for the conductors would remove energy faster than fusion reactions could replace it. Modern variations, including the Levitated Dipole Experiment (LDX), use a solid superconducting torus that is magnetically levitated inside the reactor chamber. Magnetic mirror: Developed by Richard F. Post and teams at Lawrence Livermore National Laboratory (LLNL) in the 1960s. Magnetic mirrors reflect plasma back and forth in a line. Variations included the Tandem Mirror, magnetic bottle and the biconic cusp. A series of mirror machines were built by the US government in the 1970s and 1980s, principally at LLNL. However, calculations in the 1970s estimated it was unlikely these would ever be commercially useful.
Bumpy torus: A number of magnetic mirrors are arranged end-to-end in a toroidal ring. Any fuel ions that leak out of one are confined in a neighboring mirror, permitting the plasma pressure to be raised arbitrarily high without loss. An experimental facility, the ELMO Bumpy Torus or EBT was built and tested at Oak Ridge National Laboratory (ORNL) in the 1970s. Field-reversed configuration: This device traps plasma in a self-organized quasi-stable structure; where the particle motion makes an internal magnetic field which then traps itself. Spheromak: Similar to a field-reversed configuration, a semi-stable plasma structure made by using the plasmas' self-generated magnetic field. A spheromak has both toroidal and poloidal fields, while a field-reversed configuration has no toroidal field. Dynomak is a spheromak that is formed and sustained using continuous magnetic flux injection. Reversed field pinch: Here the plasma moves inside a ring. It has an internal magnetic field. Moving out from the center of this ring, the magnetic field reverses direction. === Inertial confinement === Indirect drive: Lasers heat a structure known as a Hohlraum that becomes so hot it begins to radiate x-ray light. These x-rays heat a fuel pellet, causing it to collapse inward to compress the fuel. The largest system using this method is the National Ignition Facility, followed closely by Laser Mégajoule. Direct drive: Lasers directly heat the fuel pellet. Notable direct drive experiments have been conducted at the Laboratory for Laser Energetics (LLE) and the GEKKO XII facilities. Good implosions require fuel pellets with close to a perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma. Fast ignition: This method uses two laser blasts. The first blast compresses the fusion fuel, while the second ignites it. As of 2019 this technique had lost favor for energy production. Magneto-inertial fusion or Magnetized Liner Inertial Fusion: This combines a laser pulse with a magnetic pinch. The pinch community refers to it as magnetized liner inertial fusion while the ICF community refers to it as magneto-inertial fusion. Ion Beams: Ion beams replace laser beams to heat the fuel. The main difference is that the beam has momentum due to mass, whereas lasers do not. As of 2019 it appears unlikely that ion beams can be sufficiently focused spatially and in time. Z-machine: Sends an electric current through thin tungsten wires, heating them sufficiently to generate x-rays. Like the indirect drive approach, these x-rays then compress a fuel capsule. === Magnetic or electric pinches === Z-pinch: A current travels in the z-direction through the plasma. The current generates a magnetic field that compresses the plasma. Pinches were the first method for human-made controlled fusion. The z-pinch has inherent instabilities that limit its compression and heating to values too low for practical fusion. The largest such machine, the UK's ZETA, was the last major experiment of the sort. The problems in z-pinch led to the tokamak design. The dense plasma focus is a possibly superior variation. Theta-pinch: A current circles around the outside of a plasma column, in the theta direction. This induces a magnetic field running down the center of the plasma, as opposed to around it. The early theta-pinch device Scylla was the first to conclusively demonstrate fusion, but later work demonstrated it had inherent limits that made it uninteresting for power production. 
Sheared Flow Stabilized Z-Pinch: Research at the University of Washington under Uri Shumlak investigated the use of sheared-flow stabilization to smooth out the instabilities of Z-pinch reactors. This involves accelerating neutral gas along the axis of the pinch. Experimental machines included the FuZE and Zap Flow Z-Pinch experimental reactors. In 2017, British technology investor and entrepreneur Benj Conway, together with physicists Brian Nelson and Uri Shumlak, co-founded Zap Energy to attempt to commercialize the technology for power production. Screw Pinch: This method combines a theta and z-pinch for improved stabilization. === Inertial electrostatic confinement === Polywell: Attempts to combine magnetic confinement with electrostatic fields, to avoid the conduction losses generated by the cage. === Other thermonuclear === Magnetized target fusion: Confines hot plasma using a magnetic field and squeezes it using inertia. Examples include LANL FRX-L machine, General Fusion (piston compression with liquid metal liner), HyperJet Fusion (plasma jet compression with plasma liner). Uncontrolled: Fusion has been initiated by man, using uncontrolled fission explosions to stimulate fusion. Early proposals for fusion power included using bombs to initiate reactions. See Project PACER. === Other non-thermonuclear === Muon-catalyzed fusion: This approach replaces electrons in diatomic molecules of isotopes of hydrogen with muons—more massive particles with the same electric charge. Their greater mass compresses the nuclei enough such that the strong interaction can cause fusion. As of 2007 producing muons required more energy than can be obtained from muon-catalyzed fusion. Lattice confinement fusion: Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion. === Negative power methods === These methods inherently consume more power than they can provide via fusion. Fusor: An electric field heats ions to fusion conditions. The machine typically uses two spherical cages, a cathode inside the anode, inside a vacuum. These machines are not considered a viable approach to net power because of their high conduction and radiation losses. They are simple enough to build that amateurs have fused atoms using them. Colliding beam fusion: A beam of high energy particles fired at another beam or target can initiate fusion. This was used in the 1970s and 1980s to study the cross sections of fusion reactions. However beam systems cannot be used for power because keeping a beam coherent takes more energy than comes from fusion. == Locations == == Common tools == Many approaches, equipment, and mechanisms are employed across multiple projects to address fusion heating, measurement, and power production. === Machine learning === A deep reinforcement learning system has been used to control a tokamak-based reactor. The system was able to manipulate the magnetic coils to manage the plasma. The system was able to continuously adjust to maintain appropriate behavior (more complex than step-based systems). In 2014, Google began working with California-based fusion company TAE Technologies to control the Joint European Torus (JET) to predict plasma behavior. DeepMind has also developed a control scheme with TCV. === Heating === Electrostatic heating: an electric field can do work on charged ions or electrons, heating them. 
Neutral beam injection: hydrogen is ionized and accelerated by an electric field to form a charged beam that is shone through a source of neutral hydrogen gas towards the plasma which itself is ionized and contained by a magnetic field. Some of the intermediate hydrogen gas is accelerated towards the plasma by collisions with the charged beam while remaining neutral: this neutral beam is thus unaffected by the magnetic field and so reaches the plasma. Once inside the plasma the neutral beam transmits energy to the plasma by collisions which ionize it and allow it to be contained by the magnetic field, thereby both heating and refueling the reactor in one operation. The remainder of the charged beam is diverted by magnetic fields onto cooled beam dumps. Radio frequency heating: a radio wave causes the plasma to oscillate (i.e., microwave oven). This is also known as electron cyclotron resonance heating, using for example gyrotrons, or dielectric heating. Magnetic reconnection: when plasma gets dense, its electromagnetic properties can change, which can lead to magnetic reconnection. Reconnection helps fusion because it instantly dumps energy into a plasma, heating it quickly. Up to 45% of the magnetic field energy can heat the ions. Magnetic oscillations: varying electric currents can be supplied to magnetic coils that heat plasma confined within a magnetic wall. Antiproton annihilation: antiprotons injected into a mass of fusion fuel can induce thermonuclear reactions. This possibility as a method of spacecraft propulsion, known as antimatter-catalyzed nuclear pulse propulsion, was investigated at Pennsylvania State University in connection with the proposed AIMStar project. === Measurement === The diagnostics of a fusion scientific reactor are extremely complex and varied. The diagnostics required for a fusion power reactor will be various but less complicated than those of a scientific reactor as by the time of commercialization, many real-time feedback and control diagnostics will have been perfected. However, the operating environment of a commercial fusion reactor will be harsher for diagnostic systems than in a scientific reactor because continuous operations may involve higher plasma temperatures and higher levels of neutron irradiation. In many proposed approaches, commercialization will require the additional ability to measure and separate diverter gases, for example helium and impurities, and to monitor fuel breeding, for instance the state of a tritium breeding liquid lithium liner. The following are some basic techniques. Flux loop: a loop of wire is inserted into the magnetic field. As the field passes through the loop, a current is made. The current measures the total magnetic flux through that loop. This has been used on the National Compact Stellarator Experiment, the polywell, and the LDX machines. A Langmuir probe, a metal object placed in a plasma, can be employed. A potential is applied to it, giving it a voltage against the surrounding plasma. The metal collects charged particles, drawing a current. As the voltage changes, the current changes. This makes an IV Curve. The IV-curve can be used to determine the local plasma density, potential and temperature. Thomson scattering: "Light scatters" from plasma can be used to reconstruct plasma behavior, including density and temperature. It is common in Inertial confinement fusion, Tokamaks, and fusors. In ICF systems, firing a second beam into a gold foil adjacent to the target makes x-rays that traverse the plasma. 
In tokamaks, this can be done using mirrors and detectors to reflect light. Neutron detectors: Several types of neutron detectors can record the rate at which neutrons are produced. X-ray detectors Visible, IR, UV, and X-rays are emitted anytime a particle changes velocity. If the reason is deflection by a magnetic field, the radiation is cyclotron radiation at low speeds and synchrotron radiation at high speeds. If the reason is deflection by another particle, plasma radiates X-rays, known as Bremsstrahlung radiation. === Power production === Neutron blankets absorb neutrons, which heats the blanket. Power can be extracted from the blanket in various ways: Steam turbines can be driven by heat transferred into a working fluid that turns into steam, driving electric generators. Neutron blankets: These neutrons can regenerate spent fission fuel. Tritium can be produced using a breeder blanket of liquid lithium or a helium cooled pebble bed made of lithium-bearing ceramic pebbles. Direct conversion: The kinetic energy of a particle can be converted into voltage. It was first suggested by Richard F. Post in conjunction with magnetic mirrors, in the late 1960s. It has been proposed for Field-Reversed Configurations as well as Dense Plasma Focus devices. The process converts a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. This method has demonstrated an experimental efficiency of 48 percent. Traveling-wave tubes pass charged helium atoms at several megavolts and just coming off the fusion reaction through a tube with a coil of wire around the outside. This passing charge at high voltage pulls electricity through the wire. === Confinement === Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. General principles: Equilibrium: The forces acting on the plasma must be balanced. One exception is inertial confinement, where the fusion must occur faster than the dispersal time. Stability: The plasma must be constructed so that disturbances will not lead to the plasma dispersing. Transport or conduction: The loss of material must be sufficiently slow. The plasma carries energy off with it, so rapid loss of material will disrupt fusion. Material can be lost by transport into different regions or conduction through a solid or liquid. To produce self-sustaining fusion, part of the energy released by the reaction must be used to heat new reactants and maintain the conditions for fusion. ==== Magnetic confinement ==== ===== Magnetic Mirror ===== Magnetic mirror effect. If a particle follows the field line and enters a region of higher field strength, the particles can be reflected. Several devices apply this effect. The most famous was the magnetic mirror machines, a series of devices built at LLNL from the 1960s to the 1980s. Other examples include magnetic bottles and Biconic cusp. Because the mirror machines were straight, they had some advantages over ring-shaped designs. The mirrors were easier to construct and maintain and direct conversion energy capture was easier to implement. Poor confinement has led this approach to be abandoned, except in the polywell design. ===== Magnetic loops ===== Magnetic loops bend the field lines back on themselves, either in circles or more commonly in nested toroidal surfaces. The most highly developed systems of this type are the tokamak, the stellarator, and the reversed field pinch. 
Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. ==== Inertial confinement ==== Inertial confinement is the use of rapid implosion to heat and confine plasma. A shell surrounding the fuel is imploded using a direct laser blast (direct drive), a secondary x-ray blast (indirect drive), or heavy beams. The fuel must be compressed to about 30 times solid density with energetic beams. Direct drive can in principle be efficient, but insufficient uniformity has prevented success.:19–20 Indirect drive uses beams to heat a shell, driving the shell to radiate x-rays, which then implode the pellet. The beams are commonly laser beams, but ion and electron beams have been investigated.:182–193 ===== Electrostatic confinement ===== Electrostatic confinement fusion devices use electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Fusion rates in fusors are low because of competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a magnetically shielded grid, a Penning trap, the polywell, and the F1 cathode driver concept. == Fuels == The fuels considered for fusion power are mainly the heavier isotopes of hydrogen—deuterium and tritium. Deuterium is abundant on Earth in the form of semiheavy water. Tritium, decaying with a half-life of 12 years, must be produced. Fusion reactor concepts assume, as a component, a proposed lithium "breeding blanket" technology surrounding the reactor. Helium-3 is a more speculative fuel, which must be mined extraterrestrially or produced by other nuclear reactions. The protium–boron-11 reaction is extremely speculative, but minimizes neutron radiation. === Deuterium, tritium === The easiest nuclear reaction, at the lowest energy, is D+T: ²₁D + ³₁T → ⁴₂He (3.5 MeV) + ¹₀n (14.1 MeV) This reaction is common in research, industrial and military applications, usually as a neutron source. Deuterium is a naturally occurring isotope of hydrogen and is commonly available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the uranium enrichment process. Tritium is a natural isotope of hydrogen, but because it has a short half-life of 12.32 years, it is hard to find, store, produce, and is expensive. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions: ¹₀n + ⁶₃Li → ³₁T + ⁴₂He and ¹₀n + ⁷₃Li → ³₁T + ⁴₂He + ¹₀n The reactant neutron is supplied by the D-T fusion reaction shown above, which is also the reaction with the greatest energy yield. The reaction with ⁶Li is exothermic, providing a small energy gain for the reactor. The reaction with ⁷Li is endothermic, but does not consume the neutron. Neutron multiplication reactions are required to replace the neutrons lost to absorption by other elements.
Leading candidate neutron multiplication materials are beryllium and lead, but the ⁷Li reaction helps to keep the neutron population high. Natural lithium is mainly ⁷Li, which has a low tritium production cross section compared to ⁶Li so most reactor designs use breeding blankets with enriched ⁶Li. Drawbacks commonly attributed to D-T fusion power include: The supply of neutrons results in neutron activation of the reactor materials.:242 80% of the resultant energy is carried off by neutrons, which limits the use of direct energy conversion. It requires the radioisotope tritium. Tritium may leak from reactors. Some estimates suggest that this would represent a substantial environmental radioactivity release. The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of fission power reactors, posing problems for material design. After a series of D-T tests at JET, the vacuum vessel was sufficiently radioactive that it required remote handling for the year following the tests. In a production setting, the neutrons would react with lithium in the breeding blanket composed of lithium ceramic pebbles or liquid lithium, yielding tritium. The energy of the neutrons ends up in the lithium, which would then be transferred to drive electrical production. The lithium blanket protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, use lithium inside the reactor core as a design element. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this design was demonstrated in the Lithium Tokamak Experiment. === Deuterium === Fusing two deuterium nuclei is the second easiest fusion reaction. The reaction has two branches that occur with nearly equal probability: ²₁D + ²₁D → ³₁T + ¹₁H and ²₁D + ²₁D → ³₂He + ¹₀n This reaction is also common in research. The optimum energy to initiate this reaction is 15 keV, only slightly higher than that for the D-T reaction. The first branch produces tritium, so that a D-D reactor is not tritium-free, even though it does not require an input of tritium or lithium. Unless the tritons are quickly removed, most of the tritium produced is burned in the reactor, which reduces the handling of tritium, with the disadvantage of producing more, and higher-energy, neutrons. The neutron from the second branch of the D-D reaction has an energy of only 2.45 MeV (0.393 pJ), while the neutron from the D-T reaction has an energy of 14.1 MeV (2.26 pJ), resulting in greater isotope production and material damage. When the tritons are removed quickly while allowing the ³He to react, the fuel cycle is called "tritium suppressed fusion". The removed tritium decays to ³He with a 12.5 year half life. By recycling the ³He decay product into the reactor, the fusion reactor does not require materials resistant to fast neutrons. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons would be only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from lithium resources and a somewhat softer neutron spectrum. The disadvantage of D-D compared to D-T is that the energy confinement time (at a given pressure) must be 30 times longer and the power produced (at a given pressure and volume) is 68 times less. Assuming complete removal of tritium and ³He recycling, only 6% of the fusion energy is carried by neutrons.
Tritium-suppressed D-D fusion requires an energy confinement time 10 times longer than D-T, and double the plasma temperature. === Deuterium, helium-3 === A second-generation approach to controlled fusion power involves combining helium-3 (³He) and deuterium (²H): ²₁D + ³₂He → ⁴₂He + ¹₁H This reaction produces ⁴He and a high-energy proton. As with the p-¹¹B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several pathways). In practice, D-D side reactions produce a significant number of neutrons, leaving p-¹¹B as the preferred cycle for aneutronic fusion. === Proton, boron-11 === Both material science problems and non-proliferation concerns are greatly diminished by aneutronic fusion. Theoretically, the most reactive aneutronic fuel is ³He. However, obtaining reasonable quantities of ³He implies large-scale extraterrestrial mining on the Moon or in the atmosphere of Uranus or Saturn. Therefore, the most promising candidate fuel for such fusion is fusing the readily available protium (i.e. a proton) and boron. Their fusion releases no neutrons, but produces energetic charged alpha (helium) particles whose energy can directly be converted to electrical power: ¹₁H + ¹¹₅B → 3 ⁴₂He Side reactions are likely to yield neutrons that carry only about 0.1% of the power,:177–182 which means that neutron scattering is not used for energy transfer and material activation is reduced several thousand-fold. The optimum temperature for this reaction, 123 keV, is nearly ten times higher than that for pure hydrogen reactions, and energy confinement must be 500 times better than that required for the D-T reaction. In addition, the power density is 2500 times lower than for D-T, although per unit mass of fuel, this is still considerably higher compared to fission reactors. Because the confinement properties of the tokamak and laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the Dense Plasma Focus. In 2013, a research team led by Christine Labaune at École Polytechnique reported a new fusion rate record for proton-boron fusion, with an estimated 80 million fusion reactions during a 1.5 nanosecond laser fire, 100 times greater than reported in previous experiments. == Material selection == Structural material stability is a critical issue. Materials that can survive the high temperatures and neutron bombardment experienced in a fusion reactor are considered key to success. The principal issues are the conditions generated by the plasma, neutron degradation of wall surfaces, and the related issue of plasma-wall surface conditions. Reducing hydrogen permeability is seen as crucial to hydrogen recycling and control of the tritium inventory. Materials with the lowest bulk hydrogen solubility and diffusivity provide the optimal candidates for stable barriers. A few pure metals, including tungsten and beryllium, and compounds such as carbides, dense oxides, and nitrides have been investigated. Research has highlighted that coating techniques for preparing well-adhered and perfect barriers are of equivalent importance. The most attractive techniques are those in which an ad-layer is formed by oxidation alone. Alternative methods utilize specific gas environments with strong magnetic and electric fields.
Assessment of barrier performance represents an additional challenge. Gas permeation measurements on classical coated membranes continue to be the most reliable method to determine hydrogen permeation barrier (HPB) efficiency. In 2021, in response to increasing numbers of designs for fusion power reactors for 2040, the United Kingdom Atomic Energy Authority published the UK Fusion Materials Roadmap 2021–2040, identifying five priority areas, with a focus on tokamak-family reactors: Novel materials to minimize the amount of activation in the structure of the fusion power plant; Compounds that can be used within the power plant to optimise breeding of tritium fuel to sustain the fusion process; Magnets and insulators that are resistant to irradiation from fusion reactions—especially under cryogenic conditions; Structural materials able to retain their strength under neutron bombardment at high operating temperatures (over 550 °C); Engineering assurance for fusion materials—providing irradiated sample data and modelled predictions such that plant designers, operators and regulators have confidence that materials are suitable for use in future commercial power stations. === Superconducting materials === In a plasma that is embedded in a magnetic field (known as a magnetized plasma) the fusion rate scales as the magnetic field strength to the 4th power, so doubling the field strength increases the fusion rate sixteen-fold. For this reason, many fusion companies that rely on magnetic fields to control their plasma are trying to develop high temperature superconducting devices. In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making superconducting YBCO wire for fusion reactors. This new wire was shown to conduct between 700 and 2000 amps per square millimeter. The company was able to produce 186 miles of wire in nine months. === Containment considerations === Even on smaller production scales, the containment apparatus is blasted with matter and energy. Designs for plasma containment must consider: A heating and cooling cycle, up to a 10 MW/m² thermal load. Neutron radiation, which over time leads to neutron activation and embrittlement. High energy ions leaving at tens to hundreds of electronvolts. Alpha particles leaving at millions of electronvolts. Electrons leaving at high energy. Light radiation (IR, visible, UV, X-ray). Depending on the approach, these effects may be higher or lower than in fission reactors. One estimate put the radiation at 100 times that of a typical pressurized water reactor. Depending on the approach, other considerations such as electrical conductivity, magnetic permeability, and mechanical strength matter. Materials must also not end up as long-lived radioactive waste. === Plasma-wall surface conditions === For long-term use, each atom in the wall is expected to be hit by a neutron and displaced about 100 times before the material is replaced. These high-energy neutron collisions with the atoms in the wall result in the absorption of the neutrons, forming unstable isotopes of the atoms. When the isotope decays, it may emit alpha particles, protons, or gamma rays. Alpha particles, once stabilized by capturing electrons, form helium atoms which accumulate at grain boundaries and may result in swelling, blistering, or embrittlement of the material. === Selection of materials === Tungsten is widely regarded as the optimal material for plasma-facing components in next-generation fusion devices due to its unique properties and potential for enhancements.
Its low sputtering rates and high melting point make it particularly suitable for the high-stress environments of fusion reactors, allowing it to withstand intense conditions without rapid degradation. Additionally, tungsten's low tritium retention through co-deposition and implantation is essential in fusion contexts, as it helps to minimize the accumulation of this radioactive isotope. Liquid metals (lithium, gallium, tin) have been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates. Graphite features a gross erosion rate due to physical and chemical sputtering amounting to many meters per year, requiring redeposition of the sputtered material. The redeposition site generally does not exactly match the sputter site, allowing net erosion that may be prohibitive. An even larger problem is that tritium is redeposited with the redeposited graphite. The tritium inventory in the wall and dust could build up to many kilograms, representing a waste of resources and a radiological hazard in case of an accident. Graphite found favor as material for short-lived experiments, but appears unlikely to become the primary plasma-facing material (PFM) in a commercial reactor. Ceramic materials such as silicon carbide (SiC) have issues similar to those of graphite. Tritium retention in silicon carbide plasma-facing components is approximately 1.5–2 times higher than in graphite, resulting in reduced fuel efficiency and heightened safety risks in fusion reactors. SiC tends to trap more tritium, limiting its availability for fusion and increasing the risk of hazardous accumulation, complicating tritium management. Furthermore, the chemical and physical sputtering of SiC remains significant, contributing to tritium buildup through co-deposition over time and with increasing particle fluence. As a result, carbon-based materials have been excluded from ITER, DEMO, and similar devices. Tungsten's sputtering rate is orders of magnitude smaller than carbon's, and tritium is much less incorporated into redeposited tungsten. However, tungsten plasma impurities are much more damaging than carbon impurities, and self-sputtering can be high, requiring that the plasma in contact with the tungsten not be too hot (a few tens of eV rather than hundreds of eV). Tungsten also has issues around eddy currents and melting in off-normal events, as well as some radiological issues. Recent work on containment materials has found that certain ceramics can actually improve the longevity of the containment apparatus. Studies on MAX phases, such as titanium silicon carbide, show that under the high operating temperatures of nuclear fusion, the material undergoes a phase transformation from a hexagonal structure to a face-centered-cubic (FCC) structure, driven by helium bubble growth. Helium atoms preferentially accumulate in the Si layer of the hexagonal structure, as the Si atoms are more mobile than the Ti-C slabs. As more atoms are trapped, the Ti-C slab is peeled off, causing the Si atoms to become highly mobile interstitial atoms in the new FCC structure. Lattice strain induced by the He bubbles causes Si atoms to diffuse out of compressive areas, typically towards the surface of the material, forming a protective silicon dioxide layer. Doping vessel materials with iron silicate has emerged as a promising approach to enhance containment materials in fusion reactors, as well.
This method targets helium embrittlement at grain boundaries, a common issue that arises as helium atoms accumulate and form bubbles. Over time, these bubbles coalesce at grain boundaries, causing them to expand and degrade the material's structural integrity. By contrast, introducing iron silicate creates nucleation sites within the metal matrix that are more thermodynamically favorable for helium aggregation. This localized congregation around iron silicate nanoparticles induces matrix strain rather than weakening grain boundaries, preserving the material’s strength and longevity. == Accident scenarios and the environment == === Accident potential === Accident potential and effect on the environment are critical to social acceptance of nuclear fusion, also known as a social license. Fusion reactors are not subject to catastrophic meltdown. It requires precise and controlled temperature, pressure and magnetic field parameters to produce net energy, and any damage or loss of required control would rapidly quench the reaction. Fusion reactors operate with seconds or even microseconds worth of fuel at any moment. Without active refueling, the reactions immediately quench. The same constraints prevent runaway reactions. Although the plasma is expected to have a volume of 1,000 m3 (35,000 cu ft) or more, the plasma typically contains only a few grams of fuel. By comparison, a fission reactor is typically loaded with enough fuel for months or years, and no additional fuel is necessary to continue the reaction. This large fuel supply is what offers the possibility of a meltdown. In magnetic containment, strong fields develop in coils that are mechanically held in place by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to other industrial accidents or an MRI machine quench/explosion, and could be effectively contained within a containment building similar to those used in fission reactors. In laser-driven inertial containment the larger size of the reaction chamber reduces the stress on materials. Although failure of the reaction chamber is possible, stopping fuel delivery prevents catastrophic failure. === Magnet quench === A magnet quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil exits the superconducting state (becomes normal). This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely a magnet defect can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal over several seconds, depending on the size of the superconducting coil. This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and the cryogenic fluid boils away. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when a quench is detected. 
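The energy released in a quench is the magnetic energy stored in the coil, E = ½LI², which must be absorbed somewhere, typically in external dump resistors. The sketch below (Python) illustrates the scale with assumed, order-of-magnitude values for circuit inductance, operating current, and dump-block mass; none of the numbers describe a specific magnet.

```python
def stored_magnetic_energy(inductance_h, current_a):
    """Energy stored in a magnet circuit, E = 1/2 * L * I^2, in joules."""
    return 0.5 * inductance_h * current_a**2

def adiabatic_temperature_rise(energy_j, mass_kg, specific_heat_j_per_kg_k):
    """Temperature rise if the energy is dumped into a resistor block with no cooling."""
    return energy_j / (mass_kg * specific_heat_j_per_kg_k)

# Assumed, order-of-magnitude numbers for a large superconducting magnet circuit.
L = 15.0        # henries (assumed circuit inductance)
I = 12_000.0    # amperes (assumed operating current)
E = stored_magnetic_energy(L, I)      # of order 1 GJ for a full string of magnets

dump_mass = 5_000.0    # kg of metal dump blocks (assumed)
c_p = 450.0            # J/(kg K), roughly steel
print(f"stored energy: {E/1e6:.0f} MJ")
print(f"adiabatic temperature rise of dump block: "
      f"{adiabatic_temperature_rise(E, dump_mass, c_p):.0f} K")
```

With these assumed inputs the dump blocks heat by several hundred kelvin, consistent with the behaviour described for large accelerator magnets below.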
If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air. A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, destroying multiple magnets. In order to prevent a recurrence, the LHC's superconducting magnets are equipped with fast-ramping heaters that are activated when a quench event is detected. The dipole bending magnets are connected in series. Each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into massive blocks of metal that heat up to several hundred degrees Celsius—because of resistive heating—in seconds. A magnet quench is a "fairly routine event" during the operation of a particle accelerator. === Atmospheric tritium release === The natural product of the fusion reaction is a small amount of helium, which is harmless to life. Hazardous tritium is difficult to retain completely. Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, because of tritium's short half-life (12.32 years) and very low decay energy (~14.95 keV), and because it does not bioaccumulate (it cycles out of the body as water, with a biological half-life of 7 to 14 days). ITER incorporates total containment facilities for tritium. Calculations suggest that about 1 kilogram (2.2 lb) of tritium and other radioactive gases in a typical power station would be present. The amount is small enough that it would dilute to legally acceptable limits by the time they reached the station's perimeter fence. The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, are estimated to be minor compared to fission. They would include accidental releases of lithium or tritium or mishandling of radioactive reactor components. === Radioactive waste === Fusion reactors create far less radioactive material than fission reactors. Further, the material it creates is less damaging biologically, and the radioactivity dissipates within a time period that is well within existing engineering capabilities for safe long-term waste storage. In specific terms, except in the case of aneutronic fusion, the neutron flux turns the structural materials radioactive. The amount of radioactive material at shut-down may be comparable to that of a fission reactor, with important differences. The half-lives of fusion and neutron activation radioisotopes tend to be less than those from fission, so that the hazard decreases more rapidly. Whereas fission reactors produce waste that remains radioactive for thousands of years, the radioactive material in a fusion reactor (other than tritium) would be the reactor core itself and most of this would be radioactive for about 50 years, with other low-level waste being radioactive for another 100 years or so thereafter. The fusion waste's short half-life eliminates the challenge of long-term storage. By 500 years, the material would have the same radiotoxicity as coal ash. Nonetheless, classification as intermediate level waste rather than low-level waste may complicate safety discussions. 
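The decay timescales quoted above follow from simple exponential decay. As an illustration, the sketch below (Python) computes the activity of a tritium inventory and the fraction remaining after a given time, using the 12.32-year half-life mentioned earlier; the 1 kg inventory is taken as an assumed example, and treating it as pure tritium is a simplification.

```python
import math

AVOGADRO = 6.022e23
T_HALF_TRITIUM_S = 12.32 * 365.25 * 24 * 3600   # half-life from the text, in seconds

def activity_bq(mass_g, molar_mass_g=3.016, t_half_s=T_HALF_TRITIUM_S):
    """Activity A = lambda * N of a pure radioisotope inventory, in becquerels."""
    n_atoms = mass_g / molar_mass_g * AVOGADRO
    decay_const = math.log(2) / t_half_s
    return decay_const * n_atoms

def remaining_fraction(years, t_half_years=12.32):
    """Fraction of the original inventory left after `years` of decay."""
    return 0.5 ** (years / t_half_years)

# Example: the roughly 1 kg site inventory mentioned above, assumed pure tritium.
a0 = activity_bq(1000.0)
print(f"initial activity: {a0:.2e} Bq")
for yrs in (12.32, 50.0, 100.0):
    print(f"after {yrs:6.2f} y: {remaining_fraction(yrs) * 100:5.2f} % remains")
```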
The choice of materials is less constrained than in conventional fission, where many materials are required for their specific neutron cross-sections. Fusion reactors can be designed using "low-activation" materials that do not easily become radioactive. Vanadium, for example, becomes much less radioactive than stainless steel. Carbon fiber materials are also low-activation, are strong and light, and are promising for laser-inertial reactors where a magnetic field is not required. === Fuel reserves === Fusion power commonly proposes the use of deuterium as fuel and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10²⁰ J/yr) and that this does not increase in the future, which is unlikely, then known current lithium reserves would last 3000 years. Lithium from sea water would last 60 million years, however, and a more complicated fusion process using only deuterium would have fuel for 150 billion years. To put this in context, 150 billion years is close to 30 times the remaining lifespan of the Sun, and more than 10 times the estimated age of the universe. == Potential military usage == In some scenarios, fusion power technology could be adapted to produce materials for military purposes. A huge amount of tritium could be produced by a fusion power station; tritium is used in the trigger of hydrogen bombs and in modern boosted fission weapons, but it can be produced in other ways. The energetic neutrons from a fusion reactor could be used to breed weapons-grade plutonium or uranium for an atomic bomb (for example by transmutation of ²³⁸U to ²³⁹Pu, or ²³²Th to ²³³U). A study conducted in 2011 assessed three scenarios: Small-scale fusion station: As a result of much higher power consumption, heat dissipation and a more recognizable design compared to enrichment gas centrifuges, this choice would be much easier to detect and therefore implausible. Commercial facility: The production potential is significant. But no fertile or fissile substances necessary for the production of weapon-usable materials need to be present at a civil fusion system at all. If not shielded, detection of these materials can be done by their characteristic gamma radiation. The underlying redesign could be detected by regular design information verification. In the (technically more feasible) case of solid breeder blanket modules, it would be necessary for incoming components to be inspected for the presence of fertile material, otherwise plutonium for several weapons could be produced each year. Prioritizing weapon-grade material regardless of secrecy: The fastest way to produce weapon-usable material was seen in modifying a civil fusion power station. No weapons-compatible material is required during civil use. Even without the need for covert action, such a modification would take about two months to start production and at least an additional week to generate a significant amount. This was considered to be enough time to detect a military use and to react with diplomatic or military means. To stop the production, a military destruction of parts of the facility while leaving out the reactor would be sufficient. Another study concluded "...large fusion reactors—even if not designed for fissile material breeding—could easily produce several hundred kg Pu per year with high weapon quality and very low source material requirements."
It was emphasized that the implementation of features for intrinsic proliferation resistance might only be possible at an early phase of research and development. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with magnetic confinement fusion. == Economics == The European Union spent almost €10 billion through the 1990s. ITER represents an investment of over twenty billion dollars, and possibly tens of billions more, including in kind contributions. Under the European Union's Sixth Framework Programme, nuclear fusion research received €750 million (in addition to ITER funding), compared with €810 million for sustainable energy research, putting research into fusion power well ahead of that of any single rival technology. The United States Department of Energy has allocated $US367M–$US671M every year since 2010, peaking in 2020, with plans to reduce investment to $US425M in its FY2021 Budget Request. About a quarter of this budget is directed to support ITER. The size of the investments and time lines meant that fusion research was traditionally almost exclusively publicly funded. However, starting in the 2010s, the promise of commercializing a paradigm-changing low-carbon energy source began to attract a raft of companies and investors. Over two dozen start-up companies attracted over one billion dollars from roughly 2000 to 2020, mainly from 2015, and a further three billion in funding and milestone related commitments in 2021, with investors including Jeff Bezos, Peter Thiel, and Bill Gates, as well as institutional investors including Legal & General, and energy companies including Equinor, Eni, Chevron, and the Chinese ENN Group. In 2021, Commonwealth Fusion Systems (CFS) obtained $1.8 billion in scale-up funding, and Helion Energy obtained a half-billion dollars with an additional $1.7 billion contingent on meeting milestones. Scenarios developed in the 2000s and early 2010s discussed the effects of the commercialization of fusion power on the future of human civilization. Using nuclear fission as a guide, these saw ITER and later DEMO as bringing online the first commercial reactors around 2050 and a rapid expansion after mid-century. Some scenarios emphasized "fusion nuclear science facilities" as a step beyond ITER. However, the economic obstacles to tokamak-based fusion power remain immense, requiring investment to fund prototype tokamak reactors and development of new supply chains, a problem which will affect any kind of fusion reactor. Tokamak designs appear to be labour-intensive, while the commercialization risk of alternatives like inertial fusion energy is high due to the lack of government resources. Scenarios since 2010 note computing and material science advances enabling multi-phase national or cost-sharing "Fusion Pilot Plants" (FPPs) along various technology pathways, such as the UK Spherical Tokamak for Energy Production, within the 2030–2040 time frame. Notably, in June 2021, General Fusion announced it would accept the UK government's offer to host the world's first substantial public-private partnership fusion demonstration plant, at Culham Centre for Fusion Energy. The plant will be constructed from 2022 to 2025 and is intended to lead the way for commercial pilot plants in the late 2025s. The plant will be 70% of full scale and is expected to attain a stable plasma of 150 million degrees. 
In the United States, cost-sharing public-private partnership FPPs appear likely, and in 2022 the DOE announced a new Milestone-Based Fusion Development Program as the centerpiece of its Bold Decadal Vision for Commercial Fusion Energy, which envisages private sector-led teams delivering FPP pre-conceptual designs, defining technology roadmaps, and pursuing the R&D necessary to resolve critical-path scientific and technical issues towards an FPP design. Compact reactor technology based on such demonstration plants may enable commercialization via a fleet approach from the 2030s if early markets can be located. The widespread adoption of non-nuclear renewable energy has transformed the energy landscape. Such renewables are projected to supply 74% of global energy by 2050. The steady fall of renewable energy prices challenges the economic competitiveness of fusion power. Some economists suggest fusion power is unlikely to match other renewable energy costs. Fusion plants are expected to face large start up and capital costs. Moreover, operation and maintenance are likely to be costly. While the costs of the China Fusion Engineering Test Reactor are not well known, an EU DEMO fusion concept was projected to feature a levelized cost of energy (LCOE) of $121/MWh. Fuel costs are low, but economists suggest that the energy cost for a one-gigawatt plant would increase by $16.5 per MWh for every $1 billion increase in the capital investment in construction. There is also the risk that easily obtained lithium will be used up making batteries. Obtaining it from seawater would be very costly and might require more energy than the energy that would be generated. In contrast, renewable levelized cost of energy estimates are substantially lower. For instance, the 2019 levelized cost of energy of solar energy was estimated to be $40-$46/MWh, on shore wind was estimated at $29-$56/MWh, and offshore wind was approximately $92/MWh. However, fusion power may still have a role filling energy gaps left by renewables, depending on how administration priorities for energy and environmental justice influence the market. In the 2020s, socioeconomic studies of fusion that began to consider these factors emerged, and in 2022 EUROFusion launched its Socio-Economic Studies and Prospective Research and Development strands to investigate how such factors might affect commercialization pathways and timetables. Similarly, in April 2023 Japan announced a national strategy to industrialise fusion. Thus, fusion power may work in tandem with other renewable energy sources rather than becoming the primary energy source. In some applications, fusion power could provide the base load, especially if including integrated thermal storage and cogeneration and considering the potential for retrofitting coal plants. == Regulation == As fusion pilot plants move within reach, legal and regulatory issues must be addressed. In September 2020, the United States National Academy of Sciences consulted with private fusion companies to consider a national pilot plant. The following month, the United States Department of Energy, the Nuclear Regulatory Commission (NRC) and the Fusion Industry Association co-hosted a public forum to begin the process. In November 2020, the International Atomic Energy Agency (IAEA) began working with various nations to create safety standards such as dose regulations and radioactive waste handling. In January and March 2021, NRC hosted two public meetings on regulatory frameworks. 
A public-private cost-sharing approach was endorsed in the December 27 H.R.133 Consolidated Appropriations Act, 2021, which authorized $325 million over five years for a partnership program to build fusion demonstration facilities, with a 100% match from private industry. Subsequently, the UK Regulatory Horizons Council published a report calling for a fusion regulatory framework by early 2022 in order to position the UK as a global leader in commercializing fusion power. This call was met by the UK government publishing in October 2021 both its Fusion Green Paper and its Fusion Strategy, to regulate and commercialize fusion, respectively. Then, in April 2023, in a decision likely to influence other nuclear regulators, the NRC announced in a unanimous vote that fusion energy would be regulated not as fission but under the same regulatory regime as particle accelerators. Then, in October 2023 the UK government, in enacting the Energy Act 2023, made the UK the first country to legislate for fusion separately from fission, to support planning and investment, including the UK's planned prototype fusion power plant for 2040, STEP; the UK is working with Canada and Japan in this regard. Meanwhile, in February 2024 the US House of Representatives passed the Atomic Energy Advancement Act, which includes the Fusion Energy Act, which establishes a regulatory framework for fusion energy systems. == Geopolitics == Given the potential of fusion to transform the world's energy industry and mitigate climate change, fusion science has traditionally been seen as an integral part of peace-building science diplomacy. However, technological developments and private sector involvement have raised concerns over intellectual property, regulatory administration, global leadership, equity, and potential weaponization. These challenge ITER's peace-building role and led to calls for a global commission. Fusion power significantly contributing to climate change mitigation by 2050 seems unlikely without substantial breakthroughs and a space race mentality emerging, but a contribution by 2100 appears possible, with the extent depending on the type and particularly cost of technology pathways. Developments from late 2020 onwards have led to talk of a "new space race" with multiple entrants, pitting the US against China and the UK's STEP FPP, with China now outspending the US and threatening to leapfrog US technology. On September 24, 2020, the United States House of Representatives approved a research and commercialization program. The Fusion Energy Research section incorporated a milestone-based, cost-sharing, public-private partnership program modeled on NASA's COTS program, which launched the commercial space industry. In February 2021, the National Academies published Bringing Fusion to the U.S. Grid, recommending a market-driven, cost-sharing plant for 2035–2040, and the launch of the Congressional Bipartisan Fusion Caucus followed. In December 2020, an independent expert panel reviewed EUROfusion's design and R&D work on DEMO, and EUROfusion confirmed it was proceeding with its Roadmap to Fusion Energy, beginning the conceptual design of DEMO in partnership with the European fusion community, suggesting an EU-backed machine had entered the race. In October 2023, the UK-oriented Agile Nations group announced a fusion working group. One month later, the UK and the US announced a bilateral partnership to accelerate fusion energy. Then, in December 2023 at COP28 the US announced a US global strategy to commercialize fusion energy.
Then, in April 2024, Japan and the US announced a similar partnership, and in May of the same year, the G7 announced a G7 Working Group on Fusion Energy to promote international collaborations to accelerate the development of commercial energy and promote R&D between countries, as well as rationalize fusion regulation. Later the same year, the US partnered with the IAEA to launch the Fusion Energy Solutions Taskforce, to collaboratively crowdsource ideas to accelerate commercial fusion energy, in line with the US COP28 statement. Specifically to resolve the tritium supply problem, in February 2024, the UK (UKAEA) and Canada (Canadian Nuclear Laboratories) announced an agreement by which Canada could refurbish its Candu deuterium-uranium tritium-generating heavywater nuclear plants and even build new ones, guaranteeing a supply of tritium into the 2070s, while the UKAEA would test breeder materials and simulate how tritium could be captured, purified, and injected back into the fusion reaction. In 2024, both South Korea and Japan announced major initiatives to accelerate their national fusion strategies, by building electricity-generating public-private fusion plants in the 2030s, aiming to begin operations in the 2040s and 2030s respectively. == Advantages == Fusion power promises to provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use. The fuel (primarily deuterium) exists abundantly in the ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium. Although this is only about 0.015%, seawater is plentiful and easy to access, implying that fusion could supply the world's energy needs for millions of years. First generation fusion plants are expected to use the deuterium-tritium fuel cycle. This will require the use of lithium for breeding of the tritium. It is not known for how long global lithium supplies will suffice to supply this need as well as those of the battery and metallurgical industries. It is expected that second generation plants will move on to the more formidable deuterium-deuterium reaction. The deuterium-helium-3 reaction is also of interest, but the light helium isotope is practically non-existent on Earth. It is thought to exist in useful quantities in the lunar regolith, and is abundant in the atmospheres of the gas giant planets. Fusion power could be used for so-called "deep space" propulsion within the Solar System and for interstellar space exploration where solar energy is not available, including via antimatter-fusion hybrid drives. === Helium production === Deuterium–tritium fusion produces helium-4 as a by-product. == Disadvantages == Fusion power has a number of disadvantages. Because 80 percent of the energy in any reactor fueled by deuterium and tritium appears in the form of neutron streams, such reactors share many of the drawbacks of fission reactors. This includes the production of large quantities of radioactive waste and serious radiation damage to reactor components. Additionally, naturally occurring tritium is extremely rare. While the hope is that fusion reactors can breed their own tritium, tritium self-sufficiency is extremely challenging, not least because tritium is difficult to contain (tritium has leaked from 48 of 65 nuclear sites in the US). In any case the reserve and start-up tritium inventory requirements are likely to be unacceptably large. 
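The scale of the tritium-supply problem can be made concrete with a burn-rate estimate: each D-T reaction releases about 17.6 MeV, so a given fusion power fixes the number of tritons consumed per second. The sketch below (Python) works this through for an assumed plant size; the 2.7 GW fusion-power figure is an illustrative assumption for a plant of roughly 1 GW electrical output, not a reference design.

```python
E_DT_J = 17.6e6 * 1.602e-19     # energy per D-T reaction, in joules
TRITIUM_MOLAR_MASS_G = 3.016
AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def tritium_burn_rate_kg_per_year(fusion_power_w):
    """Tritium consumed per year if all fusion power comes from D-T reactions."""
    reactions_per_s = fusion_power_w / E_DT_J
    grams_per_s = reactions_per_s * TRITIUM_MOLAR_MASS_G / AVOGADRO
    return grams_per_s * SECONDS_PER_YEAR / 1000.0

# Assumed example: ~2.7 GW of fusion power, roughly what a 1 GWe plant might need.
print(f"{tritium_burn_rate_kg_per_year(2.7e9):.0f} kg of tritium consumed per year")
```

The result, of order 100 kg per year for a single plant, is why breeding blankets and the start-up inventory question loom so large.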
If reactors can be made to operate using only deuterium fuel, then the tritium replenishment issue is eliminated and neutron radiation damage may be reduced. However, the probabilities of deuterium-deuterium reactions are about 20 times lower than for deuterium-tritium. Additionally, the temperature needed is about 3 times higher than for deuterium-tritium (see cross section). The higher temperatures and lower reaction rates thus significantly complicate the engineering challenges. == History == === Milestones in fusion experiments === === Early experiments === The first machine to achieve controlled thermonuclear fusion was a pinch machine at Los Alamos National Laboratory called Scylla I at the start of 1958. The team that achieved it was led by a British scientist named James Tuck and included a young Marshall Rosenbluth. Tuck had been involved in the Manhattan project, but had switched to working on fusion in the early 1950s. He applied for funding for the project as part of a White House sponsored contest to develop a fusion reactor along with Lyman Spitzer. The previous year, 1957, the British had claimed that they had achieved thermonuclear fusion reactions on the Zeta pinch machine. However, it turned out that the neutrons they had detected were from beam-target interactions, not fusion, and they withdrew the claim. A CERN-sponsored study group on controlled thermonuclear fusion met from 1958 to 1964. This group ceased when it became clear that CERN discontinued its limited support for plasma physics. Scylla I was a classified machine at the time, so the achievement was hidden from the public. A traditional Z-pinch passes a current down the center of a plasma, which makes a magnetic force around the outside which squeezes the plasma to fusion conditions. Scylla I was a θ-pinch, which used deuterium to pass a current around the outside of its cylinder to create a magnetic force in the center. After the success of Scylla I, Los Alamos went on to build multiple pinch machines over the next few years. Spitzer continued his stellarator research at Princeton. While fusion did not immediately transpire, the effort led to the creation of the Princeton Plasma Physics Laboratory. === First tokamak === In the early 1950s, Soviet physicists I.E. Tamm and A.D. Sakharov developed the concept of the tokamak, combining a low-power pinch device with a low-power stellarator. A.D. Sakharov's group constructed the first tokamaks, achieving the first quasistationary fusion reaction.:90 Over time, the "advanced tokamak" concept emerged, which included non-circular plasma, internal diverters and limiters, superconducting magnets, operation in the "H-mode" island of increased stability, and the compact tokamak, with the magnets on the inside of the vacuum chamber. === First inertial confinement experiments === Laser fusion was suggested in 1962 by scientists at Lawrence Livermore National Laboratory (LLNL), shortly after the invention of the laser in 1960. Inertial confinement fusion experiments using lasers began as early as 1965. Several laser systems were built at LLNL, including the Argus, the Cyclops, the Janus, the long path, the Shiva laser, and the Nova. Laser advances included frequency-tripling crystals that transformed infrared laser beams into ultraviolet beams and "chirping", which changed a single wavelength into a full spectrum that could be amplified and then reconstituted into one frequency. Laser research cost over one billion dollars in the 1980s. 
=== 1980s === The Tore Supra, JET, T-15, and JT-60 tokamaks were built in the 1980s. In 1984, Martin Peng of ORNL proposed the spherical tokamak with a much smaller radius. It used a single large conductor in the center, with magnets as half-rings off this conductor. The aspect ratio fell to as low as 1.2.:B247:225 Peng's advocacy caught the interest of Derek Robinson, who built the Small Tight Aspect Ratio Tokamak, (START). === 1990s === In 1991, the Preliminary Tritium Experiment at the Joint European Torus achieved the world's first controlled release of fusion power. In 1996, Tore Supra created a plasma for two minutes with a current of almost 1 million amperes, totaling 280 MJ of injected and extracted energy. In 1997, JET produced a peak of 16.1 MW of fusion power (65% of heat to plasma), with fusion power of over 10 MW sustained for over 0.5 sec. === 2000s === "Fast ignition" saved power and moved ICF into the race for energy production. In 2006, China's Experimental Advanced Superconducting Tokamak (EAST) test reactor was completed. It was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields. In March 2009, the laser-driven ICF NIF became operational. In the 2000s, privately backed fusion companies entered the race, including TAE Technologies, General Fusion, and Tokamak Energy. === 2010s === Private and public research accelerated in the 2010s. General Fusion developed plasma injector technology and Tri Alpha Energy tested its C-2U device. The French Laser Mégajoule began operation. NIF achieved net energy gain in 2013, as defined in the very limited sense as the hot spot at the core of the collapsed target, rather than the whole target. In 2014, Phoenix Nuclear Labs sold a high-yield neutron generator that could sustain 5×1011 deuterium fusion reactions per second over a 24-hour period. In 2015, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to produce high-magnetic field coils that it claimed could produce comparable magnetic field strength in a smaller configuration than other designs. In October, researchers at the Max Planck Institute of Plasma Physics in Greifswald, Germany, completed building the largest stellarator to date, the Wendelstein 7-X (W7-X). The W7-X stellarator began Operational phase 1 (OP1.1) on December 10, 2015, successfully producing helium plasma. The objective was to test vital systems and understand the machine's physics. By February 2016, hydrogen plasma was achieved, with temperatures reaching up to 100 million Kelvin. The initial tests used five graphite limiters. After over 2,000 pulses and achieving significant milestones, OP1.1 concluded on March 10, 2016. An upgrade followed, and OP1.2 in 2017 aimed to test an uncooled divertor. By June 2018, record temperatures were reached. W7-X concluded its first campaigns with limiter and island divertor tests, achieving notable advancements by the end of 2018. It soon produced helium and hydrogen plasmas lasting up to 30 minutes. In 2017, Helion Energy's fifth-generation plasma machine went into operation. The UK's Tokamak Energy's ST40 generated "first plasma". The next year, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize MIT's ARC technology. === 2020s === In January 2021, SuperOx announced the commercialization of a new superconducting wire with more than 700 A/mm2 current capability. 
TAE Technologies announced results for its Norman device, holding a temperature of about 60 MK for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. In October, Oxford-based First Light Fusion revealed its projectile fusion project, which fires an aluminum disc at a fusion target, accelerated by a 9 mega-amp electrical pulse, reaching speeds of 20 kilometres per second (12 mi/s). The resulting fusion generates neutrons whose energy is captured as heat. On November 8, in an invited talk to the 63rd Annual Meeting of the APS Division of Plasma Physics, the National Ignition Facility claimed to have triggered fusion ignition in the laboratory on August 8, 2021, for the first time in the 60+ year history of the ICF program. The shot yielded 1.3 MJ of fusion energy, an over 8X improvement on tests done in spring of 2021. NIF estimates that 230 kJ of energy reached the fuel capsule, which resulted in an almost 6-fold energy output from the capsule. A researcher from Imperial College London stated that the majority of the field agreed that ignition had been demonstrated. In November 2021, Helion Energy reported receiving $500 million in Series E funding for its seventh-generation Polaris device, designed to demonstrate net electricity production, with an additional $1.7 billion of commitments tied to specific milestones, while Commonwealth Fusion Systems raised an additional $1.8 billion in Series B funding to construct and operate its SPARC tokamak, the single largest investment in any private fusion company. In April 2022, First Light announced that their hypersonic projectile fusion prototype had produced neutrons compatible with fusion. Their technique electromagnetically fires projectiles at Mach 19 at a caged fuel pellet. The deuterium fuel is compressed at Mach 204, reaching pressure levels of 100 TPa. On December 13, 2022, the US Department of Energy reported that researchers at the National Ignition Facility had achieved a net energy gain from a fusion reaction. The reaction of hydrogen fuel at the facility produced about 3.15 MJ of energy while consuming 2.05 MJ of input. However, while the fusion reactions may have produced more than 3 megajoules of energy—more than was delivered to the target—NIF's 192 lasers consumed 322 MJ of grid energy in the conversion process. In May 2023, the United States Department of Energy (DOE) provided a grant of $46 million to eight companies across seven states to support fusion power plant design and research efforts. This funding, under the Milestone-Based Fusion Development Program, aligns with objectives to demonstrate pilot-scale fusion within a decade and to develop fusion as a carbon-neutral energy source by 2050. The granted companies are tasked with addressing the scientific and technical challenges to create viable fusion pilot plant designs in the next 5–10 years. The recipient firms include Commonwealth Fusion Systems, Focused Energy Inc., Princeton Stellarators Inc., Realta Fusion Inc., Tokamak Energy Inc., Type One Energy Group, Xcimer Energy Inc., and Zap Energy Inc. In December 2023, the largest and most advanced tokamak JT-60SA was inaugurated in Naka, Japan. The reactor is a joint project between Japan and the European Union. The reactor had achieved its first plasma in October 2023. 
Subsequently, South Korea's fusion reactor project, the Korean Superconducting Tokamak Advanced Research, successfully operated for 102 seconds in a high-containment mode (H-mode) containing high ion temperatures of more than 100 million degrees in plasma tests conducted from December 2023 to February 2024. In January 2025, EAST fusion reactor in China was reported to maintain a steady-state high-confinement plasma operation for 1066 seconds. In February 2025, the French Alternative Energies and Atomic Energy Commission (CEA) announced that its WEST tokamak had maintained a stable plasma for 1,337 seconds—over 22 minutes. == Future development == Claims of commercially viable fusion power being relatively imminent have often attracted ridicule within the scientific community. A common joke is that human-engineered fusion has always been promised as 30 years away since the concept was first discussed, or that it has been "20 years away for 50 years". In 2024, Commonwealth Fusion Systems announced plans to build the world's first grid-scale commercial nuclear fusion power plant at the James River Industrial Center in Chesterfield County, Virginia, which is part of the Greater Richmond Region; the plant is designed to produce about 400 MW of electric power, and is intended to come online in the early 2030s. == Records == Fusion power records vary across confinement systems. They include records pertaining to fusion energy release, and more broadly, any plasma confinement parameters, such as temperature and pressure, or discharge time (not confinement time). The record for MCF fusion energy release is 69 MJ, over 6 seconds, set by the Joint European Torus tokamak in 2023. The record for ICF fusion energy release is 3.15 MJ, over 100 picoseconds, set by the National Ignition Facility in 2022, which also achieved Q values greater than unity. == See also == == References == == Bibliography == Clery, Daniel (2014). A Piece of the Sun: The Quest for Fusion Energy. The Overlook Press. ISBN 978-1468310412. Cockburn, Stewart; Ellyard, David (1981). Oliphant, the life and times of Sir Mark Oliphant. Axiom Books. ISBN 978-0959416404. Dean, Stephen O. (2013). Search for the Ultimate Energy Source: A History of the U.S. Fusion Energy Program. Springer Science & Business Media. ISBN 978-1461460374. Hagelstein, Peter L.; McKubre, Michael; Nagel, David; Chubb, Talbot; Hekman, Randall (2004). "New Physical Effects in Metal Deuterides" (PDF). 11th Condensed Matter Nuclear Science. Vol. 11. Washington: US Department of Energy. pp. 23–59. Bibcode:2006cmns...11...23H. CiteSeerX 10.1.1.233.5518. doi:10.1142/9789812774354_0003. ISBN 978-9812566409. Archived from the original (PDF) on 2007-01-06. (manuscript) Hutchinson, Alex (January 8, 2006). "The Year in Science: Physics". Discover Magazine (Online). ISSN 0274-7529. Retrieved 2008-06-20. Nuttall, William J., Konishi, Satoshi, Takeda, Shutaro, and Webbe-Wood, David (2020). Commercialising Fusion Energy: How Small Businesses are Transforming Big Science. IOP Publishing. ISBN 978-0750327176. Molina, Andrés de Bustos (2013). Kinetic Simulations of Ion Transport in Fusion Devices. Springer International Publishing. ISBN 978-3319004211. Nagamine, Kanetada (2003). "Muon Catalyzed Fusion". Introductory Muon Science. Cambridge University Press. ISBN 978-0521038201. Pfalzner, Susanne (2006). An Introduction to Inertial Confinement Fusion. US: Taylor & Francis. ISBN 978-0750307017. == Further reading == Ball, Philip. "The chase for fusion energy". Nature. 
Retrieved 2021-11-22. Oreskes, Naomi, "Fusion's False Promise: Despite a recent advance, nuclear fusion is not the solution to the climate crisis", Scientific American, vol. 328, no. 6 (June 2023), p. 86. == External links == Media related to Nuclear fusion reactors at Wikimedia Commons Data related to Fusion power at Wikidata Fusion Device Information System Fusion Energy Base Fusion Industry Association Princeton Satellite Systems News U.S. Fusion Energy Science Program
Wikipedia/Fusion_energy
Science On a Sphere (SOS) is a spherical projection system created by the United States National Oceanic and Atmospheric Administration (NOAA). It displays high-resolution video on a suspended globe with the aim of better representing global phenomena. Animated images of atmospheric storms, climate change, and ocean temperature can be displayed on the sphere to display environmental processes. SOS systems are most frequently installed in science museums, universities, zoos, and research institutions. == History == SOS was invented by Alexander E. MacDonald, the former director of the Earth System Research Laboratories. MacDonald devised the original idea for SOS in 1995. A team of NOAA staff wrote the SOS software and developed the SOS hardware and system architecture. A patent was awarded to NOAA for Science On a Sphere in August 2005. == Configuration == SOS uses many off-the-shelf hardware and software components. A spherical screen covered in ordinary latex paint hangs suspended in the center of the projection space. The screen is inert; it neither moves nor has any electronic parts. Surrounding the screen are four video projectors, with each projector responsible for one quadrant of screen space. One CPU is used to control the system. The SOS software runs on Linux. === The sphere === The carbon fiber sphere is 68 inches (1.7 m) in diameter and weighs under 50 pounds (23 kg). The sphere is attached to the ceiling or suspension structure with a three-point suspension system to hold the sphere in place and reduce lateral movement and blurring. === Projectors === The system requires high quality, bright, long-duty cycle projectors, rather than smaller portable and consumer models to endure the requirements of 8–10 hours per day, 7 days per week of most public displays. === Computer hardware === The newest configuration uses one Ubuntu Linux computer with NVIDIA Quadro graphics cards, and an iPad app to control the system. === SOS data details === The majority of SOS assets are so-called "datasets". Originally conceived as a video system for showing space-based collections of Earth data, the SOS has grown in its utility. The majority of data that traditionally appears on the SOS screens concerns the Earth, either from near-real-time data acquisition systems, or from processed remote sensing platforms, but recent interest and growth in different kinds of media have started to broaden that library. There are currently over 500 datasets that can be shown on the sphere, including real-time infrared satellite images, Mars, real-time earthquakes, an ocean acidification model, and others, including a number of movies. The data format for SOS datasets is the equirectangular projection, as shown by the map to the right. == SOS User's Collaborative Network == A collaborative network has been established by institutions with access to SOS, as well as partners who are developing educational programming and content for these systems. The SOS Users Collaborative Network is backed by the NOAA Office of Education (OEd) and the NOAA Earth System Research Laboratories (ESRL). == See also == Virtual globe == References == == External links == BWC Visual Technology A distributor and installer of Science on a Sphere Official website SOS Explorer, the desktop version of SOS for Windows and Mac computers, was released in September 2015 and is free for classroom or personal use.
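Because SOS datasets are stored in the equirectangular projection, longitude and latitude map linearly to image columns and rows. A minimal sketch of that mapping is shown below (Python); the 4096 × 2048 frame size is an assumed example, not a requirement of the SOS format.

```python
def latlon_to_pixel(lat_deg, lon_deg, width, height):
    """Map latitude/longitude to (column, row) in an equirectangular image.

    Longitude -180..180 maps linearly to columns 0..width, and
    latitude 90..-90 maps linearly to rows 0..height (north at the top).
    """
    col = (lon_deg + 180.0) / 360.0 * width
    row = (90.0 - lat_deg) / 180.0 * height
    return int(col) % width, min(int(row), height - 1)

# Example with an assumed 4096 x 2048 frame (2:1 aspect ratio, as equirectangular data requires).
print(latlon_to_pixel(0.0, 0.0, 4096, 2048))      # equator / prime meridian -> image centre
print(latlon_to_pixel(40.0, -105.0, 4096, 2048))  # roughly Boulder, Colorado
```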
Wikipedia/Science_On_a_Sphere
The Modular Ocean Model (MOM) is a three-dimensional ocean circulation model designed primarily for studying the ocean climate system. The model is developed and supported primarily by researchers at the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory (NOAA/GFDL) in Princeton, NJ, USA. == Overview == MOM has traditionally been a level-coordinate ocean model, in which the ocean is divided into boxes whose bottoms are located at fixed depths. Such a representation makes it easy to solve the momentum equations and the well-mixed, weakly stratified layer known as the ocean mixed layer near the ocean surface. However, level coordinate models have problems when it comes to the representation of thin bottom boundary layers (Winton et al., 1998) and thick sea ice. Additionally, because mixing in the ocean interior is largely along lines of constant potential density rather than along lines of constant depth, mixing must be rotated relative to the coordinate grid- a process that can be computationally expensive. By contrast, in codes which represent the ocean in terms of constant-density layers (which represent the flow in the ocean interior much more faithfully)- representation of the ocean mixed layer becomes a challenge. MOM3, MOM4, and MOM5 are used as a code base for the ocean component of the GFDL coupled models used in the IPCC assessment reports, including the GFDL CM2.X physical climate model series and the ESM2M Earth System Model. Versions of MOM have been used in hundreds of scientific papers by authors around the world. MOM4 is used as the basis for the El Nino prediction system employed by the National Centers for Environmental Prediction. == History == MOM owes its genesis to work at GFDL in the late 1960s by Kirk Bryan and Michael Cox. This code, along with a version generated at GFDL and UCLA/NCAR by Bert Semtner, is the ancestor of many of the level-coordinate ocean model codes run around the world today. In the late 1980s, Ron Pacanowski, Keith Dixon, and Tony Rosati at GFDL rewrote the Bryan-Cox-Semtner code in a modular form, enabling different options and configurations to be more easily generated and new physical parameterizations to be more easily included. This version, released on December 5, 1990, became known as Modular Ocean Model v1.0 (MOM1). Further development by Pacanowski, aided by Charles Goldberg and encouraged by community feedback, led to the release of v2.0 (MOM2) in 1995. Pacanowski and Stephen Griffies released v3.0 (MOM3) in 1999. Griffies, Matthew Harrison, Rosati and Pacanowski, with considerable input from a scientific community of hundreds of users, resulted in significant evolution of the code released as v4.0 (MOM4) in 2003. An update, v4.1 (MOM4p1) was released by Griffies in 2009, as was the latest version v5.0 (MOM5), which was released in 2012. == See also == Geophysical Fluid Dynamics Laboratory == References == == External links == MOM6 project MOM5 community website NOAA/GFDL Modular Ocean Model home page History of MOM MOM5 manual MOM4p1 manual MOM4 manual MOM3 manual MOM2 manual MOM1 manual Cox code technical report
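The level-coordinate idea, in which the ocean is divided into boxes whose bottoms sit at fixed depths, can be illustrated with a toy single-column calculation. The sketch below (Python) time-steps vertical diffusion of temperature on fixed z-levels; the grid spacing, diffusivity, and initial profile are arbitrary illustrative values, and the code is a conceptual sketch rather than anything taken from MOM.

```python
import numpy as np

# A "level coordinate" column: 20 boxes, each 10 m thick, bottoms at fixed depths.
nz, dz, dt = 20, 10.0, 3600.0          # number of levels, layer thickness (m), time step (s)
kappa = 1.0e-3                          # vertical diffusivity (m^2/s), assumed constant
temp = 10.0 + 10.0 * np.exp(-np.arange(nz) * dz / 50.0)   # warm surface layer (deg C)

def step_vertical_diffusion(t, kappa, dz, dt):
    """One explicit step of d(temp)/dt = d/dz(kappa d(temp)/dz) on fixed levels.

    Zero-flux boundary conditions at the surface and at the (flat) bottom,
    so the column-integrated heat content is conserved.
    """
    flux = -kappa * np.diff(t) / dz          # diffusive flux across interior interfaces
    dtemp = np.zeros_like(t)
    dtemp[:-1] -= flux * dt / dz
    dtemp[1:] += flux * dt / dz
    return t + dtemp

for _ in range(24):                          # integrate for one day
    temp = step_vertical_diffusion(temp, kappa, dz, dt)
print(np.round(temp[:5], 3))                 # upper levels slowly mix downward
```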
Wikipedia/Modular_Ocean_Model
In continuum mechanics, wave action refers to a conservable measure of the wave part of a motion. For small-amplitude and slowly varying waves, the wave action density is: {\displaystyle {\mathcal {A}}={\frac {E}{\omega _{i}}},} where {\displaystyle E} is the intrinsic wave energy and {\displaystyle \omega _{i}} is the intrinsic frequency of the slowly modulated waves – intrinsic here implying: as observed in a frame of reference moving with the mean velocity of the motion. The action of a wave was introduced by Sturrock (1962) in the study of the (pseudo) energy and momentum of waves in plasmas. Whitham (1965) derived the conservation of wave action – identified as an adiabatic invariant – from an averaged Lagrangian description of slowly varying nonlinear wave trains in inhomogeneous media: {\displaystyle {\frac {\partial }{\partial t}}{\mathcal {A}}+{\boldsymbol {\nabla }}\cdot {\boldsymbol {\mathcal {B}}}=0,} where {\displaystyle {\boldsymbol {\mathcal {B}}}} is the wave-action density flux and {\displaystyle {\boldsymbol {\nabla }}\cdot {\boldsymbol {\mathcal {B}}}} is the divergence of {\displaystyle {\boldsymbol {\mathcal {B}}}}. The description of waves in inhomogeneous and moving media was further elaborated by Bretherton & Garrett (1968) for the case of small-amplitude waves; they also called the quantity wave action (by which name it has been referred to subsequently). For small-amplitude waves the conservation of wave action becomes: {\displaystyle {\frac {\partial }{\partial t}}\left({\frac {E}{\omega _{i}}}\right)+{\boldsymbol {\nabla }}\cdot \left[\left({\boldsymbol {U}}+{\boldsymbol {c}}_{g}\right)\,{\frac {E}{\omega _{i}}}\right]=0,} using {\displaystyle {\mathcal {A}}={\frac {E}{\omega _{i}}}} and {\displaystyle {\boldsymbol {\mathcal {B}}}=\left({\boldsymbol {U}}+{\boldsymbol {c}}_{g}\right){\mathcal {A}},} where {\displaystyle {\boldsymbol {c}}_{g}} is the group velocity and {\displaystyle {\boldsymbol {U}}} the mean velocity of the inhomogeneous moving medium. While the total energy (the sum of the energies of the mean motion and of the wave motion) is conserved for a non-dissipative system, the energy of the wave motion is not conserved, since in general there can be an exchange of energy with the mean motion. However, wave action is a quantity which is conserved for the wave-part of the motion. The equation for the conservation of wave action is for instance used extensively in wind wave models to forecast sea states as needed by mariners, the offshore industry and for coastal defense. Also in plasma physics and acoustics the concept of wave action is used. The derivation of an exact wave-action equation for more general wave motion – not limited to slowly modulated waves, small-amplitude waves or (non-dissipative) conservative systems – was provided and analysed by Andrews & McIntyre (1978) using the framework of the generalised Lagrangian mean for the separation of wave and mean motion. == Notes == == References ==
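For linear deep-water waves the quantities in the small-amplitude conservation law above can be written in closed form: E = ½ρga² for a wave of amplitude a, intrinsic frequency ω_i = √(gk) from the deep-water dispersion relation, and group velocity c_g = ω_i/(2k). The sketch below (Python) evaluates the action density A = E/ω_i and the flux B = (U + c_g)A for assumed wave and current parameters.

```python
import math

g, rho = 9.81, 1025.0      # gravity (m/s^2) and seawater density (kg/m^3)

def wave_action_deep_water(amplitude_m, wavelength_m, mean_current_ms=0.0):
    """Wave action density and its flux for a linear deep-water wave train.

    Uses E = 1/2 rho g a^2, the deep-water dispersion relation omega_i = sqrt(g k),
    and the group velocity c_g = omega_i / (2 k).
    Returns (A, B) with A = E / omega_i and B = (U + c_g) * A.
    """
    k = 2.0 * math.pi / wavelength_m
    omega_i = math.sqrt(g * k)
    c_g = omega_i / (2.0 * k)
    energy = 0.5 * rho * g * amplitude_m**2
    action = energy / omega_i
    flux = (mean_current_ms + c_g) * action
    return action, flux

# Assumed example: 1 m amplitude, 100 m wavelength, 0.5 m/s following current.
A, B = wave_action_deep_water(1.0, 100.0, 0.5)
print(f"action density A = {A:.1f} J s / m^2, flux B = {B:.1f} J / m")
```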
Wikipedia/Wave_action_(continuum_mechanics)
A Wind generated current is a flow in a body of water that is generated by wind friction on its surface. Wind can generate surface currents on water bodies of any size. The depth and strength of the current depend on the wind strength and duration, and on friction and viscosity losses, but are limited to about 400 m depth by the mechanism, and to lesser depths where the water is shallower. The direction of flow is influenced by the Coriolis effect, and is offset to the right of the wind direction in the Northern Hemisphere, and to the left in the Southern Hemisphere. A wind current can induce secondary water flow in the form of upwelling and downwelling, geostrophic flow, and western boundary currents. == Mechanism == Friction between wind and the upper surface of a body of water will drag the water surface along with the wind The surface layer will exert viscous drag on the water just below, which will transfer some of the momentum. This process continues downward, with a continuous reduction in speed of flow with increasing depth as the energy is dissipated. The inertial effect of planetary rotation causes an offset of flow direction with increasing depth to the right in the northern hemisphere and to the left in the southern hemisphere. The mechanism of deflection is called the Coriolis effect, and the variation of flow velocity with depth is called an Ekman spiral. The effect varies with latitude, being very weak at the equator and increasing in strength with latitude. The resultant flow of water caused by this mechanism is known as Ekman transport. A steady wind blowing across a long fetch in deep water for long enough to establish a steady state flow causes the surface water to move at 45° to the wind direction. The variation in flow direction with depth has the water moving perpendicular to wind direction by about 100 to 150 m depth, and flow speed drops to about 4% of surface flow speed by the depth of about 330 to 400 m where the flow direction is opposite to wind direction, below which the effect of wind on the current is considered negligible. The net flow of water over the effective thickness of the current in these conditions is perpendicular to wind direction. Consistent prevailing winds set up persistent circulating surface currents in both hemispheres, and where the current is bounded by continental land masses, the resulting gyres are restricted in longitudinal extent. Seasonal and local winds cause smaller scale and generally transient currents, which dissipate after the driving winds die down. Real conditions often differ, as wind strength and direction vary, and the depth may not be sufficient for the full spiral to develop, so that the angle between wind direction and surface-water movement can be as small as 15°. In deeper water, the angle increases and approaches 45°. A stable pycnocline can inhibit transfer of kinetic energy to deeper waters, providing a depth limit for surface currents. The net inward shallow water flow in a gyre causes the surface level to gradually slope upwards towards the centre. This induces a horizontal pressure gradient which leads to a balancing geostrophic flow. === Boundary currents === Boundary currents are ocean currents with dynamics determined by the presence of a coastline, and fall into two distinct categories: Eastern boundary currents are relatively shallow, broad and slow-flowing currents on the eastern side of oceanic basins along the western coasts of continents. 
Subtropical eastern boundary currents flow equatorward, transporting cold water from higher latitudes to lower latitudes; examples include the Benguela Current, the Canary Current, the Humboldt Current, and the California Current. Coastal upwelling caused by offshore flow due to Ekman transport where the prevailing wind parallels the shoreline brings nutrient-rich water into eastern boundary current regions, making them highly productive areas. Western boundary currents are warm, deep, narrow, and fast flowing currents that form on the west side of ocean basins due to western intensification. They carry warm water from the tropics poleward. Examples include the Gulf Stream, the Agulhas Current, and the Kuroshio. Western intensification is an effect on the western arm of an oceanic current, particularly a large gyre in an ocean basin. The trade winds blow westward in the tropics. The westerlies blow eastward at mid-latitudes. This applies a stress to the ocean surface with a curl in north and south hemispheres, causing Sverdrup transport toward the tropics. Conservation of mass and potential vorticity cause that transport to be balanced by a narrow, intense poleward current, which flows along the western coast, allowing the vorticity introduced by coastal friction to balance the vorticity input of the wind. The reverse effect applies to the polar gyres – the sign of the wind stress curl and the direction of the resulting currents are reversed. The principal west side currents (such as the Gulf Stream of the North Atlantic Ocean) are stronger than those opposite (such as the California Current of the North Pacific Ocean). === Wind driven upwelling === When the net Ekman transport along a coastline is offshore, a compensatory inflow is possible from below, which brings up bottom water, which tends to be nutrient rich as it comes from the poorly lit regions where photosynthesis is insignificant. Upwelling at the equator is associated with the Intertropical Convergence Zone (ITCZ) which moves seasonally, and consequently, is often located just north or south of the equator. Easterly trade winds blow from the Northeast and Southeast and converge along the equator blowing West to form the ITCZ. Although there are no Coriolis forces present along the equator, upwelling still occurs just north and south of the equator. This results in a divergence, with denser, nutrient-rich water being upwelled from below. === Oceanic downwelling === Downwelling occurs at anti-cyclonic places of the ocean where warm core rings cause surface convergence and push the surface water downwards, or wind drives the sea towards a coastline. Regions that have downwelling generally have lower productivity because the nutrients in the water column are utilized but are not resupplied by nutrient-rich water from deeper below the surface. 
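The depth-integrated Ekman transport that drives the upwelling and downwelling described above follows directly from the wind stress and the Coriolis parameter, with magnitude τ/(ρf) and a direction 90° to the right of the wind in the Northern Hemisphere. A minimal sketch is given below (Python); the bulk drag coefficient, wind speed, and latitude are assumed example values.

```python
import math

RHO_SEAWATER = 1025.0   # kg/m^3
OMEGA_EARTH = 7.292e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg):
    """f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA_EARTH * math.sin(math.radians(lat_deg))

def wind_stress(u10_ms, rho_air=1.22, drag_coeff=1.3e-3):
    """Bulk estimate of wind stress on the sea surface, tau = rho_a * Cd * U10^2 (N/m^2)."""
    return rho_air * drag_coeff * u10_ms**2

def ekman_transport(u10_ms, lat_deg):
    """Depth-integrated Ekman volume transport per metre of coastline, in m^2/s.

    Magnitude is tau / (rho * f); directed 90 degrees to the right of the wind
    in the Northern Hemisphere and to the left in the Southern Hemisphere.
    """
    tau = wind_stress(u10_ms)
    f = coriolis_parameter(lat_deg)
    return tau / (RHO_SEAWATER * abs(f))

# Assumed example: a 10 m/s alongshore wind at 40 degrees N.
print(f"Ekman transport: {ekman_transport(10.0, 40.0):.2f} m^2/s per metre of coast")
```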
== Oceanic wind driven currents == Western boundary Gulf Stream – Warm Atlantic Ocean current Agulhas Current – Western boundary current of the southwest Indian Ocean that flows down the east coast of Africa Kuroshio Current – North flowing ocean current on the west side of the North Pacific Ocean Eastern boundary Benguela Current – Ocean current in the South Atlantic Humboldt Current – Current of the Pacific Ocean California Current – Pacific Ocean current Equatorial North Equatorial Current – Current in the Pacific and Atlantic Oceans South Equatorial Current – Ocean current in the Pacific, Atlantic, and Indian Ocean Arctic Atlantic Canary Current – Wind-driven surface current that is part of the North Atlantic Gyre Pacific Southern Antarctic Circumpolar Current, also known as West Wind Drift – Ocean current that flows clockwise from west to east around Antarctica Oceanic gyres Beaufort Gyre – Wind-driven ocean current in the Arctic Ocean polar region Indian Ocean Gyre – Major oceanic gyre in the Indian Ocean North Atlantic Gyre – Major circular system of ocean currents North Pacific Gyre – Major circulating system of ocean currents Ross Gyre – Circulating system of ocean currents in the Ross Sea South Atlantic Gyre – Subtropical gyre in the south Atlantic Ocean South Pacific Gyre – Major circulating system of ocean currents Weddell Gyre – One of two gyres within the Southern Ocean == Lake currents == == Local and transient currents == Surface currents caused by local wind Upwellings driven by local and prevailing winds. == See also == Current (stream) – Flow of water in a natural watercourse due to gravity Downwelling – Process of accumulation and sinking of higher density material beneath lower density material Geostrophic current – Oceanic flow in which the pressure gradient force is balanced by the Coriolis effect Hydrothermal circulation – Circulation of water driven by heat exchange Ocean current – Directional mass flow of oceanic water Thermohaline circulation – Part of large-scale ocean circulation Upwelling – Oceanographic phenomenon of wind-driven motion of ocean water == References ==
Wikipedia/Wind_generated_current
In fluid dynamics, wind wave modeling describes the effort to depict the sea state and predict the evolution of the energy of wind waves using numerical techniques. These simulations consider atmospheric wind forcing, nonlinear wave interactions, and frictional dissipation, and they output statistics describing wave heights, periods, and propagation directions for regional seas or global oceans. Such wave hindcasts and wave forecasts are extremely important for commercial interests on the high seas. For example, the shipping industry requires guidance for operational planning and tactical seakeeping purposes. For the specific case of predicting wind wave statistics on the ocean, the term ocean surface wave model is used. Other applications, in particular coastal engineering, have led to the development of wind wave models specifically designed for coastal applications. == Historical overview == Early forecasts of the sea state were created manually based upon empirical relationships between the present state of the sea, the expected wind conditions, the fetch/duration, and the direction of the wave propagation. The swell part of the sea state was being forecast as early as 1920 using remote observations. During the 1950s and 1960s, much of the theoretical groundwork necessary for numerical descriptions of wave evolution was laid. For forecasting purposes, it was realized that the random nature of the sea state was best described by a spectral decomposition in which the energy of the waves was attributed to as many wave trains as necessary, each with a specific direction and period. This approach made it possible to produce combined forecasts of wind seas and swell. The first numerical model based on the spectral decomposition of the sea state was operated in 1956 by the French Weather Service, and focused on the North Atlantic. The 1970s saw the first operational, hemispheric wave model: the spectral ocean wave model (SOWM) at the Fleet Numerical Oceanography Center. First generation wave models did not consider nonlinear wave interactions. Second generation models, available by the early 1980s, parameterized these interactions. They included the "coupled hybrid" and "coupled discrete" formulations. Third generation models explicitly represent all the physics relevant for the development of the sea state in two dimensions. The wave modeling project (WAM), an international effort, led to the refinement of modern wave modeling techniques during the decade 1984-1994. Improvements included two-way coupling between wind and waves, assimilation of satellite wave data, and medium-range operational forecasting. Wind wave models are used in the context of a forecasting or hindcasting system. Differences in model results arise (in decreasing order of importance) from: differences in wind and sea ice forcing, differences in parameterizations of physical processes, the use of data assimilation and associated methods, and the numerical techniques used to solve the wave energy evolution equation. In the aftermath of World War II, the study of wave growth garnered significant attention. The global nature of the war, encompassing battles in the Pacific, Atlantic, and Mediterranean seas, necessitated the execution of landing operations on enemy-held coasts. Safe landing was paramount, given that choppy waters posed the danger of capsizing landing craft. 
Consequently, the precise forecasting of weather and wave conditions became essential, prompting the recruitment of meteorologists and oceanographers by the warring nations. During this period, both Japan and the United States embarked on wave prediction research. In the U.S., comprehensive studies were carried out at the Scripps Institution of Oceanography, affiliated with the University of California. Under the guidance of Harald Sverdrup, Walter Munk devised an avant-garde wave calculation methodology for the United States Navy and later refined this approach for the Office of Naval Research. This pioneering effort led to the creation of the significant wave method, which underwent subsequent refinements and data integrations. The method, in due course, came to be popularly referred to as the SMB method, an acronym derived from its founders Sverdrup, Munk, and Charles L. Bretschneider. Between 1950 and 1980, various formulae were proposed. Given that two-dimensional field models had not been formulated during that time, studies were initiated in the Netherlands by Rijkswaterstaat and the Technische Adviescommissie voor de Waterkeringen (TAW - Technical Advisory Committee for Flood Defences) to discern the most appropriate formula to compute wave height at the base of a dike. This work concluded that the 1973 Bretschneider formula was the most suitable. However, subsequent studies by Young and Verhagen in 1997 suggested that adjusting certain coefficients enhanced the formula's efficacy in shallow water regions. == General strategy == === Input === A wave model requires as initial conditions information describing the state of the sea. An analysis of the sea or ocean can be created through data assimilation, where observations such as buoy or satellite altimeter measurements are combined with a background guess from a previous forecast or climatology to create the best estimate of the ongoing conditions. In practice, many forecasting systems rely only on the previous forecast, without any assimilation of observations. A more critical input is the "forcing" by wind fields: a time-varying map of wind speeds and directions. The most common sources of errors in wave model results are errors in the wind field. Ocean currents can also be important, in particular in western boundary currents such as the Gulf Stream, Kuroshio or Agulhas Current, or in coastal areas where tidal currents are strong. Waves are also affected by sea ice and icebergs, and all operational global wave models take at least the sea ice into account. === Representation === The sea state is described as a spectrum; the sea surface can be decomposed into waves of varying frequencies using the principle of superposition. The waves are also separated by their direction of propagation. The model domain size can range from regional to the global ocean. Smaller domains can be nested within a global domain to provide higher resolution in a region of interest. The sea state evolves according to physical equations – based on a spectral representation of the conservation of wave action – which include: wave propagation / advection, refraction (by bathymetry and currents), shoaling, and a source function which allows for wave energy to be augmented or diminished. The source function has at least three terms: wind forcing, nonlinear transfer, and dissipation by whitecapping. Wind data are typically provided from a separate atmospheric model run at an operational weather forecasting center. 
For intermediate water depths the effect of bottom friction should also be added. At ocean scales, the dissipation of swells – without breaking – is a very important term. === Output === The output of a wind wave model is a description of the wave spectra, with amplitudes associated with each frequency and propagation direction. Results are typically summarized by the significant wave height, which is the average height of the one-third largest waves, and the period and propagation direction of the dominant wave. === Coupled models === Wind waves also act to modify atmospheric properties through frictional drag of near-surface winds and heat fluxes. Two-way coupled models allow the wave activity to feed back upon the atmosphere. The European Centre for Medium-Range Weather Forecasts (ECMWF) coupled atmosphere-wave forecast system described below facilitates this through exchange of the Charnock parameter, which controls the sea surface roughness. This allows the atmosphere to respond to changes in the surface roughness as the wind sea builds up or decays. == Examples == === WAVEWATCH === The operational wave forecasting systems at NOAA are based on the WAVEWATCH III model. This system has a global domain of approximately 50 km resolution, with nested regional domains for the northern hemisphere oceanic basins at approximately 18 km and approximately 7 km resolution. Physics includes wave field refraction, nonlinear resonant interactions, sub-grid representations of unresolved islands, and dynamically updated ice coverage. Wind data are provided by the GDAS data assimilation system for the GFS weather model. Up to 2008, the model was limited to regions outside the surf zone where the waves are not strongly impacted by shallow depths. The model has been able to incorporate the effects of currents on waves since its early design by Hendrik Tolman in the 1990s, and has since been extended for nearshore applications. === WAM === The wave model WAM was the first so-called third generation prognostic wave model, in which the two-dimensional wave spectrum was allowed to evolve freely (up to a cut-off frequency) with no constraints on the spectral shape. The model underwent a series of software updates from its inception in the late 1980s. The last official release is Cycle 4.5, maintained by the Helmholtz-Zentrum Geesthacht in Germany. ECMWF has incorporated WAM into its deterministic and ensemble forecasting system, known as the Integrated Forecast System (IFS). The model currently comprises 36 frequency bins and 36 propagation directions at an average spatial resolution of 25 km. The model has been coupled to the atmospheric component of IFS since 1998. === Other models === Wind wave forecasts are issued regionally by Environment Canada. Regional wave predictions are also produced by universities, such as Texas A&M University's use of the SWAN model (developed by Delft University of Technology) to forecast waves in the Gulf of Mexico. Another model, CCHE2D-COAST, is a process-based integrated model capable of simulating coastal processes along coasts with complex shorelines, including irregular wave deformation from offshore to onshore, nearshore currents induced by radiation stresses, wave set-up, wave set-down, sediment transport, and seabed morphological changes. Other wind wave models include the U.S. Navy Standard Surf Model (NSSM). 
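The spectral output described above can be connected to the reported significant wave height with a short numerical sketch. This is a stand-alone illustration, not code from any of the models named here: it assumes a Pierson–Moskowitz-type frequency spectrum parameterized directly by a chosen significant wave height and peak frequency, integrates the spectral moments, and recovers the significant wave height as four times the square root of the zeroth moment.

```python
import numpy as np

def pm_spectrum(f, hs, fp):
    """Pierson-Moskowitz-type frequency spectrum (m^2/Hz) parameterized by a
    target significant wave height hs (m) and peak frequency fp (Hz)."""
    return (5.0 / 16.0) * hs**2 * fp**4 * f**-5 * np.exp(-1.25 * (fp / f) ** 4)

# Assumed, illustrative sea state: 3 m significant wave height, 10 s peak period.
hs_target = 3.0
fp = 1.0 / 10.0

f = np.linspace(0.01, 1.0, 20000)      # frequency grid, Hz
S = pm_spectrum(f, hs_target, fp)

m0 = np.trapz(S, f)                    # zeroth moment: variance of the surface elevation
m1 = np.trapz(f * S, f)                # first spectral moment

hs_from_spectrum = 4.0 * np.sqrt(m0)   # significant wave height, Hs = 4 sqrt(m0)
tm01 = m0 / m1                         # mean wave period from spectral moments

print(f"Hs recovered from the spectrum: {hs_from_spectrum:.2f} m (target {hs_target} m)")
print(f"Mean period Tm01: {tm01:.1f} s (peak period {1 / fp:.0f} s)")
```

Operational models carry such a spectrum, discretized in frequency and direction, at every grid point and time step, and the summary statistics listed in the Output section are computed from its moments in essentially this way.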
== The formulae of Bretschneider, Wilson, and Young & Verhagen ==
For determining wave growth in deep waters subjected to prolonged fetch, the basic formula set is:

$\dfrac{gH_s}{u_w^2} = 0.283$

$\dfrac{gT_s}{u_w} = 7.54$

Where:
$g$ = gravitational acceleration (m/s²)
$H_s$ = significant wave height (m)
$T_s$ = significant wave period (s)
$u_w$ = wind speed (m/s)

The constants in these formulas are deduced from empirical data. Factoring in water depth, wind fetch, and storm duration complicates the equations considerably. However, the application of dimensionless values facilitates the identification of patterns for all these variables. The dimensionless parameters employed are:

$\widehat{H} = gH_s/u_w^2$, $\widehat{T} = gT_s/u_w$, $\widehat{d} = gd/u_w^2$, $\widehat{F} = gF/u_w^2$, $\widehat{t} = gt/u_w$

Where:
$d$ = water depth (m)
$F$ = wind fetch (m)
$t$ = storm duration (s)

When plotted against the dimensionless wind fetch, both the dimensionless wave height and wave period initially tend to grow linearly, but this trend flattens notably for longer dimensionless wind fetches. Various researchers have endeavoured to formulate equations capturing this observed behaviour.

=== Common Formulas for Deep Water ===
Bretschneider (1952, 1977):

$\widehat{H} = 0.283\tanh\left(0.0125\,\widehat{F}^{\,0.42}\right)$
$\widehat{T} = 7.54\tanh\left(0.077\,\widehat{F}^{\,0.25}\right)$

Wilson (1965):

$\widehat{H} = 0.30\left\{1-\left[1+0.004\,\widehat{F}^{1/2}\right]^{-2}\right\}$
$\widehat{T} = 1.37\left\{1-\left[1+0.008\,\widehat{F}^{1/3}\right]^{-5}\right\}$

In the Netherlands, a formula devised by Groen & Dorrestein (1976) is also in common use:

$\widehat{H} = 0.24\tanh\left(0.015\,\widehat{F}^{\,0.45}\right)$ for $\widehat{F} > 10$
$\widehat{T} = 2\pi\tanh\left(0.0345\,\widehat{F}^{\,0.37}\right)$ for $\widehat{F} > 400$
$\widehat{T} = 0.502\,\widehat{F}^{\,0.225}$ for $10 < \widehat{F} < 400$

During periods when programmable computers were not commonly available, these formulas were cumbersome to use. Consequently, for practical applications, nomograms were developed which did away with dimensionless units, instead presenting wave heights in metres, storm duration in hours, and the wind fetch in km. Integrating the water depth into the same chart was problematic, as it introduced too many input parameters. Therefore, during the primary usage of nomograms, separate nomograms were crafted for distinct depths. The use of computers has resulted in reduced reliance on nomograms. For deep water, the distinctions between the various formulas are subtle. However, for shallow water, the formula modified by Young & Verhagen proves more suitable. 
It is defined as:

$\widehat{H} = 0.241\left(\tanh A_H \,\tanh\dfrac{B_H}{\tanh A_H}\right)^{0.87}$

with $A_H = 0.493\,\widehat{d}^{\,0.75}$ and $B_H = 0.00313\,\widehat{F}^{\,0.57}$, and

$\widehat{T} = 7.519\left(\tanh A_T \,\tanh\dfrac{B_T}{\tanh A_T}\right)^{0.387}$

with $A_T = 0.331\,\widehat{d}^{\,1.01}$ and $B_T = 0.0005215\,\widehat{F}^{\,0.73}$.

Research by Bart demonstrated that, under Dutch conditions (for example, in the IJsselmeer), this formula is reliable.
==== Example: Lake Garda ====
Lake Garda in Italy is a deep, elongated lake, measuring about 350 m in depth and spanning 45 km in length. With a wind speed of 25 m/s from the SSW, the Bretschneider and Wilson formulas suggest an $H_s$ of 3.5 m and a period of roughly 7 s (assuming the storm persists for at least 4 hours). The Young and Verhagen formula, however, predicts a lower wave height of 2.6 m. This lower result is attributed to the formula's calibration for shallow water, whereas Lake Garda is notably deep.
===== Bretschneider Formula: Lake Garda =====
Based on Bretschneider's formula: predicted wave height 3.54 metres; predicted wave period 7.02 seconds.
===== Wilson Formula: Lake Garda =====
Using Wilson's formula, the predictions are: predicted wave height 3.56 metres; predicted wave period 7.01 seconds.
===== Young & Verhagen Formula: Lake Garda =====
Young & Verhagen's formula, which typically applies to shallow water, yields: predicted wave height 2.63 metres; predicted wave period 6.89 seconds.
== Shallow and coastal waters ==
Global wind wave models such as WAVEWATCH and WAM are not reliable in shallow water areas near the coast. To address this issue, the SWAN (Simulating WAves Nearshore) program was developed in 1993 by Delft University of Technology, in collaboration with Rijkswaterstaat and the Office of Naval Research in the United States. Initially, the main focus of this development was on wave changes due to the effects of breaking, refraction, and the like. The program was subsequently developed to include analysis of wave growth. SWAN essentially calculates the energy of a wave field (in the form of a wave spectrum) and derives the significant wave height from this spectrum. SWAN lacks a user interface for easily creating input files and presenting the output. The program is open-source, and many institutions and companies have since developed their own user environments for SWAN. The program has become a global standard for such calculations, and can be used in both one-dimensional and two-dimensional modes.
=== One-dimensional approach ===
The computation time for a calculation with SWAN is on the order of seconds. In one-dimensional mode, results are obtained from the input of a cross-sectional profile and wind information. In many cases, this can yield a sufficiently reliable value for the local wave spectrum, particularly when the wind path crosses shallow areas.
==== Example: wave growth calculation in The Netherlands ====
As an example, a calculation of the wave growth in the Westerschelde has been made. For this example, the one-dimensional version of SWAN and the open-source user interface SwanOne were used. 
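As a quick numerical check of the parametric formulas above, the following sketch (an illustration only, not an extract from SWAN or any other model) evaluates the Bretschneider and Young & Verhagen expressions for the Lake Garda case: a 25 m/s wind, a fetch of 45 km, and a depth of 350 m.

```python
import numpy as np

g = 9.81          # gravitational acceleration, m/s^2
u_w = 25.0        # wind speed, m/s (Lake Garda example)
F = 45_000.0      # fetch, m
d = 350.0         # water depth, m

F_hat = g * F / u_w**2        # dimensionless fetch
d_hat = g * d / u_w**2        # dimensionless depth

# Bretschneider (deep water)
H_hat_b = 0.283 * np.tanh(0.0125 * F_hat**0.42)
T_hat_b = 7.54 * np.tanh(0.077 * F_hat**0.25)

# Young & Verhagen (depth-limited)
A_H = 0.493 * d_hat**0.75
B_H = 0.00313 * F_hat**0.57
H_hat_yv = 0.241 * (np.tanh(A_H) * np.tanh(B_H / np.tanh(A_H)))**0.87

A_T = 0.331 * d_hat**1.01
B_T = 0.0005215 * F_hat**0.73
T_hat_yv = 7.519 * (np.tanh(A_T) * np.tanh(B_T / np.tanh(A_T)))**0.387

print(f"Bretschneider:    Hs = {H_hat_b * u_w**2 / g:.2f} m, Ts = {T_hat_b * u_w / g:.2f} s")
print(f"Young & Verhagen: Hs = {H_hat_yv * u_w**2 / g:.2f} m, Ts = {T_hat_yv * u_w / g:.2f} s")
```

The wave heights come out at about 3.5 m and 2.6 m, matching the Lake Garda figures quoted above; the periods come out near 7 s, with small differences from the quoted values plausibly due to rounding and to the four-hour duration limit assumed in that example. The SWAN-based Westerschelde calculation introduced above is described next.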
The wave height at the base of the sea dike near Goudorpe on South Beveland, just west of the Westerscheldetunnel, was calculated, with the wind coming from the SW at a speed of 25 m/s (force 9 to 10). In the graph, this is from left to right. The dike is quite far from deep water, with a salt marsh in front of it. The calculation was made for low water, the average water level, and high water. At high tide, the salt marsh is under water; at low tide, it is exposed (the tidal range here is about 5 metres). At high tide, there is a constant increase in wave height, which is faster in deep water than in shallow water. At low tide, some of the sandbanks are dry, and wave growth has to start all over again. Close to the shore (beyond the Gat van Borssele), there is a high salt marsh; at low tide, there are no waves there; at the average water level, the wave height decreases to almost nothing at the dike; and at high tide, a wave height of 1 m is still present. The measure of period shown in these graphs is the spectral period ($T_{m-1,0}$). === Two-dimensional approach === In situations where significant refraction occurs, or where the coastline is irregular, the one-dimensional method falls short, necessitating the use of a field model. Even in a relatively rectangular lake like Lake Garda, a two-dimensional calculation provides considerably more information, especially in its southern regions. The figure below demonstrates the results of such a calculation. This case highlights another limitation of the one-dimensional approach: at certain points, the actual wave growth is less than predicted by the one-dimensional model. This discrepancy arises because the model assumes a broad wave field, which is not the case for narrow lakes. == Validation == Comparison of wave model forecasts with observations is essential for characterizing model deficiencies and identifying areas for improvement. In-situ observations are obtained from buoys, ships and oil platforms. Altimetry data from satellites, such as GEOSAT and TOPEX, can also be used to infer the characteristics of wind waves. Hindcasts of wave models during extreme conditions also serve as a useful test bed for the models. == Reanalyses == A retrospective analysis, or reanalysis, combines all available observations with a physical model to describe the state of a system over a time period of decades. Wind waves are a part of both the NCEP Reanalysis and the ERA-40 from the ECMWF. Such resources permit the creation of monthly wave climatologies, and can track the variation of wave activity on interannual and multi-decadal time scales. During the northern hemisphere winter, the most intense wave activity is located in the central North Pacific south of the Aleutians, and in the central North Atlantic south of Iceland. During the southern hemisphere winter, intense wave activity circumscribes the pole at around 50°S, with 5 m significant wave heights typical in the southern Indian Ocean. == References ==
Wikipedia/Wind_wave_model
Ocean thermal energy conversion (OTEC) is a renewable energy technology that harnesses the temperature difference between the warm surface waters of the ocean and the cold depths to run a heat engine to produce electricity. It is a form of clean energy generation that, despite remaining technical and economic challenges, has the potential to provide a consistent and sustainable source of power, particularly in tropical regions with access to deep ocean water. == Description == OTEC uses the ocean thermal gradient between cooler deep and warmer shallow or surface seawaters to run a heat engine and produce useful work, usually in the form of electricity. OTEC can operate with a very high capacity factor and so can run in base load mode. The denser cold water masses, formed by the interaction of ocean surface water with the cold atmosphere in quite specific areas of the North Atlantic and the Southern Ocean, sink into the deep sea basins and spread through the entire deep ocean via the thermohaline circulation. Upwelling of cold water from the deep ocean is replenished by the downwelling of cold surface sea water. Among ocean energy sources, OTEC is one of the continuously available renewable energy resources that could contribute to base-load power supply. The resource potential for OTEC is considered to be much larger than for other ocean energy forms. Up to 10,000 TWh/yr of power could be generated from OTEC without affecting the ocean's thermal structure. Systems may be either closed-cycle or open-cycle. Closed-cycle OTEC uses working fluids that are typically thought of as refrigerants, such as ammonia or R-134a. These fluids have low boiling points and are therefore suitable for powering the system's generator to produce electricity. The most commonly used heat cycle for OTEC to date is the Rankine cycle, using a low-pressure turbine. Open-cycle engines use vapor from the seawater itself as the working fluid. OTEC can also supply quantities of cold water as a by-product. This can be used for air conditioning and refrigeration, and the nutrient-rich deep ocean water can feed biological technologies. Another by-product is fresh water distilled from the sea. OTEC theory was first developed in the 1880s and the first bench-size demonstration model was constructed in 1926. Currently operating pilot-scale OTEC plants are located in Japan, overseen by Saga University, and in Hawaii, operated by Makai. == History == Attempts to develop and refine OTEC technology started in the 1880s. In 1881, Jacques-Arsène d'Arsonval, a French physicist, proposed tapping the thermal energy of the ocean. D'Arsonval's student, Georges Claude, built the first OTEC plant in Matanzas, Cuba, in 1930. The system generated 22 kW of electricity with a low-pressure turbine. The plant was later destroyed in a storm. In 1935, Claude constructed a plant aboard a 10,000-ton cargo vessel moored off the coast of Brazil. Weather and waves destroyed it before it could generate net power. (Net power is the amount of power generated after subtracting the power needed to run the system.) In 1956, French scientists designed a 3 MW plant for Abidjan, Ivory Coast. The plant was never completed, because new finds of large amounts of cheap petroleum made it uneconomical. In 1962, J. Hilbert Anderson and James H. Anderson, Jr. focused on increasing component efficiency. They patented their new "closed cycle" design in 1967. 
This design improved upon the original closed-cycle Rankine system, and the Andersons included it in an outline for a plant that would produce power at lower cost than oil or coal. At the time, however, their research garnered little attention since coal and nuclear were considered the future of energy. Japan is a major contributor to the development of OTEC technology. Beginning in 1970, the Tokyo Electric Power Company successfully built and deployed a 100 kW closed-cycle OTEC plant on the island of Nauru. The plant became operational on 14 October 1981, producing about 120 kW of electricity; 90 kW was used to power the plant and the remaining electricity was used to power a school and other places. This set a world record for power output from an OTEC system where the power was sent to a real (as opposed to an experimental) power grid. The year 1981 also saw a major development in OTEC technology when the Russian engineer Dr. Alexander Kalina used a mixture of ammonia and water to produce electricity. This new ammonia-water mixture greatly improved the efficiency of the power cycle. In 1994, the Institute of Ocean Energy at Saga University designed and constructed a 4.5 kW plant for the purpose of testing the newly invented Uehara cycle, named after its inventor Haruo Uehara. This cycle included absorption and extraction processes that allow this system to outperform the Kalina cycle by 1–2%. The 1970s saw an uptick in OTEC research and development after the 1973 Arab–Israeli War, which caused oil prices to triple. The U.S. federal government poured $260 million into OTEC research after President Carter signed a law that committed the US to a production goal of 10,000 MW of electricity from OTEC systems by 1999. In 1974, the U.S. established the Natural Energy Laboratory of Hawaii Authority (NELHA) at Keahole Point on the Kona coast of Hawaii. Hawaii is the best US OTEC location, due to its warm surface water, access to very deep, very cold water, and high electricity costs. The laboratory has become a leading test facility for OTEC technology. In the same year, Lockheed received a grant from the U.S. National Science Foundation to study OTEC. This eventually led to an effort by Lockheed, the US Navy, Makai Ocean Engineering, Dillingham Construction, and other firms to build the world's first and only net-power producing OTEC plant, dubbed "Mini-OTEC". For three months in 1979, a small amount of electricity was generated. NELHA operated a 250 kW demonstration plant for six years in the 1990s. With funding from the United States Navy, a 105 kW plant at the site began supplying energy to the local power grid in 2015. A European initiative, EUROCEAN - a privately funded joint venture of 9 European companies already active in offshore engineering - was active in promoting OTEC from 1979 to 1983. Initially a large-scale offshore facility was studied. Later a 100 kW land-based installation was studied, combining land-based OTEC with desalination and aquaculture, nicknamed ODA. This was based on the results from a small-scale aquaculture facility on the island of St Croix that used a deep-water supply line to feed the aquaculture basins. A shore-based open-cycle plant was also investigated. The location chosen for the case study was Curaçao, an island within the Dutch Kingdom. Research related to making open-cycle OTEC a reality began in earnest in 1979 at the Solar Energy Research Institute (SERI) with funding from the US Department of Energy. 
Evaporators and suitably configured direct-contact condensers were developed and patented by SERI. An original design for a power-producing experiment, then called the 165-kW experiment, was described by Kreith and Bharathan, including in the Max Jakob Memorial Award Lecture. The initial design used two parallel axial turbines, using last stage rotors taken from large steam turbines. Later, a team led by Dr. Bharathan at the National Renewable Energy Laboratory (NREL) developed the initial conceptual design for an updated 210 kW open-cycle OTEC experiment. This design integrated all components of the cycle, namely the evaporator, condenser and the turbine, into one single vacuum vessel, with the turbine mounted on top to prevent any potential for water to reach it. The vessel was made of concrete as the first process vacuum vessel of its kind. Attempts to make all components from low-cost plastic materials could not be fully realized, as some conservatism was required for the turbine and the vacuum pumps, which were developed as the first of their kind. Later Dr. Bharathan worked with a team of engineers at the Pacific Institute for High Technology Research (PICHTR) to further pursue this design through preliminary and final stages. It was renamed the Net Power Producing Experiment (NPPE) and was constructed at the Natural Energy Laboratory of Hawaii (NELH) by PICHTR, with a team led by Chief Engineer Don Evans; the project was managed by Dr. Luis Vega. In 2002, India tested a 1 MW floating OTEC pilot plant near Tamil Nadu. The plant was ultimately unsuccessful due to a failure of the deep sea cold water pipe. Its government continues to sponsor research. In 2006, Makai Ocean Engineering was awarded a contract from the U.S. Office of Naval Research (ONR) to investigate the potential for OTEC to produce nationally significant quantities of hydrogen in at-sea floating plants located in warm, tropical waters. Realizing the need for larger partners to actually commercialize OTEC, Makai approached Lockheed Martin to renew their previous relationship and determine if the time was ready for OTEC. In 2007, Lockheed Martin resumed work in OTEC and became a subcontractor to Makai to support their SBIR, which was followed by further collaborations. In March 2011, Ocean Thermal Energy Corporation signed an Energy Services Agreement (ESA) with the Baha Mar resort, Nassau, Bahamas, for the world's first and largest seawater air conditioning (SWAC) system. In June 2015, the project was put on pause while the resort resolved financial and ownership issues. In August 2016, it was announced that the issues had been resolved and that the resort would open in March 2017. It is expected that the SWAC system's construction will resume at that time. In July 2011, Makai Ocean Engineering completed the design and construction of an OTEC Heat Exchanger Test Facility at the Natural Energy Laboratory of Hawaii. The purpose of the facility is to arrive at an optimal design for OTEC heat exchangers, increasing performance and useful life while reducing cost (heat exchangers being the #1 cost driver for an OTEC plant). In March 2013, Makai announced an award to install and operate a 100 kilowatt turbine on the OTEC Heat Exchanger Test Facility, and once again connect OTEC power to the grid. In July 2016, the Virgin Islands Public Services Commission approved Ocean Thermal Energy Corporation's application to become a Qualified Facility. 
The company is thus permitted to begin negotiations with the Virgin Islands Water and Power Authority (WAPA) for a Power Purchase Agreement (PPA) pertaining to an Ocean Thermal Energy Conversion (OTEC) plant on the island of St. Croix. This would be the world's first commercial OTEC plant. A project is set to be installed in the African country of São Tomé and Príncipe, which will be the first commercial-scale floating OTEC platform in the world. Developed by Global OTEC, the structure, named Dominique, will generate 1.5 MW, with subsequent barges being installed to help supply the full demand of the country. In 2022, an MoU was signed between the government and British startup Global OTEC. == Currently operating OTEC plants == In March 2013, Saga University with various Japanese industries completed the installation of a new OTEC plant. Okinawa Prefecture announced the start of OTEC operation testing at Kume Island on April 15, 2013. The main aim is to prove the validity of computer models and demonstrate OTEC to the public. The testing and research will be conducted with the support of Saga University until the end of FY 2016. IHI Plant Construction Co. Ltd, Yokogawa Electric Corporation, and Xenesys Inc were entrusted with constructing the 100 kilowatt-class plant within the grounds of the Okinawa Prefecture Deep Sea Water Research Center. The location was specifically chosen in order to utilize existing deep seawater and surface seawater intake pipes installed for the research center in 2000. The pipe is used for the intake of deep sea water for research, fishery, and agricultural use. The plant consists of two 50 kW units in double Rankine configuration. The OTEC facility and deep seawater research center are open to free public tours by appointment in English and Japanese. Currently, this is one of only two fully operational OTEC plants in the world. This plant operates continuously when specific tests are not underway. In 2011, Makai Ocean Engineering completed a heat exchanger test facility at NELHA. The facility is used to test a variety of heat exchange technologies for use in OTEC, and Makai has received funding to install a 105 kW turbine there. Installation will make this facility the largest operational OTEC facility, though the record for largest power will remain with the Open Cycle plant also developed in Hawaii. In July 2014, DCNS group, in partnership with Akuo Energy, announced NER 300 funding for their NEMO project. Had the project been successful, the 16 MW gross, 10 MW net offshore plant would have been the largest OTEC facility to date. DCNS planned to have NEMO operational by 2020. Early in April 2018, Naval Energies shut down the project indefinitely due to technical difficulties relating to the main cold-water intake pipe. An ocean thermal energy conversion power plant built by Makai Ocean Engineering went operational in Hawaii in August 2015. The governor of Hawaii, David Ige, "flipped the switch" to activate the plant. This is the first true closed-cycle Ocean Thermal Energy Conversion (OTEC) plant to be connected to a U.S. electrical grid. It is a demo plant capable of generating 105 kilowatts, enough to power about 120 homes. == Thermodynamic efficiency == A heat engine gives greater efficiency when run with a large temperature difference. In the oceans the temperature difference between surface and deep water is greatest in the tropics, although still a modest 20 to 25 °C. It is therefore in the tropics that OTEC offers the greatest possibilities. 
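The limit implied by such a small temperature difference can be made concrete with a short Carnot calculation. The sketch below is a simple illustration; the 25 °C surface and 5 °C deep-water temperatures are typical assumed values rather than measurements from a particular site.

```python
# Carnot (ideal) efficiency for an OTEC-like temperature difference.
T_warm = 25.0 + 273.15   # assumed warm surface water, K
T_cold = 5.0 + 273.15    # assumed cold deep water, K

eta_carnot = 1.0 - T_cold / T_warm
print(f"Carnot limit: {eta_carnot:.1%}")   # about 6.7%

# Real plants reach only a fraction of this limit. At, say, 2% overall thermal
# efficiency, the electricity obtained per kilogram of warm water (assumed here
# to be cooled by about 3 K in the evaporator) is tiny, which is why the flow
# rates of an OTEC plant must be so large.
c_p = 4.0e3          # approximate specific heat of seawater, J/(kg K)
dT_evaporator = 3.0  # assumed cooling of the warm stream, K
work_per_kg = 0.02 * c_p * dT_evaporator
print(f"Electrical work per kg of warm seawater at 2% efficiency: {work_per_kg:.0f} J")
```

With a 20 K difference the Carnot limit is only about 6.7%, which is consistent with the efficiency figures discussed below.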
OTEC has the potential to offer global amounts of energy that are 10 to 100 times greater than other ocean energy options such as wave power. OTEC plants can operate continuously, providing a base load supply for an electrical power generation system. The main technical challenge of OTEC is to generate significant amounts of power efficiently from small temperature differences. It is still considered an emerging technology. Early OTEC systems had a thermal efficiency of 1 to 3 percent, well below the theoretical maximum of between 6 and 7 percent for this temperature difference. Modern designs allow performance approaching the theoretical maximum Carnot efficiency. == Power cycle types == Cold seawater is an integral part of each of the three types of OTEC systems: closed-cycle, open-cycle, and hybrid. To operate, the cold seawater must be brought to the surface. The primary approaches are active pumping and desalination. Desalinating seawater near the sea floor lowers its density, which causes it to rise to the surface. The alternative to costly pipes to bring condensing cold water to the surface is to pump vaporized low boiling point fluid into the depths to be condensed, thus reducing pumping volumes, technical and environmental problems, and costs. === Closed === Closed-cycle systems use fluid with a low boiling point, such as ammonia (having a boiling point around -33 °C at atmospheric pressure), to power a turbine to generate electricity. Warm surface seawater is pumped through a heat exchanger to vaporize the fluid. The expanding vapor turns the turbo-generator. Cold water, pumped through a second heat exchanger, condenses the vapor into a liquid, which is then recycled through the system. In 1979, the Natural Energy Laboratory and several private-sector partners developed the "mini OTEC" experiment, which achieved the first successful at-sea production of net electrical power from closed-cycle OTEC. The mini OTEC vessel was moored 1.5 miles (2.4 km) off the Hawaiian coast and produced enough net electricity to illuminate the ship's light bulbs and run its computers and television. === Open === Open-cycle OTEC uses warm surface water directly to make electricity. The warm seawater is first pumped into a low-pressure container, which causes it to boil. In some schemes, the expanding vapor drives a low-pressure turbine attached to an electrical generator. The vapor, which has left its salt and other contaminants in the low-pressure container, is pure fresh water. It is condensed into a liquid by exposure to cold temperatures from deep-ocean water. This method produces desalinized fresh water, suitable for drinking water, irrigation or aquaculture. In other schemes, the rising vapor is used in a gas lift technique of lifting water to significant heights. Depending on the embodiment, such vapor lift pump techniques generate power from a hydroelectric turbine either before or after the pump is used. In 1984, the Solar Energy Research Institute (now known as the National Renewable Energy Laboratory) developed a vertical-spout evaporator to convert warm seawater into low-pressure steam for open-cycle plants. Conversion efficiencies were as high as 97% for seawater-to-steam conversion (the steam produced is only a small percentage of the incoming water). In May 1993, an open-cycle OTEC plant at Keahole Point, Hawaii, produced close to 80 kW of electricity during a net power-producing experiment. This broke the record of 40 kW set by a Japanese system in 1982. 
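A simple energy balance shows why only a small fraction of the warm water flashes to steam in the open cycle described above. The sketch below is an approximation; the inlet temperature, the flash-chamber saturation temperature, and the property values are assumed round figures.

```python
# Rough flash-evaporation fraction for an open-cycle OTEC evaporator.
# Warm seawater entering a vacuum chamber boils and cools until it reaches the
# saturation temperature set by the chamber pressure; the sensible heat given up
# by the liquid supplies the latent heat of the steam produced.
c_p = 4.0e3        # specific heat of seawater, J/(kg K) (approximate)
h_fg = 2.44e6      # latent heat of vaporization near 25 C, J/kg (approximate)

T_in = 26.0        # assumed warm water inlet temperature, C
T_sat = 21.0       # assumed saturation temperature in the flash chamber, C

flash_fraction = c_p * (T_in - T_sat) / h_fg
print(f"Mass fraction flashed to steam: {flash_fraction:.1%}")   # under one percent
```

Under these assumptions, less than one percent of the warm water becomes steam; the rest is returned to the sea after giving up a few degrees of sensible heat.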
=== Hybrid === A hybrid cycle combines the features of the closed- and open-cycle systems. In a hybrid, warm seawater enters a vacuum chamber and is flash-evaporated, similar to the open-cycle evaporation process. The steam vaporizes the ammonia working fluid of a closed-cycle loop on the other side of an ammonia vaporizer. The vaporized fluid then drives a turbine to produce electricity. The steam condenses within the heat exchanger and provides desalinated water (see heat pipe). === Working fluids === A popular choice of working fluid is ammonia, which has superior transport properties, easy availability, and low cost. Ammonia, however, is toxic and flammable. Fluorinated carbons such as CFCs and HCFCs are not toxic or flammable, but they contribute to ozone layer depletion. Hydrocarbons are also good candidates, but they are highly flammable; in addition, using them in OTEC plants would create competition with their direct use as fuels. The power plant size is dependent upon the vapor pressure of the working fluid. With increasing vapor pressure, the size of the turbine and heat exchangers decreases, while the wall thickness of the piping and heat exchangers must increase to withstand the higher pressure, especially on the evaporator side. == Land, shelf and floating sites == OTEC has the potential to produce gigawatts of electrical power, and in conjunction with electrolysis, could produce enough hydrogen to completely replace all projected global fossil fuel consumption. Reducing costs remains an unsolved challenge, however. OTEC plants require a long, large diameter intake pipe, which is submerged a kilometer or more into the ocean's depths, to bring cold water to the surface. === Land-based === Land-based and near-shore facilities offer three main advantages over those located in deep water. Plants constructed on or near land do not require sophisticated mooring, lengthy power cables, or the more extensive maintenance associated with open-ocean environments. They can be installed in sheltered areas so that they are relatively safe from storms and heavy seas. Electricity, desalinated water, and cold, nutrient-rich seawater could be transmitted from near-shore facilities via trestle bridges or causeways. In addition, land-based or near-shore sites allow plants to operate with related industries such as mariculture or those that require desalinated water. Favored locations include those with narrow shelves (volcanic islands), steep (15–20 degrees) offshore slopes, and relatively smooth sea floors. These sites minimize the length of the intake pipe. A land-based plant could be built well inland from the shore, offering more protection from storms, or on the beach, where the pipes would be shorter. In either case, easy access for construction and operation helps lower costs. Land-based or near-shore sites can also support mariculture or chilled water agriculture. Tanks or lagoons built on shore allow workers to monitor and control miniature marine environments. Mariculture products can be delivered to market via standard transport. One disadvantage of land-based facilities arises from the turbulent wave action in the surf zone. OTEC discharge pipes should be placed in protective trenches to prevent subjecting them to extreme stress during storms and prolonged periods of heavy seas. Also, the mixed discharge of cold and warm seawater may need to be carried several hundred meters offshore to reach the proper depth before it is released, requiring additional expense in construction and maintenance. 
One way that OTEC systems can avoid some of the problems and expenses of operating in a surf zone is by building them just offshore in waters ranging from 10 to 30 meters deep (Ocean Thermal Corporation 1984). This type of plant would use shorter (and therefore less costly) intake and discharge pipes, which would avoid the dangers of turbulent surf. The plant itself, however, would require protection from the marine environment, such as breakwaters and erosion-resistant foundations, and the plant output would need to be transmitted to shore. === Shelf based === To avoid the turbulent surf zone as well as to move closer to the cold-water resource, OTEC plants can be mounted to the continental shelf at depths up to 100 meters (330 ft). A shelf-mounted plant could be towed to the site and affixed to the sea bottom. This type of construction is already used for offshore oil rigs. The complexities of operating an OTEC plant in deeper water may make them more expensive than land-based approaches. Problems include the stress of open-ocean conditions and more difficult product delivery. Addressing strong ocean currents and large waves adds engineering and construction expense. Platforms require extensive pilings to maintain a stable base. Power delivery can require long underwater cables to reach land. For these reasons, shelf-mounted plants are less attractive. === Floating === Floating OTEC facilities operate off-shore. Although potentially optimal for large systems, floating facilities present several difficulties. The difficulty of mooring plants in very deep water complicates power delivery. Cables attached to floating platforms are more susceptible to damage, especially during storms. Cables at depths greater than 1000 meters are difficult to maintain and repair. Riser cables, which connect the sea bed and the plant, need to be constructed to resist entanglement. As with shelf-mounted plants, floating plants need a stable base for continuous operation. Major storms and heavy seas can break the vertically suspended cold-water pipe and interrupt warm water intake as well. To help prevent these problems, pipes can be made of flexible polyethylene attached to the bottom of the platform and gimballed with joints or collars. Pipes may need to be uncoupled from the plant to prevent storm damage. As an alternative to a warm-water pipe, surface water can be drawn directly into the platform; however, it is necessary to prevent the intake flow from being damaged or interrupted during violent motions caused by heavy seas. Connecting a floating plant to power delivery cables requires the plant to remain relatively stationary. Mooring is an acceptable method, but current mooring technology is limited to depths of about 2,000 meters (6,600 ft). Even at shallower depths, the cost of mooring may be prohibitive. == Political concerns == Because OTEC facilities are more-or-less stationary surface platforms, their exact location and legal status may be affected by the United Nations Convention on the Law of the Sea treaty (UNCLOS). This treaty grants coastal nations 12-and-200-nautical-mile (22 and 370 km) zones of varying legal authority from land, creating potential conflicts and regulatory barriers. OTEC plants and similar structures would be considered artificial islands under the treaty, giving them no independent legal status. OTEC plants could be perceived as either a threat or potential partner to fisheries or to seabed mining operations controlled by the International Seabed Authority. 
== Cost and economics == Because OTEC systems have not yet been widely deployed, cost estimates are uncertain. A 2010 study by the University of Hawaii estimated the cost of electricity for OTEC at 94.0 cents per kilowatt hour (kWh) for a 1.4 MW plant, 44.0 cents per kWh for a 10 MW plant, and 18.0 cents per kWh for a 100 MW plant. A 2015 report by the organization Ocean Energy Systems under the International Energy Agency gave an estimate of about 20.0 cents per kWh for 100 MW plants. Another study estimated power generation costs as low as 7.0 cents per kWh. For comparison with other energy sources, a 2019 study by Lazard estimated the unsubsidized cost of electricity at 3.2 to 4.2 cents per kWh for utility-scale solar PV and 2.8 to 5.4 cents per kWh for wind power. A report published by IRENA in 2014 claimed that commercial use of OTEC technology can be scaled in a variety of ways. “...small-scale OTEC plants can be made to accommodate the electricity production of small communities (5,000–50,000 residents), but would require the production of valuable by-products – like fresh water or cooling – to be economically viable”. Larger-scale OTEC plants would have much higher overhead and installation costs. Beneficial factors that should be taken into account include OTEC's lack of waste products and fuel consumption, the area in which it is available (often within 20° of the equator), the geopolitical effects of petroleum dependence, compatibility with alternate forms of ocean power such as wave energy, tidal energy and methane hydrates, and supplemental uses for the seawater. == Some proposed projects == OTEC projects under consideration include a small plant for the U.S. Navy base on the British overseas territory island of Diego Garcia in the Indian Ocean. Ocean Thermal Energy Corporation (formerly OCEES International, Inc.) is working with the U.S. Navy on a design for a proposed 13-MW OTEC plant, to replace the current diesel generators. The OTEC plant would also provide 1.25 million gallons per day of potable water. This project is currently waiting for changes in US military contract policies. OTE has proposed building a 10-MW OTEC plant on Guam. === Bahamas === Ocean Thermal Energy Corporation (OTE) currently has plans to install two 10 MW OTEC plants in the US Virgin Islands and a 5–10 MW OTEC facility in The Bahamas. OTE has also designed the world's largest Seawater Air Conditioning (SWAC) plant for a resort in The Bahamas, which will use cold deep seawater as a method of air-conditioning. In mid-2015, the 95%-complete project was temporarily put on hold while the resort resolved financial and ownership issues. On August 22, 2016, the government of the Bahamas announced that a new agreement had been signed under which the Baha Mar resort will be completed. On September 27, 2016, Bahamian Prime Minister Perry Christie announced that construction had resumed on Baha Mar, and that the resort was slated to open in March 2017. This is on hold, and may never resume. === Hawaii === Lockheed Martin's Alternative Energy Development team has partnered with Makai Ocean Engineering to complete the final design phase of a 10-MW closed-cycle OTEC pilot system which was planned to become operational in Hawaii in the 2012–2013 time frame. This system was designed to expand to 100-MW commercial systems in the near future. In November 2010, the U.S. 
Naval Facilities Engineering Command (NAVFAC) awarded Lockheed Martin a US$4.4 million contract modification to develop critical system components and designs for the plant, adding to the 2009 $8.1 million contract and two Department of Energy grants totaling over $1 million in 2008 and March 2010. A small but operational ocean thermal energy conversion (OTEC) plant was inaugurated in Hawaii in August 2015. The opening of the research and development 100-kilowatt facility marked the first time a closed-cycle OTEC plant was connected to the U.S. grid. === Hainan === On April 13, 2013, Lockheed contracted with the Reignwood Group to build a 10 megawatt plant off the coast of southern China to provide power for a planned resort on Hainan island. A plant of that size would power several thousand homes. The Reignwood Group acquired Opus Offshore in 2011, which forms its Reignwood Ocean Engineering division and is also engaged in the development of deepwater drilling. === Japan === Currently the only continuously operating OTEC system is located in Okinawa Prefecture, Japan. Governmental support, local community support, and advanced research carried out by Saga University were key to the success of the contractors, IHI Plant Construction Co. Ltd, Yokogawa Electric Corporation, and Xenesys Inc, in this project. Work is being conducted to develop a 1 MW facility on Kume Island, which will require new pipelines. In July 2014, more than 50 members formed the Global Ocean reSource and Energy Association (GOSEA), an international organization created to promote the development of the Kumejima Model and to work towards the installation of larger deep seawater pipelines and a 1 MW OTEC facility. The companies involved in the current OTEC projects, along with other interested parties, have developed plans for offshore OTEC systems as well. For more details, see "Currently operating OTEC plants" above. === United States Virgin Islands === On March 5, 2014, Ocean Thermal Energy Corporation (OTEC) and the 30th Legislature of the United States Virgin Islands (USVI) signed a Memorandum of Understanding to move forward with a study to evaluate the feasibility and potential benefits to the USVI of installing on-shore Ocean Thermal Energy Conversion (OTEC) renewable energy power plants and Seawater Air Conditioning (SWAC) facilities. The benefits to be assessed in the USVI study include the baseload (24/7) clean electricity generated by OTEC, as well as the various related products associated with OTEC and SWAC, including abundant fresh drinking water, energy-saving air conditioning, sustainable aquaculture and mariculture, and agricultural enhancement projects for the islands of St Thomas and St Croix. On July 18, 2016, OTE's application to be a Qualifying Facility was approved by the Virgin Islands Public Services Commission. OTE also received permission to begin negotiating contracts associated with this project. === Kiribati === South Korea's Research Institute of Ships and Ocean Engineering (KRISO) received approval in principle from Bureau Veritas for their 1 MW offshore OTEC design. No timeline was given for the project, which will be located 6 km offshore of the Republic of Kiribati. === Martinique === Akuo Energy and DCNS were awarded NER 300 funding on July 8, 2014, for their NEMO (New Energy for Martinique and Overseas) project, which was expected to be a 10.7 MW-net offshore facility completed in 2020. The award to help with development totaled 72 million Euro. 
=== Maldives === On February 16, 2018, Global OTEC Resources announced plans to build a 150 kW plant in the Maldives, designed bespoke for hotels and resorts. "All these resorts draw their power from diesel generators. Moreover, some individual resorts consume 7,000 litres of diesel a day to meet demands which equates to over 6,000 tonnes of CO2 annually," said Director Dan Grech. The EU awarded a grant, and Global OTEC Resources launched a crowdfunding campaign for the rest. == Related activities == OTEC has uses other than power production. === Desalination === Desalinated water can be produced in open- or hybrid-cycle plants using surface condensers to turn evaporated seawater into potable water. System analysis indicates that a 2-megawatt plant could produce about 4,300 cubic metres (150,000 cu ft) of desalinated water each day. Another system patented by Richard Bailey creates condensate water by regulating deep ocean water flow through surface condensers correlating with fluctuating dew-point temperatures. This condensation system uses no incremental energy and has no moving parts. On March 22, 2015, Saga University opened a flash-type desalination demonstration facility on Kumejima. This satellite of their Institute of Ocean Energy uses post-OTEC deep seawater from the Okinawa OTEC Demonstration Facility and raw surface seawater to produce desalinated water. Air is extracted from the closed system with a vacuum pump. When raw sea water is pumped into the flash chamber it boils, allowing pure steam to rise and the salt and remaining seawater to be removed. The steam is returned to liquid in a heat exchanger with cold post-OTEC deep seawater. The desalinated water can be used for hydrogen production or as drinking water (if minerals are added). The NELHA plant established in 1993 produced an average of 7,000 gallons of freshwater per day. KOYO USA was established in 2002 to capitalize on this new economic opportunity. KOYO bottles the water produced by the NELHA plant in Hawaii. With the capacity to produce one million bottles of water every day, KOYO is now Hawaii's biggest exporter with $140 million in sales. === Air conditioning === The 41 °F (5 °C) cold seawater made available by an OTEC system creates an opportunity to provide large amounts of cooling to industries and homes near the plant. The water can be used in chilled-water coils to provide air conditioning for buildings. It is estimated that a pipe 1 foot (0.30 m) in diameter can deliver 4,700 gallons of water per minute. Water at 43 °F (6 °C) could provide more than enough air conditioning for a large building. Operating 8,000 hours per year in lieu of electrical air conditioning, with electricity selling for 5–10¢ per kilowatt-hour, such a system would save $200,000–$400,000 in energy bills annually. The InterContinental Resort and Thalasso-Spa on the island of Bora Bora uses a SWAC system to air-condition its buildings. The system passes seawater through a heat exchanger where it cools freshwater in a closed loop system. This freshwater is then pumped to buildings and directly cools the air. In 2010, Copenhagen Energy opened a district cooling plant in Copenhagen, Denmark. The plant delivers cold seawater to commercial and industrial buildings, and has reduced electricity consumption by 80 percent. Ocean Thermal Energy Corporation (OTE) has designed a 9800-ton SDC system for a vacation resort in The Bahamas. === Chilled-soil agriculture === OTEC technology supports chilled-soil agriculture. 
When cold seawater flows through underground pipes, it chills the surrounding soil. The temperature difference between roots in the cool soil and leaves in the warm air allows plants that evolved in temperate climates to be grown in the subtropics. Dr. John P. Craven, Dr. Jack Davidson and Richard Bailey patented this process and demonstrated it at a research facility at the Natural Energy Laboratory of Hawaii Authority (NELHA). The research facility demonstrated that more than 100 different crops can be grown using this system. Many of them normally could not survive in Hawaii or at Keahole Point. Japan has also been researching agricultural uses of deep sea water since 2000 at the Okinawa Deep Sea Water Research Institute on Kume Island. The Kume Island facilities use regular water cooled by deep sea water in a heat exchanger, run through pipes in the ground to cool soil. Their techniques have developed an important resource for the island community, which now produces spinach, a winter vegetable, commercially year-round. An expansion of the deep seawater agriculture facility was completed by Kumejima Town next to the OTEC Demonstration Facility in 2014. The new facility is for researching the economic practicality of chilled-soil agriculture on a larger scale. === Aquaculture === Aquaculture is the best-known byproduct because it reduces the financial and energy costs of pumping large volumes of water from the deep ocean. Deep ocean water contains high concentrations of essential nutrients that are depleted in surface waters due to biological consumption. This artificial upwelling mimics the natural upwellings that are responsible for fertilizing and supporting the world's largest marine ecosystems, and the largest densities of life on the planet. Cold-water sea animals, such as salmon and lobster, thrive in this nutrient-rich, deep seawater. Microalgae such as Spirulina, a health food supplement, can also be cultivated. Deep-ocean water can be combined with surface water to deliver water at an optimal temperature. Non-native species such as salmon, lobster, abalone, trout, oysters, and clams can be raised in pools supplied by OTEC-pumped water. This extends the variety of fresh seafood products available for nearby markets. Such low-cost refrigeration can be used to maintain the quality of harvested fish, which deteriorate quickly in warm tropical regions. In Kona, Hawaii, aquaculture companies working with NELHA generate about $40 million annually, a significant portion of Hawaii's GDP. === Hydrogen production === Hydrogen can be produced via electrolysis using OTEC electricity. Generated steam with electrolyte compounds added to improve efficiency is a relatively pure medium for hydrogen production. OTEC can be scaled to generate large quantities of hydrogen. The main challenge is cost relative to other energy sources and fuels; a rough sense of the scale involved is sketched below, after the mineral extraction subsection. === Mineral extraction === The ocean contains 57 trace elements in salts and other forms dissolved in solution. In the past, most economic analyses concluded that mining the ocean for trace elements would be unprofitable, in part because of the energy required to pump the water. Mining generally targets minerals that occur in high concentrations and can be extracted easily, such as magnesium. With OTEC plants supplying water, the only cost is for extraction. The Japanese investigated the possibility of extracting uranium and found developments in other technologies (especially materials sciences) were improving the prospects.
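As a rough sense of scale for the electrolysis route mentioned in the hydrogen production subsection above, the sketch below estimates daily hydrogen output from OTEC electricity; the plant size, electrolyser efficiency and hydrogen heating value are assumed example figures, not values from the article.

```python
# Rough estimate of hydrogen production from OTEC electricity via electrolysis.
# All numbers below are assumed example figures (not from the article text).
plant_power_mw = 100.0        # assumed net OTEC electrical output
electrolyser_eff = 0.7        # fraction of electrical energy stored as hydrogen
h2_lhv_mj_per_kg = 120.0      # lower heating value of hydrogen, MJ/kg

energy_per_day_mj = plant_power_mw * 1e6 * 24 * 3600 / 1e6   # MJ of electricity per day
h2_kg_per_day = energy_per_day_mj * electrolyser_eff / h2_lhv_mj_per_kg
print(f"~{h2_kg_per_day / 1000:.0f} tonnes of hydrogen per day")   # ~50 t/day for these assumptions
```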
=== Climate control === The ocean thermal gradient can be used to enhance rainfall and to moderate the high ambient summer temperatures in the tropics, to the great benefit of humans and of flora and fauna. When sea surface temperatures are relatively high over an area, a lower atmospheric pressure area forms there compared to the atmospheric pressure prevailing over the nearby land mass, inducing winds from the landmass towards the ocean. Oceanward winds are dry and warm and do not contribute to good rainfall on the landmass, in contrast to landward moist winds. For adequate rainfall and comfortable summer ambient temperatures (below 35 °C) on the landmass, it is preferable to have landward moist winds from the ocean. Creating high-pressure zones by artificial upwelling over selected sea areas can also be used to deflect or guide the normal monsoon global winds towards the landmass. Artificial upwelling of nutrient-rich deep ocean water to the surface also enhances fisheries growth in areas with tropical and temperate weather. It would also lead to enhanced carbon sequestration by the oceans from improved algae growth, and to mass gain by glaciers from the extra snowfall, mitigating sea level rise and the global warming process. Tropical cyclones also do not pass through the high-pressure zones, as they intensify by gaining energy from the warm surface waters of the sea. The cold deep sea water (<10 °C) is pumped to the sea surface to suppress the sea surface temperature (>26 °C) by artificial means, using electricity produced by mega-scale floating wind turbine plants on the deep sea. The lower sea water surface temperature would enhance the local ambient pressure so that atmospheric landward winds are created. For upwelling the cold sea water, a stationary hydraulically driven propeller (≈50 m diameter) is located on the deep sea floor at 500 to 1000 m depth, with a flexible draft tube extending up to the sea surface. The draft tube is anchored to the sea bed at its bottom and to floating pontoons at the sea surface at its top. The flexible draft tube would not collapse, as its inside pressure is higher than the outside pressure when the colder water is pumped to the sea surface. The Middle East, northeast Africa, the Indian subcontinent and Australia, which suffer from hot and dry weather in the summer season and are also prone to erratic rainfall, could get relief by pumping deep sea water to the sea surface from the Persian Gulf, the Red Sea, the Indian Ocean and the Pacific Ocean respectively. == Thermodynamics == A rigorous treatment of OTEC reveals that a 20 °C temperature difference will provide as much energy as a hydroelectric plant with 34 m head for the same volume of water flow. The low temperature difference means that water volumes must be very large to extract useful amounts of heat. A 100 MW power plant would be expected to pump on the order of 12 million gallons (44,400 tonnes) per minute. For comparison, pumps must move a mass of water greater than the weight of the battleship Bismarck, which weighed 41,700 tonnes, every minute. This makes pumping a substantial parasitic drain on energy production in OTEC systems, with one Lockheed design consuming 19.55 MW in pumping costs for every 49.8 MW of net electricity generated. For OTEC schemes using heat exchangers, handling this volume of water requires exchangers that are enormous compared to those used in conventional thermal power generation plants, making them one of the most critical components due to their impact on overall efficiency.
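To make the pumping figures quoted above concrete, the following back-of-the-envelope sketch converts the quoted flow rate into a mass per minute and evaluates the parasitic share of the cited Lockheed design; the seawater density is an assumed round value and US gallons are assumed.

```python
# Back-of-the-envelope check on the pumping figures quoted above.
# The seawater density is an assumed round value; US gallons are assumed.
GALLON_TO_M3 = 0.003785       # cubic metres per US gallon
flow_gpm = 12e6               # quoted pumping rate for a 100 MW plant, gallons per minute
rho_seawater = 1025.0         # kg/m^3, assumed

mass_per_minute_t = flow_gpm * GALLON_TO_M3 * rho_seawater / 1000.0
print(f"pumped mass: ~{mass_per_minute_t:,.0f} tonnes per minute")   # same ballpark as the ~44,400 t quoted

# Parasitic pumping fraction for the cited Lockheed design
pump_mw, net_mw = 19.55, 49.8
print(f"pumping share of gross output: {pump_mw / (pump_mw + net_mw):.0%}")   # roughly 28%
```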
A 100 MW OTEC power plant would require 200 exchangers, each larger than a 20-foot shipping container, making them the single most expensive component. === Variation of ocean temperature with depth === The total insolation received by the oceans (covering 70% of the earth's surface, with a clearness index of 0.5 and average energy retention of 15%) is: 5.45×10¹⁸ MJ/yr × 0.7 × 0.5 × 0.15 = 2.87×10¹⁷ MJ/yr We can use the Beer–Lambert–Bouguer law to quantify the solar energy absorption by water, {\displaystyle -{\frac {dI(y)}{dy}}=\mu I} where y is the depth of water, I is intensity and μ is the absorption coefficient. Solving the above differential equation, {\displaystyle I(y)=I_{0}\exp(-\mu y)\,} The absorption coefficient μ may range from 0.05 m⁻¹ for very clear fresh water to 0.5 m⁻¹ for very salty water. Since the intensity falls exponentially with depth y, heat absorption is concentrated at the top layers. Typically in the tropics, surface temperature values are in excess of 25 °C (77 °F), while at 1 kilometer (0.62 mi), the temperature is about 5–10 °C (41–50 °F). The warmer (and hence lighter) water at the surface means that there are no thermal convection currents. Due to the small temperature gradients, heat transfer by conduction is too low to equalize the temperatures. The ocean is thus both a practically infinite heat source and a practically infinite heat sink. This temperature difference varies with latitude and season, with the maximum in tropical, subtropical and equatorial waters. Hence the tropics are generally the best OTEC locations. === Open/Claude cycle === In this scheme, warm surface water at around 27 °C (81 °F) enters an evaporator at a pressure slightly below the saturation pressure, causing it to vaporize. {\displaystyle H_{1}=H_{f}\,} where Hf is the enthalpy of liquid water at the inlet temperature, T1. This temporarily superheated water undergoes volume boiling, as opposed to pool boiling in conventional boilers where the heating surface is in contact. Thus the water partially flashes to steam, with two-phase equilibrium prevailing. Suppose that the pressure inside the evaporator is maintained at the saturation pressure corresponding to T2. {\displaystyle H_{2}=H_{1}=H_{f}+x_{2}H_{fg}\,} Here, x2 is the fraction of water by mass that vaporizes. The warm water mass flow rate per unit turbine mass flow rate is 1/x2. The low pressure in the evaporator is maintained by a vacuum pump that also removes the dissolved non-condensable gases from the evaporator. The evaporator now contains a mixture of water and steam of very low vapor quality (steam content). The steam is separated from the water as saturated vapor. The remaining water is saturated and is discharged to the ocean in the open cycle. The steam is a low pressure/high specific volume working fluid. It expands in a special low pressure turbine. {\displaystyle H_{3}=H_{g}\,} Here, Hg corresponds to T2. For an ideal isentropic (reversible adiabatic) turbine, {\displaystyle s_{5,s}=s_{3}=s_{f}+x_{5,s}s_{fg}\,} The above equation corresponds to the temperature at the exhaust of the turbine, T5. x5,s is the mass fraction of vapor at state 5. The enthalpy at T5 is {\displaystyle H_{5,s}=H_{f}+x_{5,s}H_{fg}\,} This enthalpy is lower than H3. The adiabatic reversible turbine work is H3 − H5,s.
Actual turbine work WT = (H3 − H5,s) × polytropic efficiency {\displaystyle H_{5}=H_{3}-\ \mathrm {actual} \ \mathrm {work} } The condenser temperature and pressure are lower. Since the turbine exhaust is to be discharged back into the ocean, a direct contact condenser is used to mix the exhaust with cold water, which results in near-saturated water. That water is now discharged back to the ocean. H6 = Hf at T5. T7 is the temperature of the exhaust mixed with cold sea water; as the vapor content now is negligible, {\displaystyle H_{7}\approx H_{f}\,\ at\ T_{7}\,} The temperature differences between stages include that between warm surface water and working steam, that between exhaust steam and cooling water, and that between cooling water reaching the condenser and deep water. These represent external irreversibilities that reduce the overall temperature difference. The cold water flow rate per unit turbine mass flow rate is {\displaystyle {\dot {m}}_{c}={\frac {H_{5}-H_{6}}{H_{6}-H_{7}}}\,} Turbine mass flow rate: {\displaystyle {\dot {M}}_{T}={\frac {\mathrm {turbine} \ \mathrm {work} \ \mathrm {required} }{W_{T}}}} Warm water mass flow rate: {\displaystyle {\dot {M}}_{w}={\dot {M}}_{T}\,{\dot {m}}_{w}\,} Cold water mass flow rate: {\displaystyle {\dot {M}}_{c}={\dot {M}}_{T}\,{\dot {m}}_{c}\,} === Closed Anderson cycle === As developed starting in the 1960s by J. Hilbert Anderson of Sea Solar Power, Inc., in this cycle, QH is the heat transferred in the evaporator from the warm sea water to the working fluid. The working fluid exits the evaporator as a gas near its dew point. The high-pressure, high-temperature gas then is expanded in the turbine to yield turbine work, WT. The working fluid is slightly superheated at the turbine exit and the turbine typically has an efficiency of 90% based on reversible, adiabatic expansion. From the turbine exit, the working fluid enters the condenser where it rejects heat, -QC, to the cold sea water. The condensate is then compressed to the highest pressure in the cycle, requiring condensate pump work, WC. Thus, the Anderson closed cycle is a Rankine-type cycle similar to the conventional power plant steam cycle except that in the Anderson cycle the working fluid is never superheated more than a few degrees Fahrenheit. Owing to viscosity effects, working fluid pressure drops in both the evaporator and the condenser. This pressure drop, which depends on the types of heat exchangers used, must be considered in final design calculations but is ignored here to simplify the analysis. Thus, the parasitic condensate pump work, WC, computed here will be lower than if the heat exchanger pressure drop was included. The major additional parasitic energy requirements in the OTEC plant are the cold water pump work, WCT, and the warm water pump work, WHT. Denoting all other parasitic energy requirements by WA, the net work from the OTEC plant, WNP, is {\displaystyle W_{NP}=W_{T}-W_{C}-W_{CT}-W_{HT}-W_{A}\,} The thermodynamic cycle undergone by the working fluid can be analyzed without detailed consideration of the parasitic energy requirements. From the first law of thermodynamics, the energy balance for the working fluid as the system is {\displaystyle W_{N}=Q_{H}-Q_{C}\,} where WN = WT + WC is the net work for the thermodynamic cycle.
For the idealized case in which there is no working fluid pressure drop in the heat exchangers, {\displaystyle Q_{H}=\int _{H}T_{H}ds\,} and {\displaystyle Q_{C}=\int _{C}T_{C}ds\,} so that the net thermodynamic cycle work becomes {\displaystyle W_{N}=\int _{H}T_{H}ds-\int _{C}T_{C}ds\,} Subcooled liquid enters the evaporator. Due to the heat exchange with warm sea water, evaporation takes place and usually superheated vapor leaves the evaporator. This vapor drives the turbine and the 2-phase mixture enters the condenser. Usually, the subcooled liquid leaves the condenser and finally, this liquid is pumped to the evaporator, completing a cycle. == Environmental impact == Carbon dioxide dissolved in deep cold and high pressure layers is brought up to the surface and released as the water warms. Mixing of deep ocean water with shallower water brings up nutrients and makes them available to shallow water life. This may be an advantage for aquaculture of commercially important species, but may also unbalance the ecological system around the power plant. OTEC plants use very large flows of warm surface seawater and cold deep seawater to generate constant renewable power. The deep seawater is oxygen deficient and generally 20–40 times more nutrient rich (in nitrate and nitrite) than shallow seawater. When these plumes are mixed, they are slightly denser than the ambient seawater. Though no large scale physical environmental testing of OTEC has been done, computer models have been developed to simulate the effect of OTEC plants. === Hydrodynamic modeling === In 2010, a computer model was developed to simulate the physical oceanographic effects of one or several 100 megawatt OTEC plant(s). The model suggests that OTEC plants can be configured such that the plant can conduct continuous operations, with resulting temperature and nutrient variations that are within naturally occurring levels. Studies to date suggest that by discharging the OTEC flows downwards at a depth below 70 meters, the dilution is adequate and nutrient enrichment is small enough that 100-megawatt OTEC plants could be operated in a sustainable manner on a continuous basis. === Biological modeling === The nutrients from an OTEC discharge could potentially cause increased biological activity if they accumulate in large quantities in the photic zone. In 2011 a biological component was added to the hydrodynamic computer model to simulate the biological response to plumes from 100 megawatt OTEC plants. In all cases modeled (discharge at 70 meters depth or more), no unnatural variations occur in the upper 40 meters of the ocean's surface. The picoplankton response in the 110–70 meter depth layer is approximately a 10–25% increase, which is well within naturally occurring variability. The nanoplankton response is negligible. The enhanced productivity of diatoms (microplankton) is small. The subtle phytoplankton increase of the baseline OTEC plant suggests that higher-order biochemical effects will be very small. === Studies === A previous Final Environmental Impact Statement (EIS) for the United States' NOAA from 1981 is available, but needs to be brought up to current oceanographic and engineering standards. Studies have been done to propose the best environmental baseline monitoring practices, focusing on a set of ten chemical oceanographic parameters relevant to OTEC.
Most recently, NOAA held an OTEC Workshop in 2010 and 2012 seeking to assess the physical, chemical, and biological impacts and risks, and identify information gaps or needs. The Tethys database provides access to scientific literature and general information on the potential environmental effects of OTEC. == Technical difficulties == === Dissolved gases === The performance of direct contact heat exchangers operating at typical OTEC boundary conditions is important to the Claude cycle. Many early Claude cycle designs used a surface condenser since their performance was well understood. However, direct contact condensers offer significant disadvantages. As cold water rises in the intake pipe, the pressure decreases to the point where gas begins to evolve. If a significant amount of gas comes out of solution, placing a gas trap before the direct contact heat exchangers may be justified. Experiments simulating conditions in the warm water intake pipe indicated about 30% of the dissolved gas evolves in the top 8.5 meters (28 ft) of the tube. The trade-off between pre-deaeration of the seawater and expulsion of non-condensable gases from the condenser is dependent on the gas evolution dynamics, deaerator efficiency, head loss, vent compressor efficiency and parasitic power. Experimental results indicate vertical spout condensers perform some 30% better than falling jet types. === Microbial fouling === Because raw seawater must pass through the heat exchanger, care must be taken to maintain good thermal conductivity. Biofouling layers as thin as 25 to 50 micrometres (0.00098 to 0.00197 in) can degrade heat exchanger performance by as much as 50%. A 1977 study in which mock heat exchangers were exposed to seawater for ten weeks concluded that although the level of microbial fouling was low, the thermal conductivity of the system was significantly impaired. The apparent discrepancy between the level of fouling and the heat transfer impairment is the result of a thin layer of water trapped by the microbial growth on the surface of the heat exchanger. Another study concluded that fouling degrades performance over time, and determined that although regular brushing was able to remove most of the microbial layer, over time a tougher layer formed that could not be removed through simple brushing. The study passed sponge rubber balls through the system. It concluded that although the ball treatment decreased the fouling rate it was not enough to completely halt growth and brushing was occasionally necessary to restore capacity. The microbes regrew more quickly later in the experiment (i.e. brushing became necessary more often) replicating the results of a previous study. The increased growth rate after subsequent cleanings appears to result from selection pressure on the microbial colony. Continuous use of 1 hour per day and intermittent periods of free fouling and then chlorination periods (again 1 hour per day) were studied. Chlorination slowed but did not stop microbial growth; however chlorination levels of 0.1 mg per liter for 1 hour per day may prove effective for long term operation of a plant. The study concluded that although microbial fouling was an issue for the warm surface water heat exchanger, the cold water heat exchanger suffered little or no biofouling and only minimal inorganic fouling. Besides water temperature, microbial fouling also depends on nutrient levels, with growth occurring faster in nutrient rich water. 
The fouling rate also depends on the material used to construct the heat exchanger. Aluminium tubing slows the growth of microbial life, although the oxide layer which forms on the inside of the pipes complicates cleaning and leads to larger efficiency losses. In contrast, titanium tubing allows biofouling to occur faster, but cleaning is more effective than with aluminium. === Sealing === The evaporator, turbine, and condenser operate in a partial vacuum ranging from 3% to 1% of atmospheric pressure. The system must be carefully sealed to prevent in-leakage of atmospheric air that can degrade or shut down operation. The specific volume of low-pressure steam is very large compared to that of the pressurized working fluid used in closed-cycle OTEC. Components must have large flow areas to ensure steam velocities do not attain excessively high values. === Parasitic power consumption by exhaust compressor === An approach for reducing the exhaust compressor parasitic power loss is as follows. After most of the steam has been condensed by spout condensers, the non-condensable gas-steam mixture is passed through a counter-current region which increases the gas-steam reaction by a factor of five. The result is an 80% reduction in the exhaust pumping power requirements. == Cold air/warm water conversion == In winter in coastal Arctic locations, the temperature difference between the seawater and the ambient air can be as high as 40 °C (72 °F). Closed-cycle systems could exploit the air-water temperature difference. Eliminating seawater extraction pipes might make a system based on this concept less expensive than OTEC. This technology is due to H. Barjot, who suggested butane as the cryogen because of its boiling point of −0.5 °C (31.1 °F) and its non-solubility in water. Assuming a realistic level of efficiency of 4%, calculations show that the amount of energy generated with one cubic meter of water at a temperature of 2 °C (36 °F) in a place with an air temperature of −22 °C (−8 °F) equals the amount of energy generated by letting this cubic meter of water run through a hydroelectric plant of 4000 feet (1,200 m) height. Barjot polar power plants could be located on islands in the polar region or designed as swimming barges or platforms attached to the ice cap. The weather station Myggbuka on Greenland's east coast, for example, which is only 2,100 km away from Glasgow, records monthly mean temperatures below −15 °C (5 °F) during 6 winter months of the year. == Application of the thermoelectric effect == In 1979 SERI proposed using the Seebeck effect to produce power with a total conversion efficiency of 2%. In 2014 Liping Liu, Associate Professor at Rutgers University, envisioned an OTEC system that utilises the solid state thermoelectric effect rather than the fluid cycles traditionally used. == See also == Deep water source cooling Geothermal power Heat engine Floating wind turbine Ocean engineering Seawater air conditioning Thermogalvanic cell Marine energy Tidal power Wave power Osmotic power == References == == Sources == William H. Avery; Chih Wu (17 March 1994). Renewable Energy From the Ocean: A Guide to OTEC. Johns Hopkins University Applied Physics Laboratories Series in Science and Engineering. Oxford, New York: Oxford University Press. ISBN 978-0-19-507199-3. == External links ==
Wikipedia/Ocean_thermal_energy_conversion
In physics, the restoring force is a force that acts to bring a body back to its equilibrium position. The restoring force is a function only of the position of the mass or particle, and it is always directed back toward the equilibrium position of the system. The restoring force is often referred to in simple harmonic motion. The force responsible for restoring original size and shape is called the restoring force. An example is the action of a spring. An idealized spring exerts a force proportional to the amount of deformation of the spring from its equilibrium length, exerted in a direction opposing the deformation. Pulling the spring to a greater length causes it to exert a force that brings the spring back toward its equilibrium length. The amount of force can be determined by multiplying the spring constant, characteristic of the spring, by the amount of stretch; this relationship is known as Hooke's law. Another example is a pendulum. When a pendulum is not swinging, all the forces acting on it are in equilibrium. The force due to gravity and the mass of the object at the end of the pendulum is balanced by the tension in the string holding the object up. When a pendulum is put in motion, the place of equilibrium is at the bottom of the swing, the location where the pendulum rests. When the pendulum is at the top of its swing, the force returning the pendulum to this midpoint is gravity. As a result, gravity may be seen as a restoring force. == See also == Response amplitude operator == References ==
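As a small numerical illustration of the spring and pendulum examples above, the sketch below evaluates Hooke's law and the tangential component of gravity on a displaced pendulum; the spring constant, mass, and angle are assumed example values.

```python
import math

def spring_restoring_force(k, x):
    """Hooke's law: the force is proportional to the displacement and opposes it."""
    return -k * x

def pendulum_restoring_force(m, g, theta):
    """Tangential component of gravity pulling the bob back toward the bottom
    of the swing (the equilibrium position)."""
    return -m * g * math.sin(theta)

# Assumed example values
k, x = 200.0, 0.05            # spring constant (N/m) and stretch (m)
print(spring_restoring_force(k, x))                       # -10.0 N, opposing the stretch

m, g = 0.5, 9.81              # bob mass (kg), gravitational acceleration (m/s^2)
theta = math.radians(10)      # displacement from the vertical
print(round(pendulum_restoring_force(m, g, theta), 3))    # ~ -0.852 N, toward equilibrium
```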
Wikipedia/Restoring_force
The elevation of a geographic location is its height above or below a fixed reference point, most commonly a reference geoid, a mathematical model of the Earth's sea level as an equipotential gravitational surface (see Geodetic datum § Vertical datum). The term elevation is mainly used when referring to points on the Earth's surface, while altitude or geopotential height is used for points above the surface, such as an aircraft in flight or a spacecraft in orbit, and depth is used for points below the surface. Elevation is not to be confused with the distance from the center of the Earth. Due to the equatorial bulge, the summits of Mount Everest and Chimborazo have, respectively, the largest elevation and the largest geocentric distance. == Aviation == In aviation, the term elevation or aerodrome elevation is defined by the ICAO as the highest point of the landing area. It is often measured in feet and can be found in approach charts of the aerodrome. It is not to be confused with terms such as altitude or height. == Maps and GIS == A GIS, or geographic information system, is a computer system that allows for visualizing, manipulating, capturing, and storing data with associated attributes. GIS offers a better understanding of patterns and relationships of the landscape at different scales. Tools inside the GIS allow for manipulation of data for spatial analysis or cartography. A topographical map is the main type of map used to depict elevation, often through contour lines. In a Geographic Information System (GIS), digital elevation models (DEM) are commonly used to represent the surface (topography) of a place, through a raster (grid) dataset of elevations. Digital terrain models are another way to represent terrain in GIS. The USGS (United States Geological Survey) is developing a 3D Elevation Program (3DEP) to keep up with growing needs for high quality topographic data. 3DEP is a collection of enhanced elevation data in the form of high quality LiDAR data over the conterminous United States, Hawaii, and the U.S. territories. There are three bare earth DEM layers in 3DEP which are nationally seamless at resolutions of 1/3, 1, and 2 arcseconds. == See also == Amsterdam Ordnance Datum, a.k.a. Normaal Amsterdams Peil (NAP), Dutch vertical datum Elevation profile Geodesy GTOPO30 a digital elevation model for the world Hypsometric tints Lapse rate, or the adiabatic lapse rate List of highest mountains on Earth List of the highest major summits of North America Normalhöhennull, German vertical datum, literally: standard elevation zero, (NHN) North American Vertical Datum of 1988, (NAVD 88) Sea Level Datum of 1929, a superseded United States vertical datum, (NGVD 29) Orthometric height Topographic isolation Topographic prominence Vertical pressure variation == References == == External links == U.S. National Geodetic Survey website Geodetic Glossary @ NGS NGVD 29 to NAVD 88 online elevation converter @ NGS United States Geological Survey website Geographical Survey Institute Downloadable ETOPO2 Raw Data Database (2 minute grid) Archived 2012-12-14 at archive.today Downloadable ETOPO5 Raw Data Database (5 minute grid) Archived 2012-12-14 at archive.today Find the elevation of any place Path's Elevation Profile using Google Earth
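Since a digital elevation model of the kind described above is simply a raster grid of elevation values, one common way of querying it is bilinear interpolation between the four surrounding cells; the sketch below shows this with a made-up sample grid and cell size.

```python
# Query an elevation raster (DEM) at an arbitrary point by bilinear interpolation
# between the four surrounding grid cells. The tiny grid and the 30 m cell size
# are made-up example values.
dem = [
    [120.0, 125.0, 131.0],
    [118.0, 123.0, 128.0],
    [115.0, 119.0, 124.0],
]            # elevations in metres, dem[row][col]
cell = 30.0  # grid spacing in metres

def elevation_at(x, y):
    """Bilinearly interpolated elevation at (x, y) metres from the grid origin."""
    col, row = x / cell, y / cell
    c0, r0 = int(col), int(row)
    fx, fy = col - c0, row - r0
    z00, z10 = dem[r0][c0], dem[r0][c0 + 1]
    z01, z11 = dem[r0 + 1][c0], dem[r0 + 1][c0 + 1]
    top = z00 * (1 - fx) + z10 * fx
    bottom = z01 * (1 - fx) + z11 * fx
    return top * (1 - fy) + bottom * fy

print(elevation_at(45.0, 15.0))   # 126.75 m for this sample grid
```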
Wikipedia/Hypsographic_curve
Ocean dynamics define and describe the flow of water within the oceans. Ocean temperature and motion fields can be separated into three distinct layers: mixed (surface) layer, upper ocean (above the thermocline), and deep ocean. Ocean dynamics has traditionally been investigated by sampling from instruments in situ. The mixed layer is nearest to the surface and can vary in thickness from 10 to 500 meters. This layer has properties such as temperature, salinity and dissolved oxygen which are uniform with depth reflecting a history of active turbulence (the atmosphere has an analogous planetary boundary layer). Turbulence is high in the mixed layer. However, it becomes zero at the base of the mixed layer. Turbulence again increases below the base of the mixed layer due to shear instabilities. At extratropical latitudes this layer is deepest in late winter as a result of surface cooling and winter storms and quite shallow in summer. Its dynamics is governed by turbulent mixing as well as Ekman transport, exchanges with the overlying atmosphere, and horizontal advection. The upper ocean, characterized by warm temperatures and active motion, varies in depth from 100 m or less in the tropics and eastern oceans to in excess of 800 meters in the western subtropical oceans. This layer exchanges properties such as heat and freshwater with the atmosphere on timescales of a few years. Below the mixed layer the upper ocean is generally governed by the hydrostatic and geostrophic relationships. Exceptions include the deep tropics and coastal regions. The deep ocean is both cold and dark with generally weak velocities (although limited areas of the deep ocean are known to have significant recirculations). The deep ocean is supplied with water from the upper ocean in only a few limited geographical regions: the subpolar North Atlantic and several sinking regions around the Antarctic. Because of the weak supply of water to the deep ocean the average residence time of water in the deep ocean is measured in hundreds of years. In this layer as well the hydrostatic and geostrophic relationships are generally valid and mixing is generally quite weak. == Primitive equations == Ocean dynamics are governed by Newton's equations of motion expressed as the Navier-Stokes equations for a fluid element located at (x,y,z) on the surface of our rotating planet and moving at velocity (u,v,w) relative to that surface: the zonal momentum equation: {\displaystyle {\frac {Du}{Dt}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial x}}+fv+{\frac {1}{\rho }}{\frac {\partial \tau _{x}}{\partial z}}} the meridional momentum equation: {\displaystyle {\frac {Dv}{Dt}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial y}}-fu+{\frac {1}{\rho }}{\frac {\partial \tau _{y}}{\partial z}}} the vertical momentum equation (assumes the ocean is in hydrostatic balance): {\displaystyle {\frac {\partial p}{\partial z}}=-\rho g} the continuity equation (assumes the ocean is incompressible): {\displaystyle {\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}+{\frac {\partial w}{\partial z}}=0} the temperature equation: {\displaystyle {\frac {\partial T}{\partial t}}+u{\frac {\partial T}{\partial x}}+v{\frac {\partial T}{\partial y}}+w{\frac {\partial T}{\partial z}}=Q.}
the salinity equation: {\displaystyle {\frac {\partial S}{\partial t}}+u{\frac {\partial S}{\partial x}}+v{\frac {\partial S}{\partial y}}+w{\frac {\partial S}{\partial z}}=(E-P)S(z=0).} Here "u" is zonal velocity, "v" is meridional velocity, "w" is vertical velocity, "p" is pressure, "ρ" is density, "T" is temperature, "S" is salinity, "g" is acceleration due to gravity, "τ" is wind stress, and "f" is the Coriolis parameter. "Q" is the heat input to the ocean, while "P-E" is the freshwater input to the ocean. == Mixed layer dynamics == Mixed layer dynamics are quite complicated; however, in some regions some simplifications are possible. The wind-driven horizontal transport in the mixed layer is approximately described by Ekman Layer dynamics in which vertical diffusion of momentum balances the Coriolis effect and wind stress. This Ekman transport is superimposed on geostrophic flow associated with horizontal gradients of density. == Upper ocean dynamics == Horizontal convergences and divergences within the mixed layer, due for example to Ekman transport convergence, impose a requirement that the ocean below the mixed layer must move fluid particles vertically. But one of the implications of the geostrophic relationship is that the magnitude of horizontal motion must greatly exceed the magnitude of vertical motion. Thus the weak vertical velocities associated with Ekman transport convergence (measured in meters per day) cause horizontal motion with speeds of 10 centimeters per second or more. The mathematical relationship between vertical and horizontal velocities can be derived by expressing the idea of conservation of angular momentum for a fluid on a rotating sphere. This relationship (with a couple of additional approximations) is known to oceanographers as the Sverdrup relation. Among its implications is the result that the horizontal convergence of Ekman transport observed to occur in the subtropical North Atlantic and Pacific forces southward flow throughout the interior of these two oceans. Western boundary currents (the Gulf Stream and Kuroshio) exist in order to return water to higher latitude. == References ==
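As a concrete illustration of the Ekman-balance picture described above, the sketch below evaluates the standard depth-integrated Ekman transport relation τ/(ρf); the wind stress and latitude are assumed example values.

```python
import math

# Depth-integrated Ekman volume transport per unit width, tau / (rho * f),
# directed 90 degrees to the right of the wind in the northern hemisphere.
# The wind stress and latitude below are assumed example values.
omega = 7.2921e-5       # Earth's rotation rate, rad/s
rho = 1025.0            # seawater density, kg/m^3
tau = 0.1               # wind stress, N/m^2 (moderate winds)
lat = 30.0              # latitude, degrees

f = 2 * omega * math.sin(math.radians(lat))   # Coriolis parameter
transport = tau / (rho * f)                   # m^2/s per metre of section
print(f"f = {f:.2e} 1/s, Ekman transport = {transport:.2f} m^2/s")
```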
Wikipedia/Ocean_dynamics
In fluid dynamics, the Craik–Leibovich (CL) vortex force describes a forcing of the mean flow through wave–current interaction, specifically between the Stokes drift velocity and the mean-flow vorticity. The CL vortex force is used to explain the generation of Langmuir circulations by an instability mechanism. The CL vortex-force mechanism was derived and studied by Sidney Leibovich and Alex D. D. Craik in the 1970s and 80s, in their studies of Langmuir circulations (discovered by Irving Langmuir in the 1930s). == Description == The CL vortex force is {\displaystyle \rho \,{\boldsymbol {u}}_{S}\times {\boldsymbol {\omega }},} with uS the (Lagrangian) Stokes drift velocity and vorticity ω = ∇ × u (i.e. the curl of the Eulerian mean-flow velocity u). Further ρ is the fluid density and ∇ × is the curl operator. The CL vortex force finds its origins in the appearance of the Stokes drift in the convective acceleration terms in the mean momentum equation of the Euler equations or Navier–Stokes equations. For constant density, the momentum equation (divided by the density ρ) is: {\displaystyle \underbrace {\partial _{t}{\boldsymbol {u}}} _{\text{(a)}}+\underbrace {{\boldsymbol {u}}\cdot \nabla {\boldsymbol {u}}} _{\text{(b)}}+\underbrace {2{\boldsymbol {\Omega }}\times {\boldsymbol {u}}} _{\text{(c)}}+\underbrace {2{\boldsymbol {\Omega }}\times {\boldsymbol {u}}_{S}} _{\text{(d)}}+\underbrace {\nabla (\pi +{\boldsymbol {u}}\cdot {\boldsymbol {u}}_{S})} _{\text{(e)}}=\underbrace {{\boldsymbol {u}}_{S}\times (\nabla \times {\boldsymbol {u}})} _{\text{(f)}}+\underbrace {\nu \,\nabla \cdot \nabla {\boldsymbol {u}}} _{\text{(g)}},} with (a): temporal acceleration (b): convective acceleration (c): Coriolis force due to the angular velocity Ω of the Earth's rotation (d): Coriolis–Stokes force (e): gradient of the augmented pressure (f): Craik–Leibovich vortex force (g): viscous force due to the kinematic viscosity ν The CL vortex force can be obtained by several means. Originally, Craik and Leibovich used perturbation theory. An easy way to derive it is through the generalized Lagrangian mean theory. It can also be derived through a Hamiltonian mechanics description. == Notes == == References ==
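As a minimal numerical illustration of the definition above, the sketch below evaluates ρ uS × ω for an assumed Stokes drift and mean-flow vorticity; the vector values are invented for the example.

```python
import numpy as np

# Craik-Leibovich vortex force per unit volume: rho * (u_S x omega), with u_S the
# Stokes drift and omega the curl of the Eulerian mean flow.
# The vectors below are invented example values.
rho = 1025.0                              # seawater density, kg/m^3
u_stokes = np.array([0.05, 0.0, 0.0])     # Stokes drift, m/s, aligned with the waves
omega = np.array([0.0, 0.0, 2e-3])        # vertical vorticity of the mean flow, 1/s

force = rho * np.cross(u_stokes, omega)   # N/m^3
print(force)   # [0, -0.1025, 0]: a crosswise forcing of the kind that drives Langmuir cells
```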
Wikipedia/Craik–Leibovich_vortex_force
A mooring in oceanography is a collection of devices connected to a wire and anchored on the sea floor. It is the Eulerian way of measuring ocean currents, since a mooring is stationary at a fixed location. In contrast to that, the Lagrangian way measures the motion of an oceanographic drifter, the Lagrangian drifter. == Construction principle == The mooring is held up in the water column with various forms of buoyancy such as glass balls and syntactic foam floats. The attached instrumentation is wide-ranging but often includes CTDs (conductivity, temperature depth sensors), current meters (e.g. acoustic Doppler current profilers or deprecated rotor current meters), and biological sensors to measure various parameters. Long-term moorings can be deployed for durations of two years or more, powered with alkaline or lithium battery packs. == Components == === Top buoy === ==== Surface buoys ==== Moorings often include surface buoys that transmit real time data back to shore. The traditional approach is to use the Argos System. Alternatively, one may use the commercial Iridium satellites which allow higher data rates. ==== Submerged buoys ==== In deeper waters, areas covered by sea ice, areas within or near shipping lines or areas that are prone to theft or vandalism, moorings are often submerged with no surface markers. Submerged moorings typically use an acoustic release or a Timed Release that connects the mooring to an anchor weight on the sea floor. The weight is released by sending a coded acoustic command signal and stays on the ground. Deep water anchors are typically made from steel and may be as large as 100 kg. A common deep water anchor consists of a stack of 2–4 railroad wheels. In shallow waters anchors may consist of a concrete block or small portable anchor. The buoyancy of the floats, i.e. of the top buoy plus additional packs of glass bulbs of foam, is sufficient to carry the instruments back to the surface. In order to avoid entangled ropes, it has been practical to place additional floats directly above each instrument. === Instrument housing === ==== Prawlers ==== Prawlers (profiling crawlers) are sensor bodies which climb and descend the cable, to observe multiple depths. The energy to move is "free," harnessed by ratcheting upward via wave energy, then returning downward via gravity. == Depth correction == Similar to a kite in the wind, the mooring line will follow a so-called (half-)catenary. The influence of currents (and wind if the top buoy is above the sea surface) can be modeled and the shape of the mooring line can be determined by software. If the currents are strong (above 0.1 m/s) and the mooring lines are long (more than 1 km), the instrument position may vary up to 50 m. == See also == Benthic lander, a mooring which does not have any mooring line == References ==
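The current-induced "knockdown" of a mooring mentioned in the depth-correction section above can be estimated very crudely by balancing drag on the wire against the net buoyancy of the floats; the sketch below uses invented values throughout and is not the full catenary calculation performed by dedicated mooring software.

```python
import math

# Crude static estimate of mooring knockdown: treat the mooring as a rigid line
# tilted by the ratio of current drag to net float buoyancy.
# All numbers are invented example values; real designs use catenary solvers.
rho, g = 1025.0, 9.81     # seawater density (kg/m^3), gravity (m/s^2)
L = 3000.0                # mooring line length, m
U = 0.15                  # depth-averaged current speed, m/s
d = 0.01                  # wire diameter, m
Cd = 1.2                  # drag coefficient for a cylinder
buoyancy_kg = 300.0       # net buoyancy of all floats, kg

drag = 0.5 * rho * Cd * d * L * U**2      # drag force on the wire, N
lift = buoyancy_kg * g                    # net upward force, N
theta = math.atan2(drag, lift)            # tilt from the vertical
knockdown = L * (1 - math.cos(theta))     # vertical excursion of the top float
print(f"tilt = {math.degrees(theta):.1f} deg, knockdown = {knockdown:.0f} m")
```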
Wikipedia/Mooring_(oceanography)
The North West Shelf Operational Oceanographic System (NOOS) monitors physical, sedimentological and ecological variables for the North Sea area. NOOS is operated by partners from the nine countries bordering the extended North Sea and European North West Shelf (Belgium, Denmark, France, Germany, Ireland, Netherlands, Norway, Sweden, and United Kingdom), working collaboratively to develop and implement ocean observing systems in the area. Near real time and recent historical sea levels are available on their website in map, graph or table format. == Membership == As of January 2008 NOOS had sixteen full members and four associate members. Full Members: Bundesamt für Seeschifffahrt und Hydrographie (BSH), Germany Centre for Environment, Fisheries and Aquaculture Science (CEFAS), UK Danish Maritime Safety Administration (DaMSA), Denmark Danish Meteorological Institute (DMI), Denmark Flemish Authorities - MD&K Coastal Division, Belgium French Research Institute for Exploitation of the Sea (IFREMER), France Institute of Marine Research (IMR), Norway Koninklijk Nederlands Meteorologisch Instituut (KNMI), Netherlands Management Unit of the North Sea Mathematical Models (MUMM), Belgium Marine Institute, Ireland Met Office, UK National Institute for Coastal and Marine Management, Rijkswaterstaat (RIKZ), Netherlands Norwegian Meteorological Institute (MET Norway), Norway Proudman Oceanographic Laboratory (POL), UK Service Hydrographique et Oceanographique de la Marine (SHOM), France Swedish Meteorological and Hydrological Institute (SMHI), Sweden Associate Members: GKSS Forschungszentrum (GKSS), Germany Nansen Environmental and Remote Sensing Center (NERSC), Norway Norwegian Institute for Water Research (NIVA), Norway University of Oldenburg (Uni-Oldenburg), Germany == Further reading == Droppert, L.J. (ed.) (2001). The NOOS Plan: North West Shelf Operational Oceanographic System. Southampton Oceanography Centre. 68 pages. ISBN 0-904175-46-4. Siek, Michael Baskara Laksana Adi (2011). Predicting storm surges: chaos, computational intelligence, data assimilation, ensembles. London: CRC Press/Balkema. pp. 36–37. ISBN 9780415621021. == References == == External links == Home page
Wikipedia/North_West_Shelf_Operational_Oceanographic_System
The shallow-water equations (SWE) are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called (de) Saint-Venant equations, after Adhémar Jean Claude Barré de Saint-Venant (see the related section below). The equations are derived from depth-integrating the Navier–Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived. While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction because, for example, the vertical velocity cannot be zero when the floor changes depth, and thus if it were zero only flat floors would be usable with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free surface displacement) has been found, the vertical velocity can be recovered via the continuity equation. Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow. Shallow-water equation models have only one vertical level, so they cannot directly encompass any factor that varies with height. However, in cases where the mean state is sufficiently simple, the vertical variations can be separated from the horizontal and several sets of shallow-water equations can describe the state. == Equations == === Conservative form === The shallow-water equations are derived from equations of conservation of mass and conservation of linear momentum (the Navier–Stokes equations), which hold even when the assumptions of shallow-water break down, such as across a hydraulic jump. In the case of a horizontal bed, with negligible Coriolis forces, frictional and viscous forces, the shallow-water equations are: {\displaystyle {\begin{aligned}{\frac {\partial (\rho \eta )}{\partial t}}&+{\frac {\partial (\rho \eta u)}{\partial x}}+{\frac {\partial (\rho \eta v)}{\partial y}}=0,\\[3pt]{\frac {\partial (\rho \eta u)}{\partial t}}&+{\frac {\partial }{\partial x}}\left(\rho \eta u^{2}+{\frac {1}{2}}\rho g\eta ^{2}\right)+{\frac {\partial (\rho \eta uv)}{\partial y}}=0,\\[3pt]{\frac {\partial (\rho \eta v)}{\partial t}}&+{\frac {\partial }{\partial y}}\left(\rho \eta v^{2}+{\frac {1}{2}}\rho g\eta ^{2}\right)+{\frac {\partial (\rho \eta uv)}{\partial x}}=0.\end{aligned}}}
Here η is the total fluid column height (instantaneous fluid depth as a function of x, y and t), and the 2D vector (u,v) is the fluid's horizontal flow velocity, averaged across the vertical column. Further g is acceleration due to gravity and ρ is the fluid density. The first equation is derived from mass conservation, the second two from momentum conservation. === Non-conservative form === Expanding the derivatives in the above using the product rule, the non-conservative form of the shallow-water equations is obtained. Since velocities are not subject to a fundamental conservation equation, the non-conservative forms do not hold across a shock or hydraulic jump. Also included are the appropriate terms for Coriolis, frictional and viscous forces, to obtain (for constant fluid density): {\displaystyle {\begin{aligned}{\frac {\partial h}{\partial t}}&+{\frac {\partial }{\partial x}}{\Bigl (}(H+h)u{\Bigr )}+{\frac {\partial }{\partial y}}{\Bigl (}(H+h)v{\Bigr )}=0,\\[3pt]{\frac {\partial u}{\partial t}}&+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}-fv=-g{\frac {\partial h}{\partial x}}-ku+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}\right),\\[3pt]{\frac {\partial v}{\partial t}}&+u{\frac {\partial v}{\partial x}}+v{\frac {\partial v}{\partial y}}+fu=-g{\frac {\partial h}{\partial y}}-kv+\nu \left({\frac {\partial ^{2}v}{\partial x^{2}}}+{\frac {\partial ^{2}v}{\partial y^{2}}}\right),\end{aligned}}} where u and v are the depth-averaged flow velocities in the x and y directions, H is the mean height of the horizontal pressure surface (the mean depth), h is the height deviation of the pressure surface from its mean, g is the acceleration due to gravity, f is the Coriolis coefficient, k is the viscous drag coefficient, and ν is the kinematic viscosity. It is often the case that the terms quadratic in u and v, which represent the effect of bulk advection, are small compared to the other terms. This is called geostrophic balance, and is equivalent to saying that the Rossby number is small. Assuming also that the wave height is very small compared to the mean height (h ≪ H), we have (without lateral viscous forces): {\displaystyle {\begin{aligned}{\frac {\partial h}{\partial t}}&+H\left({\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}\right)=0,\\[3pt]{\frac {\partial u}{\partial t}}&-fv=-g{\frac {\partial h}{\partial x}}-ku,\\[3pt]{\frac {\partial v}{\partial t}}&+fu=-g{\frac {\partial h}{\partial y}}-kv.\end{aligned}}} == One-dimensional Saint-Venant equations == The one-dimensional (1-D) Saint-Venant equations were derived by Adhémar Jean Claude Barré de Saint-Venant, and are commonly used to model transient open-channel flow and surface runoff. They can be viewed as a contraction of the two-dimensional (2-D) shallow-water equations, which are also known as the two-dimensional Saint-Venant equations.
The 1-D Saint-Venant equations contain to a certain extent the main characteristics of the channel cross-sectional shape. The 1-D equations are used extensively in computer models such as TUFLOW, Mascaret (EDF), SIC (Irstea), HEC-RAS, SWMM5, InfoWorks, Flood Modeller, SOBEK 1DFlow, MIKE 11, and MIKE SHE because they are significantly easier to solve than the full shallow-water equations. Common applications of the 1-D Saint-Venant equations include flood routing along rivers (including evaluation of measures to reduce the risks of flooding), dam break analysis, storm pulses in an open channel, as well as storm runoff in overland flow. === Equations === The system of partial differential equations which describe the 1-D incompressible flow in an open channel of arbitrary cross section – as derived and posed by Saint-Venant in his 1871 paper (equations 19 & 20) – is: {\displaystyle {\frac {\partial A}{\partial t}}+{\frac {\partial (Au)}{\partial x}}=0\qquad (1)} and {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial \zeta }{\partial x}}+{\frac {\tau P}{\rho A}}=0,\qquad (2)} where x is the space coordinate along the channel axis, t denotes time, A(x,t) is the cross-sectional area of the flow at location x, u(x,t) is the flow velocity, ζ(x,t) is the free surface elevation and τ(x,t) is the wall shear stress along the wetted perimeter P(x,t) of the cross section at x. Further ρ is the (constant) fluid density and g is the gravitational acceleration. Closure of the hyperbolic system of equations (1)–(2) is obtained from the geometry of cross sections – by providing a functional relationship between the cross-sectional area A and the surface elevation ζ at each position x. For example, for a rectangular cross section, with constant channel width B and channel bed elevation zb, the cross sectional area is: A = B (ζ − zb) = B h. The instantaneous water depth is h(x,t) = ζ(x,t) − zb(x), with zb(x) the bed level (i.e. elevation of the lowest point in the bed above datum, see the cross-section figure). For non-moving channel walls the cross-sectional area A in equation (1) can be written as: {\displaystyle A(x,t)=\int _{0}^{h(x,t)}b(x,h')\,dh',} with b(x,h) the effective width of the channel cross section at location x when the fluid depth is h – so b(x, h) = B(x) for rectangular channels. The wall shear stress τ is dependent on the flow velocity u; they can be related by using e.g. the Darcy–Weisbach equation, Manning formula or Chézy formula. Further, equation (1) is the continuity equation, expressing conservation of water volume for this incompressible homogeneous fluid. Equation (2) is the momentum equation, giving the balance between forces and momentum change rates. The bed slope S(x), friction slope Sf(x, t) and hydraulic radius R(x, t) are defined as: {\displaystyle S=-{\frac {\mathrm {d} z_{\mathrm {b} }}{\mathrm {d} x}},} {\displaystyle S_{\mathrm {f} }={\frac {\tau }{\rho gR}}} and {\displaystyle R={\frac {A}{P}}.} Consequently, the momentum equation (2) can be written as: {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial h}{\partial x}}+g\left(S_{f}-S\right)=0.\qquad (3)} === Conservation of momentum === The momentum equation (3) can also be cast in the so-called conservation form, through some algebraic manipulations on the Saint-Venant equations, (1) and (3). In terms of the discharge Q = Au: {\displaystyle {\frac {\partial Q}{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {Q^{2}}{A}}+gI_{1}\right)=gI_{2}+gA\left(S-S_{f}\right),\qquad (4)} where A, I1 and I2 are functions of the channel geometry, described in terms of the channel width B(σ,x). Here σ is the height above the lowest point in the cross section at location x, see the cross-section figure.
So σ is the height above the bed level zb(x) (of the lowest point in the cross section): {\displaystyle {\begin{aligned}A(\sigma ,x)&=\int _{0}^{\sigma }B(\sigma ',x)\;\mathrm {d} \sigma ',\\I_{1}(\sigma ,x)&=\int _{0}^{\sigma }(\sigma -\sigma ')\,B(\sigma ^{\prime },x)\;\mathrm {d} \sigma '\qquad {\text{and}}\\I_{2}(\sigma ,x)&=\int _{0}^{\sigma }(\sigma -\sigma ')\,{\frac {\partial B(\sigma ',x)}{\partial x}}\;\mathrm {d} \sigma '.\end{aligned}}} Above – in the momentum equation (4) in conservation form – A, I1 and I2 are evaluated at σ = h(x,t). The term g I1 describes the hydrostatic force in a certain cross section. And, for a non-prismatic channel, g I2 gives the effects of geometry variations along the channel axis x. In applications, depending on the problem at hand, there often is a preference for using either the momentum equation in non-conservation form, (2) or (3), or the conservation form (4). For instance in case of the description of hydraulic jumps, the conservation form is preferred since the momentum flux is continuous across the jump. === Characteristics === The Saint-Venant equations (1)–(2) can be analysed using the method of characteristics. The two celerities dx/dt on the characteristic curves are: {\displaystyle {\frac {\mathrm {d} x}{\mathrm {d} t}}=u\pm c,} with {\displaystyle c={\sqrt {\frac {gA}{B}}}.} The Froude number Fr = |u| / c determines whether the flow is subcritical (Fr < 1) or supercritical (Fr > 1). For a rectangular and prismatic channel of constant width B, i.e. with A = B h and c = √(gh), the Riemann invariants are: {\displaystyle r_{+}=u+2{\sqrt {gh}}} and {\displaystyle r_{-}=u-2{\sqrt {gh}},} so the equations in characteristic form are: {\displaystyle {\begin{aligned}&{\frac {\mathrm {d} }{\mathrm {d} t}}\left(u+2{\sqrt {gh}}\right)=g\left(S-S_{f}\right)&&{\text{along}}\quad {\frac {\mathrm {d} x}{\mathrm {d} t}}=u+{\sqrt {gh}}\quad {\text{and}}\\&{\frac {\mathrm {d} }{\mathrm {d} t}}\left(u-2{\sqrt {gh}}\right)=g\left(S-S_{f}\right)&&{\text{along}}\quad {\frac {\mathrm {d} x}{\mathrm {d} t}}=u-{\sqrt {gh}}.\end{aligned}}} The Riemann invariants and method of characteristics for a prismatic channel of arbitrary cross-section are described by Didenkulova & Pelinovsky (2011). The characteristics and Riemann invariants provide important information on the behavior of the flow, and they may be used in the process of obtaining (analytical or numerical) solutions. === Hamiltonian structure for frictionless flow === In case there is no friction and the channel has a rectangular prismatic cross section, the Saint-Venant equations have a Hamiltonian structure. The Hamiltonian H is equal to the energy of the free-surface flow: {\displaystyle H=\rho \int \left({\frac {1}{2}}Au^{2}+{\frac {1}{2}}gB\zeta ^{2}\right)\mathrm {d} x,} with constant B the channel width and ρ the constant fluid density.
Hamilton's equations then are: {\displaystyle {\begin{aligned}&\rho B{\frac {\partial \zeta }{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {\partial H}{\partial u}}\right)=\rho \left(B{\frac {\partial \zeta }{\partial t}}+{\frac {\partial (Au)}{\partial x}}\right)=\rho \left({\frac {\partial A}{\partial t}}+{\frac {\partial (Au)}{\partial x}}\right)=0,\\&\rho B{\frac {\partial u}{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {\partial H}{\partial \zeta }}\right)=\rho B\left({\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial \zeta }{\partial x}}\right)=0,\end{aligned}}} since ∂A/∂ζ = B. === Derived modelling === ==== Dynamic wave ==== The dynamic wave is the full one-dimensional Saint-Venant equation. It is numerically challenging to solve, but is valid for all channel flow scenarios. The dynamic wave is used for modeling transient storms in modeling programs including Mascaret (EDF), SIC (Irstea), HEC-RAS, InfoWorks ICM, MIKE 11, Wash 123d and SWMM5. In order of increasing simplification, by removing some terms of the full 1-D Saint-Venant equations (a.k.a. the dynamic wave equation), we obtain the classical diffusive wave equation and kinematic wave equation. ==== Diffusive wave ==== For the diffusive wave it is assumed that the inertial terms are less than the gravity, friction, and pressure terms. The diffusive wave can therefore be more accurately described as a non-inertia wave, and is written as: {\displaystyle g{\frac {\partial h}{\partial x}}+g(S_{f}-S)=0.} The diffusive wave is valid when the inertial acceleration is much smaller than all other forms of acceleration, or in other words when there is primarily subcritical flow, with low Froude values. Models that use the diffusive wave assumption include MIKE SHE and LISFLOOD-FP. In the SIC (Irstea) software this option is also available, since the two inertia terms (or either of them) can optionally be removed from the interface. ==== Kinematic wave ==== For the kinematic wave it is assumed that the flow is uniform, and that the friction slope is approximately equal to the slope of the channel. This simplifies the full Saint-Venant equation to the kinematic wave: {\displaystyle S_{f}-S=0.} The kinematic wave is valid when the change in wave height over distance and velocity over distance and time is negligible relative to the bed slope, e.g. for shallow flows over steep slopes. The kinematic wave is used in HEC-HMS. === Derivation from Navier–Stokes equations === The 1-D Saint-Venant momentum equation can be derived from the Navier–Stokes equations that describe fluid motion.
The x-component of the Navier–Stokes equations – when expressed in Cartesian coordinates in the x-direction – can be written as: ∂ u ∂ t + u ∂ u ∂ x + v ∂ u ∂ y + w ∂ u ∂ z = − ∂ p ∂ x 1 ρ + ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 ) + f x , {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}+w{\frac {\partial u}{\partial z}}=-{\frac {\partial p}{\partial x}}{\frac {1}{\rho }}+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)+f_{x},} where u is the velocity in the x-direction, v is the velocity in the y-direction, w is the velocity in the z-direction, t is time, p is the pressure, ρ is the density of water, ν is the kinematic viscosity, and fx is the body force in the x-direction. If it is assumed that friction is taken into account as a body force, then ν {\displaystyle \nu } can be assumed as zero so: ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 ) = 0. {\displaystyle \nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)=0.} Assuming one-dimensional flow in the x-direction it follows that: v ∂ u ∂ y + w ∂ u ∂ z = 0 {\displaystyle v{\frac {\partial u}{\partial y}}+w{\frac {\partial u}{\partial z}}=0} Assuming also that the pressure distribution is approximately hydrostatic it follows that: p = ρ g h {\displaystyle p=\rho gh} or in differential form: ∂ p = ρ g ( ∂ h ) . {\displaystyle \partial p=\rho g(\partial h).} And when these assumptions are applied to the x-component of the Navier–Stokes equations: − ∂ p ∂ x 1 ρ = − 1 ρ ρ g ( ∂ h ) ∂ x = − g ∂ h ∂ x . {\displaystyle -{\frac {\partial p}{\partial x}}{\frac {1}{\rho }}=-{\frac {1}{\rho }}{\frac {\rho g\left(\partial h\right)}{\partial x}}=-g{\frac {\partial h}{\partial x}}.} There are 2 body forces acting on the channel fluid, namely, gravity and friction: f x = f x , g + f x , f {\displaystyle f_{x}=f_{x,g}+f_{x,f}} where fx,g is the body force due to gravity and fx,f is the body force due to friction. fx,g can be calculated using basic physics and trigonometry: F g = sin ⁡ ( θ ) g M {\displaystyle F_{g}=\sin(\theta )gM} where Fg is the force of gravity in the x-direction, θ is the angle, and M is the mass. The expression for sin θ can be simplified using trigonometry as: sin ⁡ θ = opp hyp . {\displaystyle \sin \theta ={\frac {\text{opp}}{\text{hyp}}}.} For small θ (reasonable for almost all streams) it can be assumed that: sin ⁡ θ = tan ⁡ θ = opp adj = S {\displaystyle \sin \theta =\tan \theta ={\frac {\text{opp}}{\text{adj}}}=S} and given that fx represents a force per unit mass, the expression becomes: f x , g = g S . {\displaystyle f_{x,g}=gS.} Assuming the energy grade line is not the same as the channel slope, and for a reach of consistent slope there is a consistent friction loss, it follows that: f x , f = S f g . 
{\displaystyle f_{x,f}=S_{f}g.} All of these assumptions combined arrives at the 1-dimensional Saint-Venant equation in the x-direction: ∂ u ∂ t + u ∂ u ∂ x + g ∂ h ∂ x + g ( S f − S ) = 0 , {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial h}{\partial x}}+g(S_{f}-S)=0,} ( a ) ( b ) ( c ) ( d ) ( e ) {\displaystyle (a)\quad \ \ (b)\quad \ \ \ (c)\qquad \ \ \ (d)\quad (e)\ } where (a) is the local acceleration term, (b) is the convective acceleration term, (c) is the pressure gradient term, (d) is the friction term, and (e) is the gravity term. Terms The local acceleration (a) can also be thought of as the "unsteady term" as this describes some change in velocity over time. The convective acceleration (b) is an acceleration caused by some change in velocity over position, for example the speeding up or slowing down of a fluid entering a constriction or an opening, respectively. Both these terms make up the inertia terms of the 1-dimensional Saint-Venant equation. The pressure gradient term (c) describes how pressure changes with position, and since the pressure is assumed hydrostatic, this is the change in head over position. The friction term (d) accounts for losses in energy due to friction, while the gravity term (e) is the acceleration due to bed slope. == Wave modelling by shallow-water equations == Shallow-water equations can be used to model Rossby and Kelvin waves in the atmosphere, rivers, lakes and oceans as well as gravity waves in a smaller domain (e.g. surface waves in a bath). In order for shallow-water equations to be valid, the wavelength of the phenomenon they are supposed to model has to be much larger than the depth of the basin where the phenomenon takes place. Somewhat smaller wavelengths can be handled by extending the shallow-water equations using the Boussinesq approximation to incorporate dispersion effects. Shallow-water equations are especially suitable to model tides which have very large length scales (over hundreds of kilometers). For tidal motion, even a very deep ocean may be considered as shallow as its depth will always be much smaller than the tidal wavelength. == Turbulence modelling using non-linear shallow-water equations == Shallow-water equations, in its non-linear form, is an obvious candidate for modelling turbulence in the atmosphere and oceans, i.e. geophysical turbulence. An advantage of this, over Quasi-geostrophic equations, is that it allows solutions like gravity waves, while also conserving energy and potential vorticity. However, there are also some disadvantages as far as geophysical applications are concerned - it has a non-quadratic expression for total energy and a tendency for waves to become shock waves. Some alternate models have been proposed which prevent shock formation. One alternative is to modify the "pressure term" in the momentum equation, but it results in a complicated expression for kinetic energy. Another option is to modify the non-linear terms in all equations, which gives a quadratic expression for kinetic energy, avoids shock formation, but conserves only linearized potential vorticity. == See also == Waves and shallow water == Notes == == Further reading == == External links == Derivation of the shallow-water equations from first principles (instead of simplifying the Navier–Stokes equations, some analytical solutions)
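As a supplement to the Characteristics section above, the following minimal Python sketch evaluates the celerity c = √(gA/B), the Froude number, and the characteristic speeds dx/dt = u ± c; the channel dimensions and velocity are illustrative values only, not taken from any source.

```python
import math

def celerity(A, B, g=9.81):
    """Shallow-water celerity c = sqrt(g*A/B) for a cross-section of area A and top width B."""
    return math.sqrt(g * A / B)

def froude_number(u, A, B, g=9.81):
    """Froude number Fr = |u| / c; Fr < 1 is subcritical, Fr > 1 is supercritical."""
    return abs(u) / celerity(A, B, g)

# Illustrative rectangular channel: width 5 m, flow depth 2 m, mean velocity 1.5 m/s.
B = 5.0            # top width (m)
h = 2.0            # depth (m)
A = B * h          # wetted cross-sectional area (m^2); A = B*h for a rectangle
u = 1.5            # cross-sectionally averaged velocity (m/s)

c = celerity(A, B)                      # equals sqrt(g*h) for a rectangular channel
Fr = froude_number(u, A, B)
regime = "subcritical" if Fr < 1 else "supercritical"
print(f"c = {c:.2f} m/s, Fr = {Fr:.2f} ({regime})")
# Characteristic speeds dx/dt = u ± c:
print(f"dx/dt = {u + c:.2f} m/s and {u - c:.2f} m/s")
```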
Wikipedia/One-dimensional_Saint-Venant_equations
Underwater acoustics (also known as hydroacoustics) is the study of the propagation of sound in water and the interaction of the mechanical waves that constitute sound with the water, its contents and its boundaries. The water may be in the ocean, a lake, a river or a tank. Typical frequencies associated with underwater acoustics are between 10 Hz and 1 MHz. The propagation of sound in the ocean at frequencies lower than 10 Hz is usually not possible without penetrating deep into the seabed, whereas frequencies above 1 MHz are rarely used because they are absorbed very quickly. Hydroacoustics, using sonar technology, is most commonly used for monitoring of underwater physical and biological characteristics. Hydroacoustics can be used to detect the depth of a water body (bathymetry), as well as the presence or absence, abundance, distribution, size, and behavior of underwater plants and animals. Hydroacoustic sensing involves "passive acoustics" (listening for sounds) or active acoustics making a sound and listening for the echo, hence the common name for the device, echo sounder or echosounder. There are a number of different causes of noise from shipping. These can be subdivided into those caused by the propeller, those caused by machinery, and those caused by the movement of the hull through the water. The relative importance of these three different categories will depend, amongst other things, on the ship type. One of the main causes of hydro acoustic noise from fully submerged lifting surfaces is the unsteady separated turbulent flow near the surface's trailing edge that produces pressure fluctuations on the surface and unsteady oscillatory flow in the near wake. The relative motion between the surface and the ocean creates a turbulent boundary layer (TBL) that surrounds the surface. The noise is generated by the fluctuating velocity and pressure fields within this TBL. The field of underwater acoustics is closely related to a number of other fields of acoustic study, including sonar, transduction, signal processing, acoustical oceanography, bioacoustics, and physical acoustics. == History == Underwater sound has probably been used by marine animals for millions of years. The science of underwater acoustics began in 1490, when Leonardo da Vinci wrote the following, "If you cause your ship to stop and place the head of a long tube in the water and place the outer extremity to your ear, you will hear ships at a great distance from you." In 1687 Isaac Newton wrote his Mathematical Principles of Natural Philosophy which included the first mathematical treatment of sound. The next major step in the development of underwater acoustics was made by Daniel Colladon, a Swiss physicist, and Charles Sturm, a French mathematician. In 1826, on Lake Geneva, they measured the elapsed time between a flash of light and the sound of a submerged ship's bell heard using an underwater listening horn. They measured a sound speed of 1435 metres per second over a 17 kilometre (km) distance, providing the first quantitative measurement of sound speed in water. The result they obtained was within about 2% of currently accepted values. In 1877 Lord Rayleigh wrote the Theory of Sound and established modern acoustic theory. The sinking of Titanic in 1912 and the start of World War I provided the impetus for the next wave of progress in underwater acoustics. Systems for detecting icebergs and U-boats were developed. 
Between 1912 and 1914, a number of echolocation patents were granted in Europe and the U.S., culminating in Reginald A. Fessenden's echo-ranger in 1914. Pioneering work was carried out during this time in France by Paul Langevin and in Britain by A B Wood and associates. The development of both active ASDIC and passive sonar (SOund Navigation And Ranging) proceeded apace during the war, driven by the first large scale deployments of submarines. Other advances in underwater acoustics included the development of acoustic mines. In 1919, the first scientific paper on underwater acoustics was published, theoretically describing the refraction of sound waves produced by temperature and salinity gradients in the ocean. The range predictions of the paper were experimentally validated by propagation loss measurements. The next two decades saw the development of several applications of underwater acoustics. The fathometer, or depth sounder, was developed commercially during the 1920s. Originally natural materials were used for the transducers, but by the 1930s sonar systems incorporating piezoelectric transducers made from synthetic materials were being used for passive listening systems and for active echo-ranging systems. These systems were used to good effect during World War II by both submarines and anti-submarine vessels. Many advances in underwater acoustics were made which were summarised later in the series Physics of Sound in the Sea, published in 1946. After World War II, the development of sonar systems was driven largely by the Cold War, resulting in advances in the theoretical and practical understanding of underwater acoustics, aided by computer-based techniques. == Theory == === Sound waves in water, bottom of sea === A sound wave propagating underwater consists of alternating compressions and rarefactions of the water. These compressions and rarefactions are detected by a receiver, such as the human ear or a hydrophone, as changes in pressure. These waves may be man-made or naturally generated. === Speed of sound, density and impedance === The speed of sound c {\displaystyle c\,} (i.e., the longitudinal motion of wavefronts) is related to frequency f {\displaystyle f\,} and wavelength λ {\displaystyle \lambda \,} of a wave by c = f ⋅ λ {\displaystyle c=f\cdot \lambda } . This is different from the particle velocity u {\displaystyle u\,} , which refers to the motion of molecules in the medium due to the sound, and relates the plane wave pressure p {\displaystyle p\,} to the fluid density ρ {\displaystyle \rho \,} and sound speed c {\displaystyle c\,} by p = c ⋅ u ⋅ ρ {\displaystyle p=c\cdot u\cdot \rho } . The product of c {\displaystyle c} and ρ {\displaystyle \rho \,} from the above formula is known as the characteristic acoustic impedance. The acoustic power (energy per second) crossing unit area is known as the intensity of the wave and for a plane wave the average intensity is given by I = q 2 / ( ρ c ) {\displaystyle I=q^{2}/(\rho c)\,} , where q {\displaystyle q\,} is the root mean square acoustic pressure. At 1 kHz, the wavelength in water is about 1.5 m. Sometimes the term "sound velocity" is used but this is incorrect as the quantity is a scalar. The large impedance contrast between air and water (the ratio is about 3600) and the scale of surface roughness means that the sea surface behaves as an almost perfect reflector of sound at frequencies below 1 kHz. Sound speed in water exceeds that in air by a factor of 4.4 and the density ratio is about 820. 
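The relations defined in this section can be illustrated with a short numerical sketch; the property values below are rounded, typical figures rather than authoritative constants.

```python
# Illustrative calculation of the quantities defined above: wavelength from c = f*lambda,
# characteristic acoustic impedance rho*c, the water/air impedance ratio (roughly 3600),
# and the plane-wave intensity I = q^2 / (rho*c).

rho_water, c_water = 1027.0, 1500.0   # seawater density (kg/m^3) and sound speed (m/s), typical values
rho_air, c_air     = 1.2,    340.0    # air density (kg/m^3) and sound speed (m/s), typical values

f = 1000.0                            # frequency (Hz)
wavelength = c_water / f              # about 1.5 m at 1 kHz, as stated above
Z_water = rho_water * c_water         # characteristic impedance of water (Pa*s/m)
Z_air = rho_air * c_air               # characteristic impedance of air
impedance_ratio = Z_water / Z_air     # of order 3600

q = 1.0                               # RMS acoustic pressure (Pa), arbitrary example value
intensity = q**2 / Z_water            # average plane-wave intensity (W/m^2)

print(f"wavelength = {wavelength:.2f} m")
print(f"Z_water/Z_air = {impedance_ratio:.0f}")
print(f"I = {intensity:.2e} W/m^2")
```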
=== Absorption of sound === Absorption of low frequency sound is weak. (see Technical Guides – Calculation of absorption of sound in seawater for an on-line calculator). The main cause of sound attenuation in fresh water, and at high frequency in sea water (above 100 kHz) is viscosity. Important additional contributions at lower frequency in seawater are associated with the ionic relaxation of boric acid (up to c. 10 kHz) and magnesium sulfate (c. 10 kHz-100 kHz). Sound may be absorbed by losses at the fluid boundaries. Near the surface of the sea losses can occur in a bubble layer or in ice, while at the bottom sound can penetrate into the sediment and be absorbed. === Sound reflection and scattering === ==== Boundary interactions ==== Both the water surface and bottom are reflecting and scattering boundaries. ===== Surface ===== For many purposes the sea-air surface can be thought of as a perfect reflector. The impedance contrast is so great that little energy is able to cross this boundary. Acoustic pressure waves reflected from the sea surface experience a reversal in phase, often stated as either a "pi phase change" or a "180 deg phase change". This is represented mathematically by assigning a reflection coefficient of minus 1 instead of plus one to the sea surface. At high frequency (above about 1 kHz) or when the sea is rough, some of the incident sound is scattered, and this is taken into account by assigning a reflection coefficient whose magnitude is less than one. For example, close to normal incidence, the reflection coefficient becomes R = − e − 2 k 2 h 2 sin 2 ⁡ A {\displaystyle R=-e^{-2k^{2}h^{2}\sin ^{2}A}} , where h is the rms wave height. A further complication is the presence of wind-generated bubbles or fish close to the sea surface. The bubbles can also form plumes that absorb some of the incident and scattered sound, and scatter some of the sound themselves. ===== Seabed ===== The acoustic impedance mismatch between water and the bottom is generally much less than at the surface and is more complex. It depends on the bottom material types and depth of the layers. Theories have been developed for predicting the sound propagation in the bottom in this case, for example by Biot and by Buckingham. ==== At target ==== The reflection of sound at a target whose dimensions are large compared with the acoustic wavelength depends on its size and shape as well as the impedance of the target relative to that of water. Formulae have been developed for the target strength of various simple shapes as a function of angle of sound incidence. More complex shapes may be approximated by combining these simple ones. === Propagation of sound === Underwater acoustic propagation depends on many factors. The direction of sound propagation is determined by the sound speed gradients in the water. These speed gradients transform the sound wave through refraction, reflection, and dispersion. In the sea the vertical gradients are generally much larger than the horizontal ones. Combining this with a tendency towards increasing sound speed at increasing depth, due to the increasing pressure in the deep sea, causes a reversal of the sound speed gradient in the thermocline, creating an efficient waveguide at the depth, corresponding to the minimum sound speed. The sound speed profile may cause regions of low sound intensity called "Shadow Zones", and regions of high intensity called "Caustics". These may be found by ray tracing methods. 
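As an illustration of the ray-tracing approach mentioned above, the sketch below steps a ray through horizontally stratified layers using Snell's law (cos θ / c constant, with θ measured from the horizontal); the sound speed profile and launch angle are invented example values.

```python
import math

def trace_ray(theta0_deg, c_profile, dz=10.0):
    """Step a ray downward through horizontally stratified layers using Snell's law,
    cos(theta)/c = constant, with theta measured from the horizontal.
    c_profile maps depth (m) -> sound speed (m/s); returns (depth, range) samples."""
    theta = math.radians(theta0_deg)
    ray_const = math.cos(theta) / c_profile(0.0)   # Snell invariant along the ray
    z, x, path = 0.0, 0.0, [(0.0, 0.0)]
    while z < 1000.0:
        cos_t = ray_const * c_profile(z)
        if cos_t >= 1.0:          # ray turns around (refracted back upward)
            break
        theta = math.acos(cos_t)
        x += dz / math.tan(theta)  # horizontal advance over one depth step
        z += dz
        path.append((z, x))
    return path

# Invented, simplified profile: sound speed decreasing with depth through the thermocline,
# so rays bend downward (toward lower sound speed).
profile = lambda z: 1520.0 - 0.03 * z

for depth, rng in trace_ray(theta0_deg=5.0, c_profile=profile)[::20]:
    print(f"depth {depth:6.0f} m  range {rng:8.0f} m")
```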
At the equator and temperate latitudes in the ocean, the surface temperature is high enough to reverse the pressure effect, such that a sound speed minimum occurs at depth of a few hundred meters. The presence of this minimum creates a special channel known as deep sound channel, or SOFAR (sound fixing and ranging) channel, permitting guided propagation of underwater sound for thousands of kilometers without interaction with the sea surface or the seabed. Another phenomenon in the deep sea is the formation of sound focusing areas, known as convergence zones. In this case sound is refracted downward from a near-surface source and then back up again. The horizontal distance from the source at which this occurs depends on the positive and negative sound speed gradients. A surface duct can also occur in both deep and moderately shallow water when there is upward refraction, for example due to cold surface temperatures. Propagation is by repeated sound bounces off the surface. In general, as sound propagates underwater there is a reduction in the sound intensity over increasing ranges, though in some circumstances a gain can be obtained due to focusing. Propagation loss (sometimes referred to as transmission loss) is a quantitative measure of the reduction in sound intensity between two points, normally the sound source and a distant receiver. If I s {\displaystyle I_{s}} is the far field intensity of the source referred to a point 1 m from its acoustic center and I r {\displaystyle I_{r}} is the intensity at the receiver, then the propagation loss is given by P L = 10 log ⁡ ( I s / I r ) {\displaystyle {\mathit {PL}}=10\log(I_{s}/I_{r})} . In this equation I r {\displaystyle I_{r}} is not the true acoustic intensity at the receiver, which is a vector quantity, but a scalar equal to the equivalent plane wave intensity (EPWI) of the sound field. The EPWI is defined as the magnitude of the intensity of a plane wave of the same RMS pressure as the true acoustic field. At short range the propagation loss is dominated by spreading while at long range it is dominated by absorption and/or scattering losses. An alternative definition is possible in terms of pressure instead of intensity, giving P L = 20 log ⁡ ( p s / p r ) {\displaystyle {\mathit {PL}}=20\log(p_{s}/p_{r})} , where p s {\displaystyle p_{s}} is the RMS acoustic pressure in the far-field of the projector, scaled to a standard distance of 1 m, and p r {\displaystyle p_{r}} is the RMS pressure at the receiver position. These two definitions are not exactly equivalent because the characteristic impedance at the receiver may be different from that at the source. Because of this, the use of the intensity definition leads to a different sonar equation to the definition based on a pressure ratio. If the source and receiver are both in water, the difference is small. ==== Propagation modelling ==== The propagation of sound through water is described by the wave equation, with appropriate boundary conditions. A number of models have been developed to simplify propagation calculations. These models include ray theory, normal mode solutions, and parabolic equation simplifications of the wave equation. Each set of solutions is generally valid and computationally efficient in a limited frequency and range regime, and may involve other limits as well. Ray theory is more appropriate at short range and high frequency, while the other solutions function better at long range and low frequency. 
Various empirical and analytical formulae have also been derived from measurements that are useful approximations. ==== Reverberation ==== Transient sounds result in a decaying background that can be of much larger duration than the original transient signal. The cause of this background, known as reverberation, is partly due to scattering from rough boundaries and partly due to scattering from fish and other biota. For an acoustic signal to be detected easily, it must exceed the reverberation level as well as the background noise level. ==== Doppler shift ==== If an underwater object is moving relative to an underwater receiver, the frequency of the received sound is different from that of the sound radiated (or reflected) by the object. This change in frequency is known as a Doppler shift. The shift can be easily observed in active sonar systems, particularly narrow-band ones, because the transmitter frequency is known, and the relative motion between sonar and object can be calculated. Sometimes the frequency of the radiated noise (a tonal) may also be known, in which case the same calculation can be done for passive sonar. For active systems the change in frequency is 0.69 Hz per knot per kHz and half this for passive systems as propagation is only one way. The shift corresponds to an increase in frequency for an approaching target. ==== Intensity fluctuations ==== Though acoustic propagation modelling generally predicts a constant received sound level, in practice there are both temporal and spatial fluctuations. These may be due to both small and large scale environmental phenomena. These can include sound speed profile fine structure and frontal zones as well as internal waves. Because in general there are multiple propagation paths between a source and receiver, small phase changes in the interference pattern between these paths can lead to large fluctuations in sound intensity. ==== Non-linearity ==== In water, especially with air bubbles, the change in density due to a change in pressure is not exactly linearly proportional. As a consequence for a sinusoidal wave input additional harmonic and subharmonic frequencies are generated. When two sinusoidal waves are input, sum and difference frequencies are generated. The conversion process is greater at high source levels than small ones. Because of the non-linearity there is a dependence of sound speed on the pressure amplitude so that large changes travel faster than small ones. Thus a sinusoidal waveform gradually becomes a sawtooth one with a steep rise and a gradual tail. Use is made of this phenomenon in parametric sonar and theories have been developed to account for this, e.g. by Westerfield. == Measurements == Sound in water is measured using a hydrophone, which is the underwater equivalent of a microphone. A hydrophone measures pressure fluctuations, and these are usually converted to sound pressure level (SPL), which is a logarithmic measure of the mean square acoustic pressure. Measurements are usually reported in one of two forms: RMS acoustic pressure in pascals (or sound pressure level (SPL) in dB re 1 μPa) spectral density (mean square pressure per unit bandwidth) in pascals squared per hertz (dB re 1 μPa2/Hz) The scale for acoustic pressure in water differs from that used for sound in air. In air the reference pressure is 20 μPa rather than 1 μPa. 
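A short sketch of the decibel conversion just described, showing the effect of the different reference pressures used in water (1 μPa) and in air (20 μPa):

```python
import math

def spl_db(p_rms, p_ref):
    """Sound pressure level: SPL = 20 log10(p_rms / p_ref)."""
    return 20.0 * math.log10(p_rms / p_ref)

P_REF_WATER = 1e-6    # 1 uPa, reference pressure used underwater
P_REF_AIR = 20e-6     # 20 uPa, reference pressure used in air

p = 1.0               # example RMS pressure of 1 Pa
print(f"{spl_db(p, P_REF_WATER):.1f} dB re 1 uPa (water convention)")
print(f"{spl_db(p, P_REF_AIR):.1f} dB re 20 uPa (air convention)")
# The two conventions differ by 20*log10(20) ~ 26 dB for the same physical pressure.
print(f"offset = {20.0 * math.log10(20.0):.1f} dB")
```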
For the same numerical value of SPL, the intensity of a plane wave (power per unit area, proportional to mean square sound pressure divided by acoustic impedance) in air is about 20² × 3600 = 1 440 000 times higher than in water. Similarly, the intensity is about the same if the SPL is 61.6 dB higher in the water. The 2017 standard ISO 18405 defines terms and expressions used in the field of underwater acoustics, including the calculation of underwater sound pressure levels. === Sound speed === Approximate values for fresh water and seawater, respectively, at atmospheric pressure are 1450 and 1500 m/s for the sound speed, and 1000 and 1030 kg/m³ for the density. The speed of sound in water increases with increasing pressure, temperature and salinity. The maximum speed in pure water under atmospheric pressure is attained at about 74 °C; sound travels slower in hotter water after that point; the maximum increases with pressure. === Absorption === Many measurements have been made of sound absorption in lakes and the ocean (see Technical Guides – Calculation of absorption of sound in seawater for an on-line calculator). === Ambient noise === Measurement of acoustic signals is possible if their amplitude exceeds a minimum threshold, determined partly by the signal processing used and partly by the level of background noise. Ambient noise is that part of the received noise that is independent of the source, receiver and platform characteristics. Thus it excludes reverberation and towing noise, for example. The background noise present in the ocean, or ambient noise, has many different sources and varies with location and frequency. At the lowest frequencies, from about 0.1 Hz to 10 Hz, ocean turbulence and microseisms are the primary contributors to the noise background. Typical noise spectrum levels decrease with increasing frequency, from about 140 dB re 1 μPa²/Hz at 1 Hz to about 30 dB re 1 μPa²/Hz at 100 kHz. Distant ship traffic is one of the dominant noise sources in most areas for frequencies of around 100 Hz, while wind-induced surface noise is the main source between 1 kHz and 30 kHz. At very high frequencies, above 100 kHz, thermal noise of water molecules begins to dominate. The thermal noise spectral level at 100 kHz is 25 dB re 1 μPa²/Hz. The spectral density of thermal noise increases by 20 dB per decade (approximately 6 dB per octave). Transient sound sources also contribute to ambient noise. These can include intermittent geological activity, such as earthquakes and underwater volcanoes, rainfall on the surface, and biological activity. Biological sources include cetaceans (especially blue, fin and sperm whales), certain types of fish, and snapping shrimp. Rain can produce high levels of ambient noise. However, the numerical relationship between rain rate and ambient noise level is difficult to determine because measurement of rain rate is problematic at sea. === Reverberation === Many measurements have been made of sea surface, bottom and volume reverberation. Empirical models have sometimes been derived from these. A commonly used expression for the band 0.4 to 6.4 kHz is that by Chapman and Harris. It is found that a sinusoidal waveform is spread in frequency due to the surface motion. For bottom reverberation, Lambert's law is often found to apply approximately; for example, see Mackenzie. Volume reverberation is usually found to occur mainly in layers, which change depth with the time of day, e.g., see Marshall and Chapman.
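Lambert's law for bottom backscattering, mentioned above, is commonly written as a scattering strength S = μ + 10 log10(sin θi sin θs); the sketch below uses a value of μ of the order reported by Mackenzie (about −27 dB), which should be treated as an assumed illustrative figure.

```python
import math

def lambert_scattering_strength(grazing_in_deg, grazing_out_deg, mu_db=-27.0):
    """Lambert's-law bottom scattering strength (dB):
    S = mu + 10 log10(sin(theta_in) * sin(theta_out)), angles measured from the seabed.
    mu_db ~ -27 dB is of the order reported by Mackenzie; treat it as an assumed value."""
    s = math.sin(math.radians(grazing_in_deg)) * math.sin(math.radians(grazing_out_deg))
    return mu_db + 10.0 * math.log10(s)

for grazing in (10.0, 30.0, 60.0, 90.0):
    # monostatic case: incident and scattered grazing angles equal
    print(f"grazing {grazing:4.0f} deg  S ~ {lambert_scattering_strength(grazing, grazing):6.1f} dB")
```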
The under-surface of ice can produce strong reverberation when it is rough, see for example Milne. === Bottom loss === Bottom loss has been measured as a function of grazing angle for many frequencies in various locations, for example those by the US Marine Geophysical Survey. The loss depends on the sound speed in the bottom (which is affected by gradients and layering) and by roughness. Graphs have been produced for the loss to be expected in particular circumstances. In shallow water bottom loss often has the dominant impact on long range propagation. At low frequencies sound can propagate through the sediment then back into the water. == Underwater hearing == === Comparison with airborne sound levels === As with airborne sound, sound pressure level underwater is usually reported in units of decibels, but there are some important differences that make it difficult (and often inappropriate) to compare SPL in water with SPL in air. These differences include: difference in reference pressure: 1 μPa (one micropascal, or one millionth of a pascal) instead of 20 μPa. difference in interpretation: there are two schools of thought, one maintaining that pressures should be compared directly, and the other that one should first convert to the intensity of an equivalent plane wave. difference in hearing sensitivity: any comparison with (A-weighted) sound in air needs to take into account the differences in hearing sensitivity, either of a human diver or other animal. === Human hearing === ==== Hearing sensitivity ==== The lowest audible SPL for a human diver with normal hearing is about 67 dB re 1 μPa, with greatest sensitivity occurring at frequencies around 1 kHz. This corresponds to a sound intensity 5.4 dB, or 3.5 times, higher than the threshold in air (see Measurements above). ==== Safety thresholds ==== High levels of underwater sound create a potential hazard to human divers. Guidelines for exposure of human divers to underwater sound are reported by the SOLMAR project of the NATO Undersea Research Centre. Human divers exposed to SPL above 154 dB re 1 μPa in the frequency range 0.6 to 2.5 kHz are reported to experience changes in their heart rate or breathing frequency. Diver aversion to low frequency sound is dependent upon sound pressure level and center frequency. === Other species === ==== Aquatic mammals ==== Dolphins and other toothed whales are known for their acute hearing sensitivity, especially in the frequency range 5 to 50 kHz. Several species have hearing thresholds between 30 and 50 dB re 1 μPa in this frequency range. For example, the hearing threshold of the killer whale occurs at an RMS acoustic pressure of 0.02 mPa (and frequency 15 kHz), corresponding to an SPL threshold of 26 dB re 1 μPa. High levels of underwater sound create a potential hazard to marine and amphibious animals. The effects of exposure to underwater noise are reviewed by Southall et al. ==== Fish ==== The hearing sensitivity of fish is reviewed by Ladich and Fay. The hearing threshold of the soldier fish, is 0.32 mPa (50 dB re 1 μPa) at 1.3 kHz, whereas the lobster has a hearing threshold of 1.3 Pa at 70 Hz (122 dB re 1 μPa). The effects of exposure to underwater noise are reviewed by Popper et al. ==== Aquatic birds ==== Several aquatic bird species have been observed to react to underwater sound in the 1–4 kHz range, which follows the frequency range of best hearing sensitivities of birds in air. 
Seaducks and cormorants have been trained to respond to sounds of 1–4 kHz with lowest hearing threshold (highest sensitivity) of 71 dB re 1 μPa (cormorants) and 105 dB re 1 μPa (seaducks). Diving species have several morphological differences in the ear relative to terrestrial species, suggesting some adaptations of the ear in diving birds to aquatic conditions == Applications of underwater acoustics == === Sonar === Sonar is the name given to the acoustic equivalent of radar. Pulses of sound are used to probe the sea, and the echoes are then processed to extract information about the sea, its boundaries and submerged objects. An alternative use, known as passive sonar, attempts to do the same by listening to the sounds radiated by underwater objects. === Underwater communication === The need for underwater acoustic telemetry exists in applications such as data harvesting for environmental monitoring, communication with and between crewed and uncrewed underwater vehicles, transmission of diver speech, etc. A related application is underwater remote control, in which acoustic telemetry is used to remotely actuate a switch or trigger an event. A prominent example of underwater remote control are acoustic releases, devices that are used to return sea floor deployed instrument packages or other payloads to the surface per remote command at the end of a deployment. Acoustic communications form an active field of research with significant challenges to overcome, especially in horizontal, shallow-water channels. Compared with radio telecommunications, the available bandwidth is reduced by several orders of magnitude. Moreover, the low speed of sound causes multipath propagation to stretch over time delay intervals of tens or hundreds of milliseconds, as well as significant Doppler shifts and spreading. Often acoustic communication systems are not limited by noise, but by reverberation and time variability beyond the capability of receiver algorithms. The fidelity of underwater communication links can be greatly improved by the use of hydrophone arrays, which allow processing techniques such as adaptive beamforming and diversity combining. === Underwater navigation and tracking === Underwater navigation and tracking is a common requirement for exploration and work by divers, ROV, autonomous underwater vehicles (AUV), crewed submersibles and submarines alike. Unlike most radio signals which are quickly absorbed, sound propagates far underwater and at a rate that can be precisely measured or estimated. It can thus be used to measure distances between a tracked target and one or multiple reference of baseline stations precisely, and triangulate the position of the target, sometimes with centimeter accuracy. Starting in the 1960s, this has given rise to underwater acoustic positioning systems which are now widely used. === Seismic exploration === Seismic exploration involves the use of low frequency sound (< 100 Hz) to probe deep into the seabed. Despite the relatively poor resolution due to their long wavelength, low frequency sounds are preferred because high frequencies are heavily attenuated when they travel through the seabed. Sound sources used include airguns, vibroseis and explosives. === Weather and climate observation === Acoustic sensors can be used to monitor the sound made by wind and precipitation. For example, an acoustic rain gauge is described by Nystuen. Lightning strikes can also be detected. 
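The positioning systems described above, as well as the echo sounding and acoustic thermometry applications discussed in the following section, all rest on converting measured travel times into distances using an (approximately known) sound speed. A minimal sketch with illustrative numbers:

```python
C_SEAWATER = 1500.0   # assumed mean sound speed (m/s); real systems use measured profiles

def one_way_range(travel_time_s, c=C_SEAWATER):
    """Range from a one-way travel time, e.g. from a transponder to a receiver."""
    return c * travel_time_s

def echo_depth(two_way_time_s, c=C_SEAWATER):
    """Depth from a two-way echo-sounder travel time (the pulse travels down and back)."""
    return c * two_way_time_s / 2.0

print(f"one-way 0.80 s  -> range ~ {one_way_range(0.80):.0f} m")
print(f"two-way 5.33 s  -> depth ~ {echo_depth(5.33):.0f} m")
```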
Acoustic thermometry of ocean climate (ATOC) uses low frequency sound to measure the global ocean temperature. === Acoustical oceanography === Acoustical oceanography is the use of underwater sound to study the sea, its boundaries and its contents. ==== History ==== Interest in developing echo ranging systems began in earnest following the sinking of the RMS Titanic in 1912. By sending a sound wave ahead of a ship, the theory went, a return echo bouncing off the submerged portion of an iceberg should give early warning of collisions. By directing the same type of beam downwards, the depth to the bottom of the ocean could be calculated. The first practical deep-ocean echo sounder was invented by Harvey C. Hayes, a U.S. Navy physicist. For the first time, it was possible to create a quasi-continuous profile of the ocean floor along the course of a ship. The first such profile was made by Hayes on board the U.S.S. Stewart, a Navy destroyer that sailed from Newport to Gibraltar between June 22 and 29, 1922. During that week, 900 deep-ocean soundings were made. Using a refined echo sounder, the German survey ship Meteor made several passes across the South Atlantic from the equator to Antarctica between 1925 and 1927, taking soundings every 5 to 20 miles. Their work created the first detailed map of the Mid-Atlantic Ridge. It showed that the Ridge was a rugged mountain range, and not the smooth plateau that some scientists had envisioned. Since that time, both naval and research vessels have operated echo sounders almost continuously while at sea. Important contributions to acoustical oceanography have been made by: Leonid Brekhovskikh Walter Munk Herman Medwin John L. Spiesberger C.C. Leroy David E. Weston D. Van Holliday Charles Greenlaw ==== Equipment used ==== The earliest and most widespread use of sound and sonar technology to study the properties of the sea is the use of a rainbow echo sounder to measure water depth. Such sounders were used to map many miles of the Santa Barbara Harbor ocean floor until 1993. Fathometers measure water depth by electronically sending sound pulses from a ship and receiving the sound waves that bounce back from the bottom of the ocean. A paper chart moves through the fathometer and is calibrated to record the depth. The development of high-resolution sonars in the second half of the 20th century made it possible not just to detect underwater objects but to classify and even image them. Electronic sensors and cameras are now attached to remotely operated vehicles (ROVs) deployed from ships or robot submarines, giving oceanographers clear and precise images. Images can also be produced by sonars from sound reflected off the ocean surroundings. Sound waves often reflect off animals, providing information that can support more detailed studies of animal behaviour. === Marine biology === Due to its excellent propagation properties, underwater sound is used as a tool to aid the study of marine life, from microplankton to the blue whale. Echo sounders are often used to provide data on marine life abundance, distribution, and behavior. Echo sounders (hydroacoustics) are also used to determine fish location, quantity, size, and biomass. Acoustic telemetry is also used for monitoring fish and marine wildlife.
An acoustic transmitter is attached to the fish (sometimes internally) while an array of receivers listen to the information conveyed by the sound wave. This enables the researchers to track the movements of individuals in a small-medium scale. Pistol shrimp create sonoluminescent cavitation bubbles that reach up to 5,000 K (4,700 °C) === Particle physics === A neutrino is a fundamental particle that interacts very weakly with other matter. For this reason, it requires detection apparatus on a very large scale, and the ocean is sometimes used for this purpose. In particular, it is thought that ultra-high energy neutrinos in seawater can be detected acoustically. === Other applications === Other applications include: rain rate measurement wind speed measurement global thermometry monitoring of ocean-atmospheric gas exchange Surveillance Towed Array Sensor System Acoustic Doppler current profiler for water speed measurement Acoustic camera Liquid sound Passive acoustic monitoring == See also == Bioacoustics – Study of sound relating to biology Cambridge Interferometer – a radio telescope interferometer built in the early 1950s to the west of Cambridge, UKPages displaying wikidata descriptions as a fallback Echo sounder – Measuring the depth of water by transmitting sound waves into water and timing the returnPages displaying short descriptions of redirect targets Fisheries acoustics Ocean exploration – Part of oceanography describing the exploration of ocean surfaces Ocean Tracking Network – Global network research and monitoring efforts to study fish migration Refraction (sound) – Change of direction of propagation due to variation of velocity Sonar – Acoustic sensing method Underwater acoustic positioning system – System for tracking and navigation of underwater vehicles or divers using acoustic signals Underwater acoustic communication – Wireless technique of sending and receiving messages through water Underwater Audio, an electronics company == Notes == == References == === Bibliography === Garrison, Tom S. (1 August 2012). Essentials of Oceanography. Cengage Learning. ISBN 978-0-8400-6155-3. Kunzig, Robert (17 October 2000). Mapping the Deep: The Extraordinary Story of Ocean Science. W. W. Norton & Company. ISBN 978-0-393-34535-3. Stewart, Robert H. (September 2009). Introduction to Physical Oceanography. University Press of Florida. ISBN 978-1-61610-045-2. == Further reading == Quality assurance of hydroacoustic surveys: the repeatability of fish-abundance and biomass estimates in lakes within and between hydroacoustic systems (free link to document) Hydroacoustics as a tool for assessing fish biomass and size distribution associated with discrete shallow water estuarine habitats in Louisiana Acoustic assessment of squid stocks Summary of the use of hydroacoustics for quantifying the escapement of adult salmonids (Oncorhynchus and Salmo spp.) in rivers. Ransom, B.H., S.V. Johnston, and T.W. Steig. 1998. Presented at International Symposium and Workshop on Management and Ecology of River Fisheries, University of Hull, England, 30 March-3 April 1998 Multi-frequency acoustic assessment of fisheries and plankton resources. Torkelson, T.C., T.C. Austin, and P.H. Weibe. 1998. Presented at the 135th Meeting of the Acoustical Society of America and the 16th Meeting of the International Congress of Acoustics, Seattle, Washington. 
Acoustics Unpacked A great reference for freshwater hydroacoustics for resource assessment Inter-Calibration of Scientific Echosounders in the Great Lakes Hydroacoustic Evaluation of Spawning Red Hind Aggregations Along the Coast of Puerto Rico in 2002 and 2003 Feasibility Assessment of Split-Beam Hydroacoustic Techniques for Monitoring Adult Shortnose Sturgeon in the Delaware River Categorising Salmon Migration Behaviour Using Characteristics of Split-beam Acoustic Data Evaluation of Methods to Estimate Lake Herring Spawner Abundance in Lake Superior Estimating Sockeye Salmon Smolt Flux and Abundance with Side-Looking Sonar Herring Research: Using Acoustics to Count Fish. Hydroacoustic Applications in Lake, River and Marine environments for study of plankton, fish, vegetation, substrate or seabed classification, and bathymetry. Hydroacoustics: Rivers (in: Salmonid Field Protocols Handbook: Chapter 4) Hydroacoustics: Lakes and Reservoirs (in: Salmonid Field Protocols Handbook: Chapter 5) PAMGUARD: An Open-Source Software Community Developing Marine Mammal Acoustic Detection and Localisation Software to Benefit the Marine Environment; https://web.archive.org/web/20070904035315/http://www.pamguard.org/home.shtml == External links == Ultrasonics and Underwater Acoustics Monitoring the global ocean through underwater acoustics ASA Underwater Acoustics Technical Committee An Ocean of Sound Underwater Acoustic Communications Acoustic Communications Group at the Woods Hole Oceanographic Institution Sound in the Sea SFSU Underwater Acoustics Research Group Discovery of Sound in the Sea Marine acoustics research
Wikipedia/Acoustical_oceanography
In fluid dynamics, an eddy is the swirling of a fluid and the reverse current created when the fluid is in a turbulent flow regime. The moving fluid creates a space devoid of downstream-flowing fluid on the downstream side of the object. Fluid behind the obstacle flows into the void creating a swirl of fluid on each edge of the obstacle, followed by a short reverse flow of fluid behind the obstacle flowing upstream, toward the back of the obstacle. This phenomenon is naturally observed behind large emergent rocks in swift-flowing rivers. An eddy is a movement of fluid that deviates from the general flow of the fluid. An example for an eddy is a vortex which produces such deviation. However, there are other types of eddies that are not simple vortices. For example, a Rossby wave is an eddy which is an undulation that is a deviation from mean flow, but does not have the local closed streamlines of a vortex. == Swirl and eddies in engineering == The propensity of a fluid to swirl is used to promote good fuel/air mixing in internal combustion engines. In fluid mechanics and transport phenomena, an eddy is not a property of the fluid, but a violent swirling motion caused by the position and direction of turbulent flow. == Reynolds number and turbulence == In 1883, scientist Osborne Reynolds conducted a fluid dynamics experiment involving water and dye, where he adjusted the velocities of the fluids and observed the transition from laminar to turbulent flow, characterized by the formation of eddies and vortices. Turbulent flow is defined as the flow in which the system's inertial forces are dominant over the viscous forces. This phenomenon is described by Reynolds number, a unit-less number used to determine when turbulent flow will occur. Conceptually, the Reynolds number is the ratio between inertial forces and viscous forces. The general form for the Reynolds number flowing through a tube of radius r (or diameter d): R e = 2 v ρ r μ = ρ v d μ {\displaystyle \mathrm {Re} ={\frac {2v\rho r}{\mu }}={\frac {\rho vd}{\mu }}} where v is the velocity of the fluid, ρ is its density, r is the radius of the tube, and μ is the dynamic viscosity of the fluid. A turbulent flow in a fluid is defined by the critical Reynolds number, for a closed pipe this works out to approximately R e c ≈ 2000. {\displaystyle \mathrm {Re} _{\text{c}}\approx 2000.} In terms of the critical Reynolds number, the critical velocity is represented as v c = R e c μ ρ d . 
{\displaystyle v_{\text{c}}={\frac {\mathrm {Re} _{\text{c}}\mu }{\rho d}}.} == Research and development == === Computational fluid dynamics === Linear eddy-viscosity models are turbulence models in which the Reynolds stresses, as obtained from a Reynolds averaging of the Navier–Stokes equations, are modelled by a linear constitutive relationship with the mean flow straining field, as: − ρ ⟨ u i u j ⟩ = 2 μ t S i , j − 2 3 ρ κ δ i , j {\displaystyle -\rho \langle u_{i}u_{j}\rangle =2\mu _{t}S_{i,j}-{\tfrac {2}{3}}\rho \kappa \delta _{i,j}} where μ t {\displaystyle \mu _{t}} is the coefficient termed turbulence "viscosity" (also called the eddy viscosity), κ = 1 2 ( ⟨ u 1 u 1 ⟩ + ⟨ u 2 u 2 ⟩ + ⟨ u 3 u 3 ⟩ ) {\displaystyle \kappa ={\tfrac {1}{2}}{\bigl (}\langle u_{1}u_{1}\rangle +\langle u_{2}u_{2}\rangle +\langle u_{3}u_{3}\rangle {\bigr )}} is the mean turbulent kinetic energy, and S i , j {\displaystyle S_{i,j}} is the mean strain rate. Note that the inclusion of 2 3 ρ κ δ i , j {\displaystyle {\tfrac {2}{3}}\rho \kappa \delta _{i,j}} in the linear constitutive relation is required for tensorial-algebra consistency when solving two-equation turbulence models (or any other turbulence model that solves a transport equation for κ {\displaystyle \kappa } ). === Hemodynamics === Hemodynamics is the study of blood flow in the circulatory system. Blood flow in straight sections of the arterial tree is typically laminar (high, directed wall stress), but branches and curvatures in the system cause turbulent flow. Turbulent flow in the arterial tree can cause a number of concerning effects, including atherosclerotic lesions, postsurgical neointimal hyperplasia, in-stent restenosis, vein bypass graft failure, transplant vasculopathy, and aortic valve calcification. === Industrial processes === Lift and drag properties of golf balls are customized by the manipulation of dimples along the surface of the ball, allowing the golf ball to travel further and faster in the air. Data from turbulent-flow phenomena have been used to model different transitions in fluid flow regimes, which are used to thoroughly mix fluids and increase reaction rates within industrial processes. === Fluid currents and pollution control === Oceanic and atmospheric currents transfer particles, debris, and organisms all across the globe. While the transport of organisms, such as phytoplankton, is essential for the preservation of ecosystems, oil and other pollutants are also mixed in the current flow and can carry pollution far from its origin. Eddy formations circulate trash and other pollutants into concentrated areas which researchers are tracking to improve clean-up and pollution prevention. The distribution and motion of plastics caused by eddy formations in natural water bodies can be predicted using Lagrangian transport models. Mesoscale ocean eddies play crucial roles in transferring heat poleward, as well as maintaining heat gradients at different depths. === Environmental flows === Modeling eddy development, as it relates to turbulence and fate transport phenomena, is vital to understanding environmental systems. By understanding the transport of both particulate and dissolved solids in environmental flows, scientists and engineers will be able to efficiently formulate remediation strategies for pollution events. Eddy formations play a vital role in the fate and transport of solutes and particles in environmental flows such as in rivers, lakes, oceans, and the atmosphere.
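A small numerical sketch of the linear eddy-viscosity relation given in the Computational fluid dynamics subsection above, evaluating −ρ⟨uiuj⟩ = 2μtSij − (2/3)ρκδij for an assumed mean velocity gradient; all numbers are illustrative.

```python
import numpy as np

def reynolds_stress(grad_u, mu_t, kappa, rho):
    """Linear eddy-viscosity closure: -rho<u_i u_j> = 2*mu_t*S_ij - (2/3)*rho*kappa*delta_ij,
    with S_ij the mean strain-rate tensor built from the mean velocity gradient."""
    S = 0.5 * (grad_u + grad_u.T)                  # mean strain rate
    return 2.0 * mu_t * S - (2.0 / 3.0) * rho * kappa * np.eye(3)

# Illustrative values: a simple shear du/dy = 0.5 1/s and a water-like density.
grad_u = np.array([[0.0, 0.5, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
tau = reynolds_stress(grad_u, mu_t=1.0e-2, kappa=1.0e-3, rho=1000.0)
print(np.round(tau, 4))
```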
Upwelling in stratified coastal estuaries warrant the formation of dynamic eddies which distribute nutrients out from beneath the boundary layer to form plumes. Shallow waters, such as those along the coast, play a complex role in the transport of nutrients and pollutants due to the proximity of the upper-boundary driven by the wind and the lower-boundary near the bottom of the water body. == Mesoscale ocean eddies == Eddies are common in the ocean, and range in diameter from centimeters to hundreds of kilometers. The smallest scale eddies may last for a matter of seconds, while the larger features may persist for months to years. Eddies that are between about 10 and 500 km (6 and 300 miles) in diameter and persist for periods of days to months are known in oceanography as mesoscale eddies. Mesoscale eddies can be split into two categories: static eddies, caused by flow around an obstacle (see animation), and transient eddies, caused by baroclinic instability. When the ocean contains a sea surface height gradient this creates a jet or current, such as the Antarctic Circumpolar Current. This current as part of a baroclinically unstable system meanders and creates eddies (in much the same way as a meandering river forms an oxbow lake). These types of mesoscale eddies have been observed in many major ocean currents, including the Gulf Stream, the Agulhas Current, the Kuroshio Current, and the Antarctic Circumpolar Current, amongst others. Mesoscale ocean eddies are characterized by currents that flow in a roughly circular motion around the center of the eddy. The sense of rotation of these currents may either be cyclonic or anticyclonic (such as Haida Eddies). Oceanic eddies are also usually made of water masses that are different from those outside the eddy. That is, the water within an eddy usually has different temperature and salinity characteristics to the water outside the eddy. There is a direct link between the water mass properties of an eddy and its rotation. Warm eddies rotate anti-cyclonically, while cold eddies rotate cyclonically. Because eddies may have a vigorous circulation associated with them, they are of concern to naval and commercial operations at sea. Further, because eddies transport anomalously warm or cold water as they move, they have an important influence on heat transport in certain parts of the ocean. === Influences on apex predators === The sub-tropical Northern Atlantic is known to have both cyclonic and anticyclonic eddies that are associated with high surface chlorophyll and low surface chlorophyll, respectively. The presence of chlorophyll and higher levels of chlorophyll allows this region to support higher biomass of phytoplankton, as well as, supported by areas of increased vertical nutrient fluxes and transportation of biological communities. This area of the Atlantic is also thought to be an ocean desert, which creates an interesting paradox due to it hosting a variety of large pelagic fish populations and apex predators. These mesoscale eddies have shown to be beneficial in further creating ecosystem-based management for food web models to better understand the utilization of these eddies by both the apex predators and their prey. Gaube et al. (2018), used “Smart” Position or Temperature Transmitting tags (SPOT) and Pop-Up Satellite Archival Transmitting tags (PSAT) to track the movement and diving behavior of two female white sharks (Carcharodon carcharias) within the eddies. 
The eddies were defined using sea surface height (SSH), with contours based on the horizontal speed-based radius scale. This study found that the white sharks dove in both cyclonic and anticyclonic eddies but favored the anticyclonic eddies, which recorded three times as many dives as the cyclonic eddies. Additionally, in the Gulf Stream eddies, the anticyclonic eddies were 57% more common and had more dives and deeper dives than the open ocean eddies and Gulf Stream cyclonic eddies. Within these anticyclonic eddies, the isotherm was displaced 50 meters downward, allowing warmer water to penetrate deeper into the water column. This warmer water displacement may allow the white sharks to make longer dives without the added energetic cost of thermal regulation in the cooler cyclones. Even though these anticyclonic eddies resulted in lower levels of chlorophyll in comparison to the cyclonic eddies, the warmer waters at depth may allow for a deeper mixed layer and a higher concentration of diatoms, which in turn results in higher rates of primary productivity. Furthermore, prey populations may be more broadly distributed within these eddies, attracting the larger female sharks to forage in the mesopelagic zone. This diving pattern may follow a diel vertical migration, but without better estimates of prey biomass in this zone such conclusions remain circumstantial. Biomass in the mesopelagic zone is still understudied, so the biomass of fish within this layer may be underestimated. A more accurate measurement of this biomass could benefit the commercial fishing industry by identifying additional fishing grounds in this region. Moreover, a better understanding of this open-ocean region, and of how removing fish from it may affect the pelagic food web, is crucial both for the fish populations and apex predators that rely on this food source and for developing better ecosystem-based management plans. == See also == Vortex Eddy pumping - component of vertical motion in eddies relevant for biology and biogeochemistry Eddy diffusion Haida Eddies Irminger Rings Reynolds number - a dimensionless constant used to predict the onset of turbulent flow Reynolds experiment Kármán vortex street Whirlpool Whirlwind River eddies in whitewater Wake turbulence Computational fluid dynamics Laminar flow Hemodynamics Modons, or dipole eddy pairs. == References ==
Wikipedia/Eddy_(fluid_dynamics)
In fluid dynamics, the Coriolis–Stokes force is a forcing of the mean flow in a rotating fluid due to interaction of the Coriolis effect and wave-induced Stokes drift. This force acts on water independently of the wind stress. This force is named after Gaspard-Gustave Coriolis and George Gabriel Stokes, two nineteenth-century scientists. Important initial studies into the effects of the Earth's rotation on the wave motion – and the resulting forcing effects on the mean ocean circulation – were done by Ursell & Deacon (1950), Hasselmann (1970) and Pollard (1970). The Coriolis–Stokes forcing on the mean circulation in an Eulerian reference frame was first given by Hasselmann (1970): ρ f × u S , {\displaystyle \rho {\boldsymbol {f}}\times {\boldsymbol {u}}_{S},} to be added to the common Coriolis forcing ρ f × u . {\displaystyle \rho {\boldsymbol {f}}\times {\boldsymbol {u}}.} Here u {\displaystyle {\boldsymbol {u}}} is the mean flow velocity in an Eulerian reference frame and u S {\displaystyle {\boldsymbol {u}}_{S}} is the Stokes drift velocity – provided both are horizontal velocities (perpendicular to z ^ {\displaystyle {\hat {\boldsymbol {z}}}} ). Further ρ {\displaystyle \rho } is the fluid density, × {\displaystyle \times } is the cross product operator, f = f z ^ {\displaystyle {\boldsymbol {f}}=f{\hat {\boldsymbol {z}}}} where f = 2 Ω sin ⁡ ϕ {\displaystyle f=2\Omega \sin \phi } is the Coriolis parameter (with Ω {\displaystyle \Omega } the Earth's rotation angular speed and sin ⁡ ϕ {\displaystyle \sin \phi } the sine of the latitude) and z ^ {\displaystyle {\hat {\boldsymbol {z}}}} is the unit vector in the vertical upward direction (opposing the Earth's gravity). Since the Stokes drift velocity u S {\displaystyle {\boldsymbol {u}}_{S}} is in the wave propagation direction, and f {\displaystyle {\boldsymbol {f}}} is in the vertical direction, the Coriolis–Stokes forcing is perpendicular to the wave propagation direction (i.e. in the direction parallel to the wave crests). In deep water the Stokes drift velocity is u S = c ( k a ) 2 exp ⁡ ( 2 k z ) {\displaystyle {\boldsymbol {u}}_{S}={\boldsymbol {c}}\,(ka)^{2}\exp(2kz)} with c {\displaystyle {\boldsymbol {c}}} the wave's phase velocity, k {\displaystyle k} the wavenumber, a {\displaystyle a} the wave amplitude and z {\displaystyle z} the vertical coordinate (positive in the upward direction opposing the gravitational acceleration). == See also == Ekman layer Ekman transport == Notes == == References ==
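A brief numerical sketch of the deep-water Stokes drift profile and the magnitude of the resulting Coriolis–Stokes forcing, using the expressions above; the wave parameters and latitude are illustrative values.

```python
import math

def stokes_drift(z, a, k, c):
    """Deep-water Stokes drift speed u_S = c (k a)^2 exp(2 k z), with z <= 0 measured upward."""
    return c * (k * a) ** 2 * math.exp(2.0 * k * z)

# Illustrative deep-water wave: amplitude 1 m, wavelength 100 m, at 45 degrees latitude.
g, rho = 9.81, 1025.0
a = 1.0
k = 2.0 * math.pi / 100.0
c = math.sqrt(g / k)                                  # deep-water phase speed
f = 2.0 * 7.2921e-5 * math.sin(math.radians(45.0))    # Coriolis parameter

for z in (0.0, -5.0, -10.0, -20.0):
    uS = stokes_drift(z, a, k, c)
    force = rho * f * uS      # magnitude of rho * f x u_S per unit volume (N/m^3)
    print(f"z = {z:6.1f} m  u_S = {uS:6.3f} m/s  |Coriolis-Stokes force| = {force:8.5f} N/m^3")
```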
Wikipedia/Coriolis–Stokes_force
Biological oceanography is the study of how organisms affect and are affected by the physics, chemistry, and geology of the oceanographic system. Biological oceanography may also be referred to as ocean ecology, in which the root word of ecology is oikos (οἶκος), meaning ‘house’ or ‘habitat’ in Greek. With that in mind, it is no surprise that the main focus of biological oceanography is on the microorganisms within the ocean: how they are affected by their environment and how that, in turn, affects larger marine creatures and their ecosystem. Biological oceanography is similar to marine biology, but is different because of the perspective used to study the ocean. Biological oceanography takes a bottom-up approach (in terms of the food web), while marine biology studies the ocean from a top-down perspective. Biological oceanography mainly focuses on the ecosystem of the ocean with an emphasis on plankton: their diversity (morphology, nutritional sources, motility, and metabolism); their productivity and how that plays a role in the global carbon cycle; and their distribution (predation and life cycle). == History == In 325 BC, Pytheas of Massalia, a Greek geographer, explored much of the coast of England and Norway and developed the means of determining latitude from the declination of the North Star. His account of tides is also one of the earliest to suggest a relationship between them and the moon. This relationship was later developed by the English monk Bede in De Temporum Ratione (The Reckoning of Time) around 700 AD. Understanding of the ocean began with general exploration and voyaging for trade. Some notable events closer to our time include Prince Henry the Navigator's ocean exploration in the 1400s. In 1513, Ponce de Leon described the Florida Current. In 1674, Robert Boyle investigated the relationship between salinity, temperature, and pressure in the depths of the ocean. Captain James Cook's voyages were responsible for the extensive data collection on geography, geology, biota, currents, tides, and water temperatures of the Atlantic and Pacific oceans in the 1760s and 1770s. In 1820, Alexander Marcet noted the varying chemical composition of seawater in the different oceans. Not long after, in 1843, Edward Forbes, a British naturalist, stated that marine organisms could not exist deeper than 300 fathoms (even though organisms had already been collected from much greater depths, many followed Forbes' influence). Forbes' theory was finally widely recognized as incorrect when a submarine cable lifted from a depth of 1830 m was found covered in animals. This finding set in motion the plans for the Challenger Expedition. The Challenger Expedition was pivotal to biological oceanography and oceanography in general. The Challenger Expedition was headed by Charles Wyville Thomson in 1872–1876. The expedition also included two other naturalists, Henry N. Moseley and John Murray. Before the expedition, the ocean was, although interesting to many, considered an unpredictable and mostly lifeless body of water, and the expedition forced many to rethink that stance. The expedition was undertaken at the behest of the Royal Society to see whether it would be possible to lay cables at the bottom of the ocean. The expedition also brought the equipment to collect data about the biological, chemical, and geological properties of the ocean in a systematic way.
They mapped the oceanic sediment and collected data. The data collected in this voyage proved that there was life in deep waters (5500 meters) and that the composition of water in the ocean is consistent. The success of the Challenger Expedition led to many more expeditions by the Germans, French, US, and other British explorers. == Motivation == Oceans occupy about 71% of the Earth's surface. Whilst the average depth of the oceans is about 3800 m, the deepest parts are almost 11000 m. The marine environment has a total volume (approximately 1370 million km3) that provides roughly 300 times more living space than the land and freshwater environments combined. It is thought that the earliest organisms originated in the ancient oceans, long before any forms of life appeared on land. Ocean biology is dominated by organisms that are fundamentally different from organisms on land, and the time scales of the ocean are much different from those of the atmosphere (whilst the atmosphere mixes globally in about 3 weeks, the ocean can take 1000 years). For these reasons we cannot make assumptions about ocean life based on what we know from land and atmospheric models. The range of diversity of life in the ocean is one of the main motivations behind the continued study of biological oceanography. Such a range in diversity means there is a need for a corresponding range of equipment and tools to study it. With ocean organisms being much more inaccessible and not easily observable (relative to terrestrial organisms), there is a slower growth of knowledge and a consistent need for further exploration and study. The second main motivation behind the continued study of biological oceanography is climate change. Biological oceanography ties closely with physical and chemical oceanography, and the details we learn from biological oceanography inform the bigger picture and help us build models of larger-scale processes. Such models are even more critical when the global environment is changing at an unprecedented rate. There are global patterns in environmental conditions, such as changes in pH, temperature, salinity, and CO2, but not everywhere sees the same change. The ocean makes the earth habitable through regulation of the Earth's climate and through processes such as primary production, which provides oxygen as a byproduct. Biology is central to facilitating some of these processes, but with climate change and human impacts the ocean environment is constantly changing, and so calls for consistent and continued research. Some of the main questions that biological oceanographers seek to answer include: what sorts of organisms inhabit different sectors and depths of the ocean, and why? A lot of biological oceanographic research studies the production of organic matter by ocean life and examines what factors affect its growth and, as a result, the production rates of organic matter. Some biological oceanographers look at the relationships between organisms themselves, all the way from microbes to whales, and some look at the relationships between certain organisms and the chemical or physical characteristics of the ocean. Biological oceanographers also seek to answer questions with a more direct and immediate impact on humans – such as what we can expect to harvest from the sea, and how the weather, seasons, or recent natural disasters may affect the fisheries' harvest. One of the main questions now and for the future is how climate change will affect the ocean biota.
== See also == Marine life – Organisms that live in salt water Marine microorganism – Any life form too small for the naked human eye to see that lives in a marine environment Oceanography – Study of physical, chemical, and biological processes in the ocean Phytoplankton – Autotrophic members of the plankton ecosystem Zooplankton – Heterotrophic protistan or metazoan members of the plankton ecosystem Physical Oceanography – Study of physical conditions and processes within the ocean Chemical Oceanography – Chemistry of oceans and seas Climate Change – Human-caused changes to climate on Earth Marine plastic pollution – Environmental pollution by plastics Planktology – Study of plankton == References == == External links == Media related to Biological oceanography at Wikimedia Commons
Wikipedia/Biological_oceanography
In fluid dynamics, the mild-slope equation describes the combined effects of diffraction and refraction for water waves propagating over bathymetry and due to lateral boundaries—like breakwaters and coastlines. It is an approximate model, deriving its name from being originally developed for wave propagation over mild slopes of the sea floor. The mild-slope equation is often used in coastal engineering to compute the wave-field changes near harbours and coasts. The mild-slope equation models the propagation and transformation of water waves, as they travel through waters of varying depth and interact with lateral boundaries such as cliffs, beaches, seawalls and breakwaters. As a result, it describes the variations in wave amplitude, or equivalently wave height. From the wave amplitude, the amplitude of the flow velocity oscillations underneath the water surface can also be computed. These quantities—wave amplitude and flow-velocity amplitude—may subsequently be used to determine the wave effects on coastal and offshore structures, ships and other floating objects, sediment transport and resulting bathymetric changes of the sea bed and coastline, mean flow fields and mass transfer of dissolved and floating materials. Most often, the mild-slope equation is solved by computer using methods from numerical analysis. A first form of the mild-slope equation was developed by Eckart in 1952, and an improved version—the mild-slope equation in its classical formulation—has been derived independently by Juri Berkhoff in 1972. Thereafter, many modified and extended forms have been proposed, to include the effects of, for instance: wave–current interaction, wave nonlinearity, steeper sea-bed slopes, bed friction and wave breaking. Also parabolic approximations to the mild-slope equation are often used, in order to reduce the computational cost. In case of a constant depth, the mild-slope equation reduces to the Helmholtz equation for wave diffraction. == Formulation for monochromatic wave motion == For monochromatic waves according to linear theory—with the free surface elevation given as ζ ( x , y , t ) = ℜ { η ( x , y ) e − i ω t } {\displaystyle \zeta (x,y,t)=\Re \left\{\eta (x,y)\,e^{-i\omega t}\right\}} and the waves propagating on a fluid layer of mean water depth h ( x , y ) {\displaystyle h(x,y)} —the mild-slope equation is: ∇ ⋅ ( c p c g ∇ η ) + k 2 c p c g η = 0 , {\displaystyle \nabla \cdot \left(c_{p}\,c_{g}\,\nabla \eta \right)\,+\,k^{2}\,c_{p}\,c_{g}\,\eta \,=\,0,} where: η ( x , y ) {\displaystyle \eta (x,y)} is the complex-valued amplitude of the free-surface elevation ζ ( x , y , t ) ; {\displaystyle \zeta (x,y,t);} ( x , y ) {\displaystyle (x,y)} is the horizontal position; ω {\displaystyle \omega } is the angular frequency of the monochromatic wave motion; i {\displaystyle i} is the imaginary unit; ℜ { ⋅ } {\displaystyle \Re \{\cdot \}} means taking the real part of the quantity between braces; ∇ {\displaystyle \nabla } is the horizontal gradient operator; ∇ ⋅ {\displaystyle \nabla \cdot } is the divergence operator; k {\displaystyle k} is the wavenumber; c p {\displaystyle c_{p}} is the phase speed of the waves and c g {\displaystyle c_{g}} is the group speed of the waves. 
The phase and group speed depend on the dispersion relation, and are derived from Airy wave theory as: ω 2 = g k tanh ( k h ) , c p = ω k and c g = 1 2 c p [ 1 + k h 1 − tanh 2 ⁡ ( k h ) tanh ( k h ) ] {\displaystyle {\begin{aligned}\omega ^{2}&=\,g\,k\,\tanh \,(kh),\\c_{p}&=\,{\frac {\omega }{k}}\quad {\text{and}}\\c_{g}&=\,{\frac {1}{2}}\,c_{p}\,\left[1\,+\,kh\,{\frac {1-\tanh ^{2}(kh)}{\tanh \,(kh)}}\right]\end{aligned}}} where g {\displaystyle g} is Earth's gravity and tanh {\displaystyle \tanh } is the hyperbolic tangent. For a given angular frequency ω {\displaystyle \omega } , the wavenumber k {\displaystyle k} has to be solved from the dispersion equation, which relates these two quantities to the water depth h {\displaystyle h} . == Transformation to an inhomogeneous Helmholtz equation == Through the transformation ψ = η c p c g , {\displaystyle \psi \,=\,\eta \,{\sqrt {c_{p}\,c_{g}}},} the mild slope equation can be cast in the form of an inhomogeneous Helmholtz equation: Δ ψ + k c 2 ψ = 0 with k c 2 = k 2 − Δ ( c p c g ) c p c g , {\displaystyle \Delta \psi \,+\,k_{c}^{2}\,\psi \,=\,0\qquad {\text{with}}\qquad k_{c}^{2}\,=\,k^{2}\,-\,{\frac {\Delta \left({\sqrt {c_{p}\,c_{g}}}\right)}{\sqrt {c_{p}\,c_{g}}}},} where Δ {\displaystyle \Delta } is the Laplace operator. == Propagating waves == In spatially coherent fields of propagating waves, it is useful to split the complex amplitude η ( x , y ) {\displaystyle \eta (x,y)} in its amplitude and phase, both real valued: η ( x , y ) = a ( x , y ) e i θ ( x , y ) , {\displaystyle \eta (x,y)\,=\,a(x,y)\,e^{i\,\theta (x,y)},} where a = | η | {\displaystyle a=|\eta |\,} is the amplitude or absolute value of η {\displaystyle \eta \,} and θ = arg ⁡ { η } {\displaystyle \theta =\arg\{\eta \}\,} is the wave phase, which is the argument of η . {\displaystyle \eta .} This transforms the mild-slope equation in the following set of equations (apart from locations for which ∇ θ {\displaystyle \nabla \theta } is singular): ∂ κ y ∂ x − ∂ κ x ∂ y = 0 with κ x = ∂ θ ∂ x and κ y = ∂ θ ∂ y , κ 2 = k 2 + ∇ ⋅ ( c p c g ∇ a ) c p c g a with κ = κ x 2 + κ y 2 and ∇ ⋅ ( v g E ) = 0 with E = 1 2 ρ g a 2 and v g = c g κ k , {\displaystyle {\begin{aligned}{\frac {\partial \kappa _{y}}{\partial {x}}}\,-\,{\frac {\partial \kappa _{x}}{\partial {y}}}\,=\,0\qquad &{\text{ with }}\kappa _{x}\,=\,{\frac {\partial \theta }{\partial {x}}}{\text{ and }}\kappa _{y}\,=\,{\frac {\partial \theta }{\partial {y}}},\\\kappa ^{2}\,=\,k^{2}\,+\,{\frac {\nabla \cdot \left(c_{p}\,c_{g}\,\nabla a\right)}{c_{p}\,c_{g}\,a}}\qquad &{\text{ with }}\kappa \,=\,{\sqrt {\kappa _{x}^{2}\,+\,\kappa _{y}^{2}}}\quad {\text{ and}}\\\nabla \cdot \left({\boldsymbol {v}}_{g}\,E\right)\,=\,0\qquad &{\text{ with }}E\,=\,{\frac {1}{2}}\,\rho \,g\,a^{2}\quad {\text{and}}\quad {\boldsymbol {v}}_{g}\,=\,c_{g}\,{\frac {\boldsymbol {\kappa }}{k}},\end{aligned}}} where E {\displaystyle E} is the average wave-energy density per unit horizontal area (the sum of the kinetic and potential energy densities), κ {\displaystyle {\boldsymbol {\kappa }}} is the effective wavenumber vector, with components ( κ x , κ y ) , {\displaystyle (\kappa _{x},\kappa _{y}),} v g {\displaystyle {\boldsymbol {v}}_{g}} is the effective group velocity vector, ρ {\displaystyle \rho } is the fluid density, and g {\displaystyle g} is the acceleration by the Earth's gravity. 
The last equation shows that wave energy is conserved in the mild-slope equation, and that the wave energy E {\displaystyle E} is transported in the κ {\displaystyle {\boldsymbol {\kappa }}} -direction normal to the wave crests (in this case of pure wave motion without mean currents). The effective group speed | v g | {\displaystyle |{\boldsymbol {v}}_{g}|} is different from the group speed c g . {\displaystyle c_{g}.} The first equation states that the effective wavenumber κ {\displaystyle {\boldsymbol {\kappa }}} is irrotational, a direct consequence of the fact it is the derivative of the wave phase θ {\displaystyle \theta } , a scalar field. The second equation is the eikonal equation. It shows the effects of diffraction on the effective wavenumber: only for more-or-less progressive waves, with | ∇ ⋅ ( c p c g ∇ a ) | ≪ k 2 c p c g a , {\displaystyle \left|\nabla \cdot (c_{p}\,c_{g}\,\nabla a)\right|\ll k^{2}\,c_{p}\,c_{g}\,a,} the splitting into amplitude a {\displaystyle a} and phase θ {\displaystyle \theta } leads to consistent-varying and meaningful fields of a {\displaystyle a} and κ {\displaystyle {\boldsymbol {\kappa }}} . Otherwise, κ2 can even become negative. When the diffraction effects are totally neglected, the effective wavenumber κ is equal to k {\displaystyle k} , and the geometric optics approximation for wave refraction can be used. == Derivation of the mild-slope equation == The mild-slope equation can be derived by the use of several methods. Here, we will use a variational approach. The fluid is assumed to be inviscid and incompressible, and the flow is assumed to be irrotational. These assumptions are valid ones for surface gravity waves, since the effects of vorticity and viscosity are only significant in the Stokes boundary layers (for the oscillatory part of the flow). Because the flow is irrotational, the wave motion can be described using potential flow theory. The following time-dependent equations give the evolution of the free-surface elevation ζ ( x , y , t ) {\displaystyle \zeta (x,y,t)} and free-surface potential ϕ ( x , y , t ) : {\displaystyle \phi (x,y,t):} g ∂ ζ ∂ t + ∇ ⋅ ( c p c g ∇ φ ) + ( k 2 c p c g − ω 0 2 ) φ = 0 , ∂ φ ∂ t + g ζ = 0 , with ω 0 2 = g k tanh ⁡ ( k h ) . {\displaystyle {\begin{aligned}g\,{\frac {\partial \zeta }{\partial {t}}}&+\nabla \cdot \left(c_{p}c_{g}\,\nabla \varphi \right)+\left(k^{2}c_{p}c_{g}-\omega _{0}^{2}\right)\varphi =0,\\{\frac {\partial \varphi }{\partial {t}}}&+g\zeta =0,\quad {\text{with}}\quad \omega _{0}^{2}=gk\tanh(kh).\end{aligned}}} From the two evolution equations, one of the variables φ {\displaystyle \varphi } or ζ {\displaystyle \zeta } can be eliminated, to obtain the time-dependent form of the mild-slope equation: − ∂ 2 ζ ∂ t 2 + ∇ ⋅ ( c p c g ∇ ζ ) + ( k 2 c p c g − ω 0 2 ) ζ = 0 , {\displaystyle -{\frac {\partial ^{2}\zeta }{\partial t^{2}}}+\nabla \cdot \left(c_{p}c_{g}\,\nabla \zeta \right)+\left(k^{2}c_{p}c_{g}-\omega _{0}^{2}\right)\zeta =0,} and the corresponding equation for the free-surface potential is identical, with ζ {\displaystyle \zeta } replaced by φ . {\displaystyle \varphi .} The time-dependent mild-slope equation can be used to model waves in a narrow band of frequencies around ω 0 . 
{\displaystyle \omega _{0}.} === Monochromatic waves === Consider monochromatic waves with complex amplitude η ( x , y ) {\displaystyle \eta (x,y)} and angular frequency ω {\displaystyle \omega } : ζ ( x , y , t ) = ℜ { η ( x , y ) e − i ω t } , {\displaystyle \zeta (x,y,t)=\Re \left\{\eta (x,y)\,e^{-i\omega t}\right\},} with ω {\displaystyle \omega } and ω 0 {\displaystyle \omega _{0}} chosen equal to each other, ω = ω 0 . {\displaystyle \omega =\omega _{0}.} Using this in the time-dependent form of the mild-slope equation recovers the classical mild-slope equation for time-harmonic wave motion: ∇ ⋅ ( c p c g ∇ η ) + k 2 c p c g η = 0. {\displaystyle \nabla \cdot \left(c_{p}\,c_{g}\,\nabla \eta \right)\,+\,k^{2}\,c_{p}\,c_{g}\,\eta \,=\,0.} == Applicability and validity of the mild-slope equation == The standard mild-slope equation, without extra terms for bed slope and bed curvature, provides accurate results for the wave field over bed slopes ranging from 0 to about 1/3. However, some subtle aspects, like the amplitude of reflected waves, can be completely wrong, even for slopes going to zero. This mathematical curiosity has little practical importance in general, since this reflection becomes vanishingly small for small bottom slopes. == Notes == == References == Dingemans, M. W. (1997), Water wave propagation over uneven bottoms, Advanced Series on Ocean Engineering, vol. 13, World Scientific, Singapore, ISBN 981-02-0427-2, OCLC 36126836, 2 Parts, 967 pages. Liu, P. L.-F. (1990), "Wave transformation", in B. Le Méhauté and D. M. Hanes (ed.), Ocean Engineering Science, The Sea, vol. 9A, Wiley Interscience, pp. 27–63, ISBN 0-471-52856-0 Mei, Chiang C. (1994), The applied dynamics of ocean surface waves, Advanced Series on Ocean Engineering, vol. 1, World Scientific, ISBN 9971-5-0789-7, 740 pages. Porter, D.; Chamberlain, P. G. (1997), "Linear wave scattering by two-dimensional topography", in J. N. Hunt (ed.), Gravity waves in water of finite depth, Advances in Fluid Mechanics, vol. 10, Computational Mechanics Publications, pp. 13–53, ISBN 1-85312-351-X Porter, D. (2003), "The mild-slope equations", Journal of Fluid Mechanics, 494: 51–63, Bibcode:2003JFM...494...51P, doi:10.1017/S0022112003005846, S2CID 121112316
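A small numerical sketch can make the relations above concrete. The Python code below (not part of the article) solves the dispersion relation ω² = gk tanh(kh) for k with Newton's method, evaluates the phase and group speeds that enter the mild-slope equation, and uses the energy-transport relation ∇·(v_g E) = 0 to estimate how the amplitude of a steadily propagating, normally incident wave changes as the depth decreases (c_g a² constant in one dimension, assuming no reflection). The wave period, depths and offshore amplitude are assumed values for illustration only.

```python
# Minimal sketch: linear dispersion, phase/group speed and 1-D shoaling
# implied by the mild-slope energy transport relation. Assumed inputs.
import math

g = 9.81  # gravitational acceleration [m/s^2]

def wavenumber(omega, h, tol=1e-12, itmax=50):
    """Solve omega^2 = g k tanh(k h) for k > 0 by Newton iteration."""
    k = omega ** 2 / g                      # deep-water first guess
    for _ in range(itmax):
        t = math.tanh(k * h)
        f = g * k * t - omega ** 2
        df = g * t + g * k * h * (1.0 - t ** 2)
        dk = f / df
        k -= dk
        if abs(dk) < tol:
            break
    return k

def speeds(omega, h):
    """Phase speed c_p and group speed c_g from the formulas above."""
    k = wavenumber(omega, h)
    t = math.tanh(k * h)
    cp = omega / k
    cg = 0.5 * cp * (1.0 + k * h * (1.0 - t ** 2) / t)
    return k, cp, cg

T = 10.0                      # wave period [s] (assumed)
omega = 2.0 * math.pi / T
a_deep = 1.0                  # amplitude at the offshore depth [m] (assumed)

k0, cp0, cg0 = speeds(omega, 50.0)     # offshore depth of 50 m (assumed)
for h in (50.0, 20.0, 10.0, 5.0):
    k, cp, cg = speeds(omega, h)
    a = a_deep * math.sqrt(cg0 / cg)   # shoaling amplitude from c_g a^2 = const
    print(f"h = {h:5.1f} m  k = {k:.4f} rad/m  c_p = {cp:5.2f} m/s  "
          f"c_g = {cg:5.2f} m/s  a = {a:5.2f} m")
```

As the group speed decreases towards the shore, the amplitude grows, which is the familiar shoaling behaviour contained in the transport equation.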
Wikipedia/Mild-slope_equation
The Princeton Ocean Model (POM) is a community general numerical model for ocean circulation that can be used to simulate and predict oceanic currents, temperatures, salinities and other water properties. == Development == The model code was originally developed at Princeton University (G. Mellor and Alan Blumberg) in collaboration with Dynalysis of Princeton (H. James Herring, Richard C. Patchen). The model incorporates the Mellor–Yamada turbulence scheme developed in the early 1970s by George Mellor and Ted Yamada; this turbulence sub-model is widely used by oceanic and atmospheric models. At the time, early computer ocean models such as the Bryan–Cox model (developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, GFDL; it later became the Modular Ocean Model, MOM) were aimed mostly at coarse-resolution simulations of the large-scale ocean circulation, so there was a need for a numerical model that could handle high-resolution coastal ocean processes. The Blumberg–Mellor model (which later became POM) thus included new features such as a free surface to handle tides, sigma vertical coordinates (i.e., terrain-following) to handle complex topographies and shallow regions, a curvilinear grid to better handle coastlines, and a turbulence scheme to handle vertical mixing. In the early 1980s the model was used primarily to simulate estuaries such as the Hudson–Raritan Estuary (by Leo Oey) and the Delaware Bay (Boris Galperin), but the first attempts to use a sigma coordinate model for basin-scale problems also started then, with the coarse-resolution model of the Gulf of Mexico (Blumberg and Mellor) and models of the Arctic Ocean (with the inclusion of ice-ocean coupling by Lakshmi Kantha and Sirpa Hakkinen). In the early 1990s, when the web and browsers were first being developed, POM became one of the first ocean model codes that were provided free of charge to users through the web. The establishment of the POM users group and its web support (by Tal Ezer) resulted in a continuous increase in the number of POM users, which grew from about a dozen U.S. users in the 1980s to over 1000 users in 2000 and over 4000 users by 2009; there are users from over 70 different countries. In the 1990s the usage of POM expanded to simulations of the Mediterranean Sea (Zavatarelli) and the first simulations with a sigma coordinate model of the entire Atlantic Ocean for climate research (Ezer). The development of the Mellor–Ezer optimal interpolation data assimilation scheme, which projects surface satellite data into deep layers, allowed the construction of the first ocean forecast systems for the Gulf Stream and the U.S. east coast, running operationally at NOAA's National Weather Service (Frank Aikman and others). Operational forecast systems for other regions such as the Great Lakes, the Gulf of Mexico (Oey), the Gulf of Maine (Huijie Xue) and the Hudson River (Blumberg) followed. For more information on applications of the model, see the searchable database of over 1800 POM-related publications. == Derivatives and other models == In the late 1990s and the 2000s many other terrain-following community ocean models were developed; some of their features can be traced back to features included in the original POM, while other features are additional numerical and parameterization improvements.
Several ocean models are direct descendants of POM, such as the commercial version of POM known as the estuarine and coastal ocean model (ECOM), the navy coastal ocean model (NCOM) and the finite-volume coastal ocean model (FVCOM). Recent developments in POM include a generalized coordinate system that combines sigma and z-level grids (Mellor and Ezer), inundation features that allow simulations of wetting and drying (e.g., flooding of land areas) (Oey), and coupling of ocean currents with surface waves (Mellor). Efforts to improve turbulent mixing also continue (Galperin, Kantha, Mellor and others). == Users' meetings == POM users' meetings were held every few years, and in recent years the meetings were extended to include other models and renamed the International Workshop on Modeling the Ocean (IWMO). List of meetings: 1. 1996, June 10–12, Princeton, NJ, USA (POM96) 2. 1998, February 17–19, Miami, FL, USA (POM98) 3. 1999, September 20–22, Bar Harbor, ME, USA (SigMod99) 4. 2001, August 20–22, Boulder, CO, USA (SigMod01) 5. 2003, August 4–6, Seattle, WA, USA (SigMod03) 6. 2009, February 23–26, Taipei, Taiwan (1st IWMO-2009) 7. 2010, May 24–26, Norfolk, VA, USA (2nd IWMO-2010) 8. 2011, June 6–9, Qingdao, China (3rd IWMO-2011) 9. 2012, May 21–24, Yokohama, Japan (4th IWMO-2012) 10. 2013, June 17–20, Bergen, Norway (5th IWMO-2013) 11. 2014, June 23–27, Halifax, Nova Scotia, Canada (6th IWMO-2014) 12. 2015, June 1–5, Canberra, Australia (7th IWMO-2015) 13. 2016, June 7–10, Bologna, Italy (8th IWMO-2016) 14. 2017, July 3–6, Seoul, South Korea (9th IWMO-2017) 15. 2018, June 25–28, Santos, Brazil (10th IWMO-2018) 16. 2019, June 17–20, Wuxi, China (11th IWMO-2019) 17. 2022, June 28–July 1, Ann Arbor, MI, USA (12th IWMO-2022) 18. 2023, June 27–30, Hamburg, Germany (13th IWMO-2023). Reviewed papers from the IWMO meetings are published by Ocean Dynamics in special issues (IWMO-2009 Part-I, IWMO-2009 Part-II, IWMO-2010, IWMO-2011, IWMO-2012, IWMO-2013, IWMO-2014). == References == == External links == POM-WEB page (registration and information) MPI-POM and Taiwan Ocean Prediction (TOP) Archived June 16, 2016, at the Wayback Machine
Wikipedia/Princeton_Ocean_Model
Argo is an international programme for researching the ocean. It uses profiling floats to observe temperature, salinity and currents. Recently it has observed bio-optical properties in the Earth's oceans. It has been operating since the early 2000s. The real-time data it provides support climate and oceanographic research. A special research interest is to quantify the ocean heat content (OHC). The Argo fleet consists of almost 4000 drifting "Argo floats" (as profiling floats used by the Argo program are often called) deployed worldwide. Each float weighs 20–30 kg. In most cases probes drift at a depth of 1000 metres. Experts call this the parking depth. Every 10 days, by changing their buoyancy, they dive to a depth of 2000 metres and then move to the sea-surface. As they move they measure conductivity and temperature profiles as well as pressure. Scientists calculate salinity and density from these measurements. Seawater density is important in determining large-scale motions in the ocean. Average current velocities at 1000 metres are directly measured by the distance and direction a float drifts while parked at that depth, which is determined by GPS or Argos system positions at the surface. The data is transmitted to shore via satellite, and is freely available to everyone, without restrictions. The Argo program is named after the Greek mythical ship Argo to emphasize the complementary relationship of Argo with the Jason satellite altimeters. Both the standard Argo floats and the 4 satellites launched so far to monitor changing sea-level all operate on a 10-day duty cycle. == International collaboration == The Argo program is a collaborative partnership of more than 30 nations from all continents (most shown on the graphic map in this article) that maintains a global array and provides a dataset anyone can use to explore the ocean environment. Argo is a component of the Global Ocean Observing System (GOOS), and is coordinated by the Argo Steering Team, an international body of scientists and technical experts that meets once per year. The Argo data stream is managed by the Argo Data Management Team. Argo is also supported by the Group on Earth Observations, and has been endorsed since its beginnings by the World Climate Research Programme's CLIVAR Project (Variability and predictability of the ocean-atmosphere system), and by the Global Ocean Data Assimilation Experiment (GODAE OceanView). == History == A program called Argo was first proposed at OceanObs 1999 which was a conference organised by international agencies with the aim of creating a coordinated approach to ocean observations. The original Argo prospectus was created by a small group of scientists, chaired by Dean Roemmich, who described a program that would have a global array of about 3000 floats in place by sometime in 2007. The 3000-float array was achieved in November 2007 and was global. The Argo Steering Team met for the first time in 1999 in Maryland (USA) and outlined the principles of global data sharing. The Argo Steering Team made a 10-year report to OceanObs-2009 and received suggestions on how the array might be improved. These suggestions included enhancing the array at high latitudes, in marginal seas (such as the Gulf of Mexico and the Mediterranean) and along the equator, improved observation of strong boundary currents (such as the Gulf Stream and Kuroshio), extension of observations into deep water and the addition of sensors for monitoring biological and chemical changes in the oceans. 
In November 2012 an Indian float in the Argo array gathered the one-millionth profile (twice the number collected by research vessels during all of the 20th century) an event that was reported in several press releases. As can be seen in the plot opposite, by early 2018 the Bio-Argo program is expanding rapidly. == Float design and operation == The critical capability of an Argo float is its ability to rise and descend in the ocean on a programmed schedule. The floats do this by changing their effective density. The density of any object is given by its mass divided by its volume. The Argo float keeps its mass constant, but by altering its volume, it changes its density. To do this, mineral oil is forced out of the float's pressure case and expands a rubber bladder at the bottom end of the float. As the bladder expands, the float becomes less dense than seawater and rises to the surface. Upon finishing its tasks at the surface, the float withdraws the oil and descends again. A handful of companies and organizations manufacture profiling floats used in the Argo program. APEX floats, made by Teledyne Webb Research, are the most common element of the current array. SOLO and SOLO-II floats (the latter use a reciprocating pump for buoyancy changes, unlike screw-driven pistons in other floats) were developed at Scripps Institution of Oceanography. Other types include the NINJA float, made by the Tsurumi Seiki Co. of Japan, and the ARVOR, DEEP-ARVOR & PROVOR floats developed by IFREMER in France, in industrial partnership with French Company nke instrumentation. Most floats use sensors made by Sea-Bird Scientific (https://www.seabird.com/) , which also makes a profiling float called Navis. A typical Argo float is a cylinder just over 1 metre long and 14 cm across with a hemispherical cap. Thus it has a minimum volume of about 16,600 cubic centimetres (cm3). At Ocean Station Papa in the Gulf of Alaska the temperature and salinity at the surface might be about 6°C and 32.55 parts per thousand giving a density of sea-water of 1.0256 g/cm3. At a depth of 2000 metres (pressure of 2000 decibars) the temperature might be 2°C and the salinity 34.58 parts per thousand. Thus, including the effect of pressure (water is slightly compressible) the density of sea-water is about 1.0369 g/cm3. The change in density divided by the deep density is 0.0109. The float has to match these densities if it is to reach 2000 metres depth and then rise to the surface. Since the density of the float is its mass divided by volume, it needs to change its volume by 0.0109 × 16,600 = 181 cm3 to drive that excursion; a small amount of that volume change is provided by the compressibility of the float itself, and excess buoyancy is required at the surface in order to keep the antenna above water. All Argo floats carry sensors to measure the temperature and salinity of the ocean as they vary with depth, but an increasing number of floats also carry other sensors, such as for measuring dissolved oxygen and ultimately other variables of biological and chemical interest such as chlorophyll, nutrients and pH. An extension to the Argo project called BioArgo is being developed and, when implemented, will add a biological and chemical component to this method of sampling the oceans. The antenna for satellite data collection is mounted at the top of the float which extends clear of the sea surface after it completes its ascent. 
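The volume-change arithmetic described above can be reproduced in a few lines. The sketch below simply repeats the calculation with the densities and float volume quoted in the text for Ocean Station Papa; it does not evaluate an equation of state.

```python
# Re-doing the buoyancy arithmetic from the text (densities are the values
# quoted above for Ocean Station Papa, not computed from an equation of state).
rho_surface = 1.0256    # surface density at 6 degC, 32.55 ppt [g/cm^3]
rho_2000db  = 1.0369    # density at 2000 dbar, 2 degC, 34.58 ppt [g/cm^3]
float_volume = 16600.0  # nominal float volume [cm^3]

relative_change = (rho_2000db - rho_surface) / rho_2000db
delta_volume = relative_change * float_volume

print(f"relative density change: {relative_change:.4f}")      # about 0.0109
print(f"required volume change : {delta_volume:.0f} cm^3")    # about 181 cm^3
```

As noted above, part of this volume change is offset by the compressibility of the float itself, and extra buoyancy is needed at the surface to keep the antenna clear of the water.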
The ocean is saline, hence an electrical conductor, so that radio communications from under the sea surface are not possible. Early in the program Argo floats exclusively used slow mono-directional satellite communications but the majority of floats being deployed in mid-2013 use rapid bi-directional communications. The result of this is that Argo floats now transmit much more data than was previously possible and they spend only about 20 minutes on the sea surface rather than 8–12 hours, greatly reducing problems such as grounding and bio-fouling. The average life span of Argo floats has increased greatly since the program began, first exceeding 4-year mean lifetime for floats deployed in 2005. Ongoing improvements should result in further extensions to 6 years and longer. As of June 2014, new types of floats were being tested to collect measurements much deeper than can be reached by standard Argo floats. These "Deep Argo" floats are designed to reach depths of 4000 or 6000 metres, versus 2000 metres for standard floats. This will allow a much greater volume of the ocean to be sampled. Such measurements are important for developing a comprehensive understanding of the ocean, such as trends in heat content. == Array design == The original plan advertised in the Argo prospectus called for a nearest-neighbour distance between floats, on average, of 3° latitude by 3° longitude. This allowed for higher resolution (in kilometres) at high latitudes, both north and south, and was considered necessary because of the decrease in the Rossby radius of deformation which governs the scale of oceanographic features, such as eddies. By 2007 this was largely achieved, but the target resolution has never yet been completely achieved in the deep southern ocean. Efforts are being made to complete the original plan in all parts of the world oceans but this is difficult in the deep Southern Ocean as deployment opportunities occur only very rarely. As mentioned in the history section, enhancements are now planned in the equatorial regions of the oceans, in boundary currents and in marginal seas. This requires that the total number of floats be increased from the original plan of 3000 floats to a 4000-float array. One consequence of the use of profiling floats to sample the ocean is that seasonal bias can be removed. The diagram opposite shows the count of all float profiles acquired each month by Argo south of 30°S (upper curve) from the start of the program to November 2012 compared with the same diagram for all other data available. The lower curve shows a strong annual bias with four times as many profiles being collected in austral summer than in austral winter. For the upper (Argo) plot, there is no bias apparent. == Data access == One of the critical features of the Argo model is that of global and unrestricted access to data in near real-time. When a float transmits a profile it is quickly converted to a format that can be inserted on the Global Telecommunications System (GTS). The GTS is operated by the World Meteorological Organisation, or WMO, specifically for the purpose of sharing data needed for weather forecasting. Thus all nations who are members of the WMO receive all Argo profiles within a few hours of the acquisition of the profile. Data are also made available through ftp and WWW access via two Argo Global Data Centres (or GDACs), one in France and one in the US. 
About 90% of all profiles acquired are made available to global access within 24 hours, with the remaining profiles becoming available soon thereafter. Using data acquired via the GTS or from the Argo Global Data Centres (GDACs) does require programming skills. The GDACs supply multi-profile files that are a native file format for Ocean DataView. For any day there are files with names like 20121106_prof.nc that are called multi-profile files. This example is a file specific to 6 November 2012 and contains all profiles in a single NetCDF file for one ocean basin. The GDACs identify three ocean basins, Atlantic, Indian and Pacific. Thus three multi-profile files will carry every Argo profile acquired on that specific day. A user who wants to explore Argo data but lacks programming skills might like to download the Argo Global Marine Atlas, an easy-to-use utility that allows the creation of products based on Argo data such as the salinity section shown above, but also horizontal maps of ocean properties, time series at any location etc. This Atlas also carries an "update" button that allows data to be updated periodically. The Argo Global Marine Atlas is maintained at the Scripps Institution of Oceanography in La Jolla, California. Argo data can also be displayed in Google Earth with a layer developed by the Argo Technical Coordinator. == Data results == Argo is now the dominant source of information about the climatic state of the oceans and is being widely used in many publications, as seen in the diagram opposite. Topics addressed include air-sea interaction, ocean currents, interannual variability, El Niño, mesoscale eddies, water mass properties and transformation. Argo is also now permitting direct computations of the global ocean heat content. Studies based on Argo salinity data have determined that areas of the world with high surface salinity are getting saltier and areas of the world with relatively low surface salinity are getting fresher. This has been described as 'the rich get richer and the poor get poorer'. Scientifically speaking, the distributions of salt are governed by the difference between precipitation and evaporation. Areas where precipitation dominates evaporation, such as the northern North Pacific Ocean, are fresher than average. The implication of this result is that the Earth is seeing an intensification of the global hydrological cycle. Argo data are also being used to drive computer models of the climate system, leading to improvements in the ability of nations to forecast seasonal climate variations. Argo data were critical in the drafting of Chapter 3 (Working Group 1) of the IPCC Fifth Assessment Report (released September 2013), and an appendix was added to that chapter to emphasize the profound change that had taken place in the quality and volume of ocean data since the IPCC Fourth Assessment Report and the resulting improvement in confidence in the description of surface salinity changes and upper-ocean heat content. Argo data were used along with sea level change data from satellite altimetry in a new approach to analyzing global warming, reported in Eos in 2017. David Morrison reports that "[b]oth of these data sets show clear signatures of heat deposition in the ocean, from the temperature changes in the top 2 km of water and from the expansion of the ocean water due to heating. These two measures are less noisy than land and atmospheric temperatures."
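For users comfortable with programming, the daily multi-profile NetCDF files described above can be opened with standard tools. The sketch below (not an official Argo utility) uses the Python package xarray; the file name comes from the example in the text, and the variable and dimension names (LATITUDE, LONGITUDE, PRES, TEMP, PSAL, N_PROF) follow common Argo file conventions but should be checked against the actual file, for example by printing the dataset.

```python
# Minimal sketch of reading one daily Argo multi-profile NetCDF file.
# Variable/dimension names are assumed from common Argo conventions.
import xarray as xr

ds = xr.open_dataset("20121106_prof.nc")   # one day, one ocean basin
print(ds)                                  # inspect the actual contents first

lats = ds["LATITUDE"].values               # position of every profile that day
lons = ds["LONGITUDE"].values
print(f"{lats.size} profiles, first at ({lats[0]:.2f}, {lons[0]:.2f})")

# Pressure, temperature and salinity of the first profile in the file
pres = ds["PRES"].isel(N_PROF=0).values
temp = ds["TEMP"].isel(N_PROF=0).values
psal = ds["PSAL"].isel(N_PROF=0).values
for p, t, s in zip(pres[:5], temp[:5], psal[:5]):
    print(f"{p:7.1f} dbar   {t:6.3f} degC   {s:7.3f} psu")
```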
Argo and CERES data collected between 2005 and 2019 have been compared as independent measures of the global change in Earth's energy imbalance. Both data sets showed similar behavior at annualized resolution, as well as a doubling of the linear trend in the planet's heating rate during that 14-year span. == See also == Ocean acoustic tomography Underwater gliders Integrated Ocean Observing System == References == == External links == The Argo Portal International Argo Information Centre Argo at the Scripps Institution of Oceanography, San Diego Sea-Bird Scientific SBE 41CP Argo CTD Realtime Interactive Map Realtime Google Earth File Coriolis Global Argo Data Server - EU Mirror FNMOC Global Argo Data server - US Mirror NOAA/Pacific Marine Environmental Laboratory profiling float project deploys floats as part of the Argo program, provides data on-line, and is active in delayed-mode salinity calibration and quality control for US Argo floats. Sea-Bird Scientific Navis BGCi Float Changing conditions in the Gulf of Alaska as seen by Argo Government of Canada, Department of Fisheries and Oceans, Argo Project A New World View Argo explorations article by Scripps Institution of Oceanography JCOMMOPS Argo on NOSA "Argo Floats: How do we measure the ocean" (animation for children)
Wikipedia/Argo_(oceanography)
The theory of tides is the application of continuum mechanics to interpret and predict the tidal deformations of planetary and satellite bodies and their atmospheres and oceans (especially Earth's oceans) under the gravitational loading of another astronomical body or bodies (especially the Moon and Sun). == History == === Classical era === The tides received relatively little attention in the civilizations around the Mediterranean Sea, as the tides there are relatively small, and the areas that experience tides do so unreliably. A number of theories were advanced, however, from comparing the movements to breathing or blood flow to theories involving whirlpools or river cycles. A similar "breathing earth" idea was considered by some Asian thinkers. Plato reportedly believed that the tides were caused by water flowing in and out of undersea caverns. Crates of Mallus attributed the tides to "the counter-movement (ἀντισπασμός) of the sea” and Apollodorus of Corcyra to "the refluxes from the Ocean". An ancient Indian Purana text dated to 400-300 BC refers to the ocean rising and falling because of heat expansion from the light of the Moon. The Yolngu people of northeastern Arnhem Land in the Northern Territory of Australia identified a link between the Moon and the tides, which they mythically attributed to the Moon filling with water and emptying out again. Ultimately the link between the Moon (and Sun) and the tides became known to the Greeks, although the exact date of discovery is unclear; references to it are made in sources such as Pytheas of Massilia in 325 BC and Pliny the Elder's Natural History in 77 AD. Although the schedule of the tides and the link to lunar and solar movements was known, the exact mechanism that connected them was unclear. Classicists Thomas Little Heath claimed that both Pytheas and Posidonius connected the tides with the moon, "the former directly, the latter through the setting up of winds". Seneca mentions in De Providentia the periodic motion of the tides controlled by the lunar sphere. Eratosthenes (3rd century BC) and Posidonius (1st century BC) both produced detailed descriptions of the tides and their relationship to the phases of the Moon, Posidonius in particular making lengthy observations of the sea on the Spanish coast, although little of their work survived. The influence of the Moon on tides was mentioned in Ptolemy's Tetrabiblos as evidence of the reality of astrology. Seleucus of Seleucia is thought to have theorized around 150 BC that tides were caused by the Moon as part of his heliocentric model. Aristotle, judging from discussions of his beliefs in other sources, is thought to have believed the tides were caused by winds driven by the Sun's heat, and he rejected the theory that the Moon caused the tides. An apocryphal legend claims that he committed suicide in frustration with his failure to fully understand the tides. Heraclides also held "the sun sets up winds, and that these winds, when they blow, cause the high tide and, when they cease, the low tide". Dicaearchus also "put the tides down to the direct action of the sun according to its position". Philostratus discusses tides in Book Five of Life of Apollonius of Tyana (circa 217-238 AD); he was vaguely aware of a correlation of the tides with the phases of the Moon but attributed them to spirits moving water in and out of caverns, which he connected with the legend that spirits of the dead cannot move on at certain phases of the Moon. 
=== Medieval period === The Venerable Bede discusses the tides in The Reckoning of Time and shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. However, he made no progress regarding the question of how exactly the Moon created the tides. Medieval rule-of-thumb methods for predicting tides were said to allow one "to know what Moon makes high water" from the Moon's movements. Dante references the Moon's influence on the tides in his Divine Comedy. Medieval European understanding of the tides was often based on works of Muslim astronomers that became available through Latin translation starting from the 12th century. Abu Ma'shar al-Balkhi, in his Introductorium in astronomiam, taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji contributed the notion that the tides were caused by the general circulation of the heavens. Medieval Arabic astrologers frequently referenced the Moon's influence on the tides as evidence for the reality of astrology; some of their treatises on the topic influenced western Europe. Some theorized that the influence was caused by lunar rays heating the ocean's floor. === Modern era === Simon Stevin in his 1608 De spiegheling der Ebbenvloet (The Theory of Ebb and Flood) dismisses a large number of misconceptions that still existed about ebb and flood. Stevin pleads for the idea that the attraction of the Moon was responsible for the tides and writes in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made. In 1609, Johannes Kepler correctly suggested that the gravitation of the Moon causes the tides, which he compared to magnetic attraction basing his argument upon ancient observations and correlations. In 1616, Galileo Galilei wrote Discourse on the Tides. He strongly and mockingly rejects the lunar theory of the tides, and tries to explain the tides as the result of the Earth's rotation and revolution around the Sun, believing that the oceans moved like water in a large basin: as the basin moves, so does the water. But his contemporaries noticed that this made predictions that did not fit observations. René Descartes theorized that the tides (alongside the movement of planets, etc.) were caused by aetheric vortices, without reference to Kepler's theories of gravitation by mutual attraction; this was extremely influential, with numerous followers of Descartes expounding on this theory throughout the 17th century, particularly in France. However, Descartes and his followers acknowledged the influence of the Moon, speculating that pressure waves from the Moon via the aether were responsible for the correlation. Newton, in the Principia, provides a correct explanation for the tidal force, which can be used to explain tides on a planet covered by a uniform ocean but which takes no account of the distribution of the continents or ocean bathymetry. 
==== Dynamic theory ==== While Newton explained the tides by describing the tide-generating forces and Daniel Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the dynamic theory of tides, developed by Pierre-Simon Laplace in 1775, describes the ocean's real reaction to tidal forces. Laplace's theory of ocean tides takes into account friction, resonance and natural periods of ocean basins. It predicts the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed. The equilibrium theory—based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects—could not explain the real ocean tides. Since measurements have confirmed the dynamic theory, many things have possible explanations now, like how the tides interact with deep sea ridges, and chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. The equilibrium tide theory calculates the height of the tide wave of less than half a meter, while the dynamic theory explains why tides are up to 15 meters. Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimeters. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels. ==== Laplace's tidal equations ==== In 1776, Laplace formulated a single set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamics equations, but they can also be derived from energy integrals via Lagrange's equation. For a fluid sheet of average thickness D, the vertical tidal elevation ζ, as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively) satisfy Laplace's tidal equations: ∂ ζ ∂ t + 1 a cos ⁡ ( φ ) [ ∂ ∂ λ ( u D ) + ∂ ∂ φ ( v D cos ⁡ ( φ ) ) ] = 0 , ∂ u ∂ t − v 2 Ω sin ⁡ ( φ ) + 1 a cos ⁡ ( φ ) ∂ ∂ λ ( g ζ + U ) = 0 , and ∂ v ∂ t + u 2 Ω sin ⁡ ( φ ) + 1 a ∂ ∂ φ ( g ζ + U ) = 0 , {\displaystyle {\begin{aligned}{\frac {\partial \zeta }{\partial t}}&+{\frac {1}{a\cos(\varphi )}}\left[{\frac {\partial }{\partial \lambda }}(uD)+{\frac {\partial }{\partial \varphi }}\left(vD\cos(\varphi )\right)\right]=0,\\[2ex]{\frac {\partial u}{\partial t}}&-v\,2\Omega \sin(\varphi )+{\frac {1}{a\cos(\varphi )}}{\frac {\partial }{\partial \lambda }}\left(g\zeta +U\right)=0,\quad {\text{and}}\\[2ex]{\frac {\partial v}{\partial t}}&+u\,2\Omega \sin(\varphi )+{\frac {1}{a}}{\frac {\partial }{\partial \varphi }}\left(g\zeta +U\right)=0,\end{aligned}}} where Ω is the angular frequency of the planet's rotation, g is the planet's gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential. William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity. Under certain conditions this can be further rewritten as a conservation of vorticity. 
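To illustrate how the three equations above translate into a computation, the following Python sketch (not a working tide model) evaluates their right-hand sides once, on a small mid-latitude longitude-latitude patch with centred differences. The grid, the constant fluid-sheet thickness D, the resting initial state with a Gaussian elevation bump, and the neglect of the forcing potential U are all assumptions made purely for illustration; a real tidal solver would also need the forcing potential, appropriate boundary conditions and a stable time-stepping scheme.

```python
# Schematic one-time evaluation of the tendencies in Laplace's tidal equations.
# Grid, depth, initial fields and U = 0 are illustrative assumptions.
import numpy as np

a_earth = 6.371e6     # planetary radius [m]
Omega = 7.292e-5      # rotation rate [rad/s]
g = 9.81              # gravitational acceleration [m/s^2]
D = 4000.0            # fluid-sheet thickness [m], taken constant here

lam = np.deg2rad(np.arange(0.0, 360.0, 2.0))   # longitude [rad], periodic
phi = np.deg2rad(np.arange(20.0, 62.0, 2.0))   # latitude [rad], away from the poles
LAM, PHI = np.meshgrid(lam, phi, indexing="ij")
dlam, dphi = lam[1] - lam[0], phi[1] - phi[0]

# Resting fluid with a Gaussian bump in the elevation; forcing potential set to zero
zeta = np.exp(-((LAM - np.pi) ** 2 / 0.05 + (PHI - np.deg2rad(40.0)) ** 2 / 0.01))
u = np.zeros_like(zeta)
v = np.zeros_like(zeta)
U_pot = np.zeros_like(zeta)

def d_dlam(field):      # centred derivative in longitude (periodic)
    return (np.roll(field, -1, axis=0) - np.roll(field, 1, axis=0)) / (2.0 * dlam)

def d_dphi(field):      # derivative in latitude (one-sided at the edges)
    return np.gradient(field, dphi, axis=1)

cosphi = np.cos(PHI)
f_cor = 2.0 * Omega * np.sin(PHI)

# Tendencies, rearranged from the three equations above
dzeta_dt = -(d_dlam(u * D) + d_dphi(v * D * cosphi)) / (a_earth * cosphi)
du_dt = f_cor * v - d_dlam(g * zeta + U_pot) / (a_earth * cosphi)
dv_dt = -f_cor * u - d_dphi(g * zeta + U_pot) / a_earth

# For the resting initial state dzeta/dt is zero; the elevation bump first
# accelerates the flow through the pressure-gradient terms.
print("max |du/dt| =", float(np.abs(du_dt).max()), "m/s^2")
print("max |dv/dt| =", float(np.abs(dv_dt).max()), "m/s^2")
```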
== Tidal analysis and prediction == === Harmonic analysis === Laplace's improvements in theory were substantial, but they still left prediction in an approximate state. This position changed in the 1860s when the local circumstances of tidal phenomena were more fully brought into account by Lord Kelvin's application of Fourier analysis to the tidal motions as harmonic analysis. Thomson's work in this field was further developed and extended by George Darwin, applying the lunar theory current in his time. Darwin's symbols for the tidal harmonic constituents are still used, for example: M: moon/lunar; S: sun/solar; K: moon-sun/lunisolar. Darwin's harmonic developments of the tide-generating forces were later improved when A.T. Doodson, applying the lunar theory of E.W. Brown, developed the tide-generating potential (TGP) in harmonic form, distinguishing 388 tidal frequencies. Doodson's work was carried out and published in 1921. Doodson devised a practical system for specifying the different harmonic components of the tide-generating potential, the Doodson numbers, a system still in use. Since the mid-twentieth century further analysis has generated many more terms than Doodson's 388. About 62 constituents are of sufficient size to be considered for possible use in marine tide prediction, but sometimes many fewer can predict tides to useful accuracy. The calculations of tide predictions using the harmonic constituents are laborious, and from the 1870s to about the 1960s they were carried out using a mechanical tide-predicting machine, a special-purpose form of analog computer. More recently digital computers, using the method of matrix inversion, are used to determine the tidal harmonic constituents directly from tide gauge records. === Tidal constituents === Tidal constituents combine to give an endlessly varying aggregate because of their different and incommensurable frequencies: the effect is visualized in an animation of the American Mathematical Society illustrating the way in which the components used to be mechanically combined in the tide-predicting machine. Amplitudes (half of peak-to-peak amplitude) of tidal constituents are given below for six example locations: Eastport, Maine (ME), Biloxi, Mississippi (MS), San Juan, Puerto Rico (PR), Kodiak, Alaska (AK), San Francisco, California (CA), and Hilo, Hawaii (HI). ==== Semi-diurnal ==== ==== Diurnal ==== ==== Long period ==== ==== Short period ==== === Doodson numbers === In order to specify the different harmonic components of the tide-generating potential, Doodson devised a practical system which is still in use, involving what are called the Doodson numbers based on the six Doodson arguments or Doodson variables. The number of different tidal frequency components is large, but each corresponds to a specific linear combination of six frequencies using small-integer multiples, positive or negative. In principle, these basic angular arguments can be specified in numerous ways; Doodson's choice of his six "Doodson arguments" has been widely used in tidal work. In terms of these Doodson arguments, each tidal frequency can then be specified as a sum made up of a small integer multiple of each of the six arguments. The resulting six small integer multipliers effectively encode the frequency of the tidal argument concerned, and these are the Doodson numbers: in practice all except the first are usually biased upwards by +5 to avoid negative numbers in the notation. 
(In the case that the biased multiple exceeds 9, the system adopts X for 10, and E for 11.) The Doodson arguments are specified in the following way, in order of decreasing frequency: β 1 = τ = ( θ M + π − s ) {\displaystyle \beta _{1}=\tau =(\theta _{M}+\pi -s)} is mean Lunar time, the Greenwich hour angle of the mean Moon plus 12 hours. β 2 = s = ( F + Ω ) {\displaystyle \beta _{2}=s=(F+\Omega )} is the mean longitude of the Moon. β 3 = h = ( s − D ) {\displaystyle \beta _{3}=h=(s-D)} is the mean longitude of the Sun. β 4 = p = ( s − l ) {\displaystyle \beta _{4}=p=(s-l)} is the longitude of the Moon's mean perigee. β 5 = N ′ = ( − Ω ) {\displaystyle \beta _{5}=N'=(-\Omega )} is the negative of the longitude of the Moon's mean ascending node on the ecliptic. β 6 = p l {\displaystyle \beta _{6}=p_{l}} or p s = ( s − D − l ′ ) {\displaystyle p_{s}=(s-D-l')} is the longitude of the Sun's mean perigee. In these expressions, the symbols l {\displaystyle l} , l ′ {\displaystyle l'} , F {\displaystyle F} and D {\displaystyle D} refer to an alternative set of fundamental angular arguments (usually preferred for use in modern lunar theory), in which:- l {\displaystyle l} is the mean anomaly of the Moon (distance from its perigee). l ′ {\displaystyle l'} is the mean anomaly of the Sun (distance from its perigee). F {\displaystyle F} is the Moon's mean argument of latitude (distance from its node). D {\displaystyle D} is the Moon's mean elongation (distance from the sun). It is possible to define several auxiliary variables on the basis of combinations of these. In terms of this system, each tidal constituent frequency can be identified by its Doodson numbers. The strongest tidal constituent "M2" has a frequency of 2 cycles per lunar day, its Doodson numbers are usually written 255.555, meaning that its frequency is composed of twice the first Doodson argument, and zero times all of the others. The second strongest tidal constituent "S2" is influenced by the sun, and its Doodson numbers are 273.555, meaning that its frequency is composed of twice the first Doodson argument, +2 times the second, -2 times the third, and zero times each of the other three. This aggregates to the angular equivalent of mean solar time +12 hours. These two strongest component frequencies have simple arguments for which the Doodson system might appear needlessly complex, but each of the hundreds of other component frequencies can be briefly specified in a similar way, showing in the aggregate the usefulness of the encoding. == See also == Long-period tides Lunar node § Effect on tides Kelvin wave Tide table == Notes == == References == == External links == Contributions of satellite laser ranging to the studies of earth tides Archived 28 July 2013 at the Wayback Machine Dynamic Theory of Tides Tidal Observations Publications from NOAA's Center for Operational Oceanographic Products and Services Understanding Tides 150 Years of Tides on the Western Coast Our Relentless Tides GeoTide Tidal Analysis System
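The encoding above is easy to verify numerically. The Python sketch below (not from the article) multiplies the Doodson-number multipliers of M2 and S2 by approximate standard rates of change of the six Doodson arguments (in degrees per mean solar hour), recovering the familiar constituent speeds, and then performs a toy harmonic synthesis in the spirit of the tide-predicting machines mentioned above. The argument rates are approximate, and the amplitudes and phases in the synthesis are invented for illustration only.

```python
# Doodson-number check and a toy harmonic synthesis.
# The argument rates are approximate standard values [deg per mean solar hour];
# the synthesis amplitudes and phases are invented for illustration only.
import numpy as np

# Approximate rates of (tau, s, h, p, N', p_s)
rates = np.array([14.4920521, 0.5490165, 0.0410686, 0.0046418, 0.0022064, 0.0000020])

def constituent_speed(multipliers):
    """Constituent speed [deg/hour] from the six (unbiased) Doodson multipliers."""
    return float(np.dot(multipliers, rates))

# M2 is written 255.555 -> multipliers (2, 0, 0, 0, 0, 0) once the +5 bias is removed
# S2 is written 273.555 -> multipliers (2, 2, -2, 0, 0, 0)
M2 = constituent_speed([2, 0, 0, 0, 0, 0])
S2 = constituent_speed([2, 2, -2, 0, 0, 0])
print(f"M2: {M2:.4f} deg/hour, period {360.0 / M2:.2f} h")   # about 28.98 deg/h, 12.42 h
print(f"S2: {S2:.4f} deg/hour, period {360.0 / S2:.2f} h")   # 30.00 deg/h, 12.00 h

# Toy prediction: h(t) = sum_i A_i cos(omega_i t + phi_i) over about 15 days
t = np.arange(0.0, 15 * 24.0, 0.5)                 # time in hours
constituents = [(M2, 1.00, 0.0), (S2, 0.40, 30.0)] # (speed, amplitude [m], phase [deg]), invented
height = sum(A * np.cos(np.deg2rad(w * t + ph)) for w, A, ph in constituents)
print(f"synthesised tide ranges from {height.min():.2f} m to {height.max():.2f} m")
```

The slow beating between the M2 and S2 terms over the synthesised record reproduces the spring and neap modulation discussed elsewhere in this article.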
Wikipedia/Theory_of_tides
In fluid dynamics, the Boussinesq approximation for water waves is an approximation valid for weakly non-linear and fairly long waves. The approximation is named after Joseph Boussinesq, who first derived them in response to the observation by John Scott Russell of the wave of translation (also known as solitary wave or soliton). The 1872 paper of Boussinesq introduces the equations now known as the Boussinesq equations. The Boussinesq approximation for water waves takes into account the vertical structure of the horizontal and vertical flow velocity. This results in non-linear partial differential equations, called Boussinesq-type equations, which incorporate frequency dispersion (as opposite to the shallow water equations, which are not frequency-dispersive). In coastal engineering, Boussinesq-type equations are frequently used in computer models for the simulation of water waves in shallow seas and harbours. While the Boussinesq approximation is applicable to fairly long waves – that is, when the wavelength is large compared to the water depth – the Stokes expansion is more appropriate for short waves (when the wavelength is of the same order as the water depth, or shorter). == Boussinesq approximation == The essential idea in the Boussinesq approximation is the elimination of the vertical coordinate from the flow equations, while retaining some of the influences of the vertical structure of the flow under water waves. This is useful because the waves propagate in the horizontal plane and have a different (not wave-like) behaviour in the vertical direction. Often, as in Boussinesq's case, the interest is primarily in the wave propagation. This elimination of the vertical coordinate was first done by Joseph Boussinesq in 1871, to construct an approximate solution for the solitary wave (or wave of translation). Subsequently, in 1872, Boussinesq derived the equations known nowadays as the Boussinesq equations. The steps in the Boussinesq approximation are: a Taylor expansion is made of the horizontal and vertical flow velocity (or velocity potential) around a certain elevation, this Taylor expansion is truncated to a finite number of terms, the conservation of mass (see continuity equation) for an incompressible flow and the zero-curl condition for an irrotational flow are used, to replace vertical partial derivatives of quantities in the Taylor expansion with horizontal partial derivatives. Thereafter, the Boussinesq approximation is applied to the remaining flow equations, in order to eliminate the dependence on the vertical coordinate. As a result, the resulting partial differential equations are in terms of functions of the horizontal coordinates (and time). As an example, consider potential flow over a horizontal bed in the ( x , z ) {\displaystyle (x,z)} plane, with x {\displaystyle x} the horizontal and z {\displaystyle z} the vertical coordinate. The bed is located at z = − h {\displaystyle z=-h} , where h {\displaystyle h} is the mean water depth. 
A Taylor expansion is made of the velocity potential φ ( x , z , t ) {\displaystyle \varphi (x,z,t)} around the bed level z = − h {\displaystyle z=-h} : φ = φ b + ( z + h ) [ ∂ φ ∂ z ] z = − h + 1 2 ( z + h ) 2 [ ∂ 2 φ ∂ z 2 ] z = − h + 1 6 ( z + h ) 3 [ ∂ 3 φ ∂ z 3 ] z = − h + 1 24 ( z + h ) 4 [ ∂ 4 φ ∂ z 4 ] z = − h + ⋯ , {\displaystyle {\begin{aligned}\varphi \,=\,&\varphi _{b}\,+\,(z+h)\,\left[{\frac {\partial \varphi }{\partial z}}\right]_{z=-h}\,+\,{\frac {1}{2}}\,(z+h)^{2}\,\left[{\frac {\partial ^{2}\varphi }{\partial z^{2}}}\right]_{z=-h}\,\\&+\,{\frac {1}{6}}\,(z+h)^{3}\,\left[{\frac {\partial ^{3}\varphi }{\partial z^{3}}}\right]_{z=-h}\,+\,{\frac {1}{24}}\,(z+h)^{4}\,\left[{\frac {\partial ^{4}\varphi }{\partial z^{4}}}\right]_{z=-h}\,+\,\cdots ,\end{aligned}}} where φ b ( x , t ) {\displaystyle \varphi _{b}(x,t)} is the velocity potential at the bed. Invoking Laplace's equation for φ {\displaystyle \varphi } , as valid for incompressible flow, gives: φ = { φ b − 1 2 ( z + h ) 2 ∂ 2 φ b ∂ x 2 + 1 24 ( z + h ) 4 ∂ 4 φ b ∂ x 4 + ⋯ } + { ( z + h ) [ ∂ φ ∂ z ] z = − h − 1 6 ( z + h ) 3 ∂ 2 ∂ x 2 [ ∂ φ ∂ z ] z = − h + ⋯ } = { φ b − 1 2 ( z + h ) 2 ∂ 2 φ b ∂ x 2 + 1 24 ( z + h ) 4 ∂ 4 φ b ∂ x 4 + ⋯ } , {\displaystyle {\begin{aligned}\varphi \,=\,&\left\{\,\varphi _{b}\,-\,{\frac {1}{2}}\,(z+h)^{2}\,{\frac {\partial ^{2}\varphi _{b}}{\partial x^{2}}}\,+\,{\frac {1}{24}}\,(z+h)^{4}\,{\frac {\partial ^{4}\varphi _{b}}{\partial x^{4}}}\,+\,\cdots \,\right\}\,\\&+\,\left\{\,(z+h)\,\left[{\frac {\partial \varphi }{\partial z}}\right]_{z=-h}\,-\,{\frac {1}{6}}\,(z+h)^{3}\,{\frac {\partial ^{2}}{\partial x^{2}}}\left[{\frac {\partial \varphi }{\partial z}}\right]_{z=-h}\,+\,\cdots \,\right\}\\=\,&\left\{\,\varphi _{b}\,-\,{\frac {1}{2}}\,(z+h)^{2}\,{\frac {\partial ^{2}\varphi _{b}}{\partial x^{2}}}\,+\,{\frac {1}{24}}\,(z+h)^{4}\,{\frac {\partial ^{4}\varphi _{b}}{\partial x^{4}}}\,+\,\cdots \,\right\},\end{aligned}}} since the vertical velocity ∂ φ / ∂ z {\displaystyle \partial \varphi /\partial z} is zero at the – impermeable – horizontal bed z = − h {\displaystyle z=-h} . This series may subsequently be truncated to a finite number of terms. == Original Boussinesq equations == === Derivation === For water waves on an incompressible fluid and irrotational flow in the ( x , z ) {\displaystyle (x,z)} plane, the boundary conditions at the free surface elevation z = η ( x , t ) {\displaystyle z=\eta (x,t)} are: ∂ η ∂ t + u ∂ η ∂ x − w = 0 ∂ φ ∂ t + 1 2 ( u 2 + w 2 ) + g η = 0 , {\displaystyle {\begin{aligned}{\frac {\partial \eta }{\partial t}}\,&+\,u\,{\frac {\partial \eta }{\partial x}}\,-\,w\,=\,0\\{\frac {\partial \varphi }{\partial t}}\,&+\,{\frac {1}{2}}\,\left(u^{2}+w^{2}\right)\,+\,g\,\eta \,=\,0,\end{aligned}}} where: u {\displaystyle u} is the horizontal flow velocity component: u = ∂ φ / ∂ x {\displaystyle u=\partial \varphi /\partial x} , w {\displaystyle w} is the vertical flow velocity component: w = ∂ φ / ∂ z {\displaystyle w=\partial \varphi /\partial z} , g {\displaystyle g} is the acceleration by gravity. Now the Boussinesq approximation for the velocity potential φ {\displaystyle \varphi } , as given above, is applied in these boundary conditions. Further, in the resulting equations only the linear and quadratic terms with respect to η {\displaystyle \eta } and u b {\displaystyle u_{b}} are retained (with u b = ∂ φ b / ∂ x {\displaystyle u_{b}=\partial \varphi _{b}/\partial x} the horizontal velocity at the bed z = − h {\displaystyle z=-h} ). 
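The structure of this series can be checked symbolically. The short sketch below, written with the sympy computer algebra library (the name phi_b is only a label for the bed potential), verifies that the four-term truncation satisfies Laplace's equation up to the order retained and gives zero vertical velocity at the impermeable bed:

import sympy as sp

x, z, h = sp.symbols('x z h', real=True)
phi_b = sp.Function('phi_b')            # velocity potential at the bed (label only)

# four-term expansion about z = -h with the bottom boundary condition built in
phi = (phi_b(x)
       - sp.Rational(1, 2) * (z + h)**2 * sp.diff(phi_b(x), x, 2)
       + sp.Rational(1, 24) * (z + h)**4 * sp.diff(phi_b(x), x, 4))

laplacian = sp.simplify(sp.diff(phi, x, 2) + sp.diff(phi, z, 2))
w_at_bed = sp.simplify(sp.diff(phi, z).subs(z, -h))

print(laplacian)    # (h + z)**4 * Derivative(phi_b(x), (x, 6)) / 24, i.e. of the neglected order
print(w_at_bed)     # 0, so the impermeable-bed condition holds exactly

The residual of Laplace's equation is proportional to (z + h)^4 times a sixth horizontal derivative of the bed potential, confirming that the truncation is consistent to the order retained.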
The cubic and higher order terms are assumed to be negligible. Then, the following partial differential equations are obtained: set A – Boussinesq (1872), equation (25) ∂ η ∂ t + ∂ ∂ x [ ( h + η ) u b ] = 1 6 h 3 ∂ 3 u b ∂ x 3 , ∂ u b ∂ t + u b ∂ u b ∂ x + g ∂ η ∂ x = 1 2 h 2 ∂ 3 u b ∂ t ∂ x 2 . {\displaystyle {\begin{aligned}{\frac {\partial \eta }{\partial t}}\,&+\,{\frac {\partial }{\partial x}}\,\left[\left(h+\eta \right)\,u_{b}\right]\,=\,{\frac {1}{6}}\,h^{3}\,{\frac {\partial ^{3}u_{b}}{\partial x^{3}}},\\{\frac {\partial u_{b}}{\partial t}}\,&+\,u_{b}\,{\frac {\partial u_{b}}{\partial x}}\,+\,g\,{\frac {\partial \eta }{\partial x}}\,=\,{\frac {1}{2}}\,h^{2}\,{\frac {\partial ^{3}u_{b}}{\partial t\,\partial x^{2}}}.\end{aligned}}} This set of equations has been derived for a flat horizontal bed, i.e. the mean depth h {\displaystyle h} is a constant independent of position x {\displaystyle x} . When the right-hand sides of the above equations are set to zero, they reduce to the shallow water equations. Under some additional approximations, but at the same order of accuracy, the above set A can be reduced to a single partial differential equation for the free surface elevation η {\displaystyle \eta } : set B – Boussinesq (1872), equation (26) ∂ 2 η ∂ t 2 − g h ∂ 2 η ∂ x 2 − g h ∂ 2 ∂ x 2 ( 3 2 η 2 h + 1 3 h 2 ∂ 2 η ∂ x 2 ) = 0. {\displaystyle {\frac {\partial ^{2}\eta }{\partial t^{2}}}\,-\,gh\,{\frac {\partial ^{2}\eta }{\partial x^{2}}}\,-\,gh\,{\frac {\partial ^{2}}{\partial x^{2}}}\left({\frac {3}{2}}\,{\frac {\eta ^{2}}{h}}\,+\,{\frac {1}{3}}\,h^{2}\,{\frac {\partial ^{2}\eta }{\partial x^{2}}}\right)\,=\,0.} From the terms between brackets, the importance of nonlinearity of the equation can be expressed in terms of the Ursell number. In dimensionless quantities, using the water depth h {\displaystyle h} and gravitational acceleration g {\displaystyle g} for non-dimensionalization, this equation reads, after normalization: ∂ 2 ψ ∂ τ 2 − ∂ 2 ψ ∂ ξ 2 − ∂ 2 ∂ ξ 2 ( 3 ψ 2 + ∂ 2 ψ ∂ ξ 2 ) = 0 , {\displaystyle {\frac {\partial ^{2}\psi }{\partial \tau ^{2}}}\,-\,{\frac {\partial ^{2}\psi }{\partial \xi ^{2}}}\,-\,{\frac {\partial ^{2}}{\partial \xi ^{2}}}\left(\,3\,\psi ^{2}\,+\,{\frac {\partial ^{2}\psi }{\partial \xi ^{2}}}\,\right)\,=\,0,} with: === Linear frequency dispersion === Water waves of different wave lengths travel with different phase speeds, a phenomenon known as frequency dispersion. For the case of infinitesimal wave amplitude, the terminology is linear frequency dispersion. The frequency dispersion characteristics of a Boussinesq-type of equation can be used to determine the range of wave lengths, for which it is a valid approximation. The linear frequency dispersion characteristics for the above set A of equations are: c 2 = g h 1 + 1 6 k 2 h 2 1 + 1 2 k 2 h 2 , {\displaystyle c^{2}\,=\;gh\,{\frac {1\,+\,{\frac {1}{6}}\,k^{2}h^{2}}{1\,+\,{\frac {1}{2}}\,k^{2}h^{2}}},} with: c {\displaystyle c} the phase speed, k {\displaystyle k} the wave number ( k = 2 π / λ {\displaystyle k=2\pi /\lambda } , with λ {\displaystyle \lambda } the wave length). The relative error in the phase speed c {\displaystyle c} for set A, as compared with linear theory for water waves, is less than 4% for a relative wave number k h < π / 2 {\displaystyle kh<\pi /2} . So, in engineering applications, set A is valid for wavelengths λ {\displaystyle \lambda } larger than 4 times the water depth h {\displaystyle h} . 
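This bound is easy to verify numerically. In the minimal sketch below (numpy assumed; the depth is arbitrary because only the dimensionless product kh matters), the set A phase speed is compared with the standard dispersion relation of full linear wave theory, c² = (g/k) tanh(kh):

import numpy as np

g, h = 9.81, 1.0                          # gravity; unit depth, since only kh matters
kh = np.array([0.5, 1.0, np.pi / 2])      # relative wavenumbers up to kh = pi/2
k = kh / h

c2_setA = g * h * (1 + kh**2 / 6) / (1 + kh**2 / 2)    # set A dispersion
c2_airy = (g / k) * np.tanh(kh)                        # full linear theory

rel_error = np.sqrt(c2_setA / c2_airy) - 1
for khi, err in zip(kh, rel_error):
    print(f"kh = {khi:.3f}   phase-speed error = {100 * err:.2f}%")
# the error grows with kh and reaches roughly 4% at kh = pi/2 (a wavelength of 4 water depths)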
The linear frequency dispersion characteristics of equation B are: c 2 = g h ( 1 − 1 3 k 2 h 2 ) . {\displaystyle c^{2}\,=\,gh\,\left(1\,-\,{\frac {1}{3}}\,k^{2}h^{2}\right).} The relative error in the phase speed for equation B is less than 4% for k h < 2 π / 7 {\displaystyle kh<2\pi /7} , equivalent to wave lengths λ {\displaystyle \lambda } longer than 7 times the water depth h {\displaystyle h} , called fairly long waves. For short waves with k 2 h 2 > 3 {\displaystyle k^{2}h^{2}>3} equation B become physically meaningless, because there are no longer real-valued solutions of the phase speed. The original set of two partial differential equations (Boussinesq, 1872, equation 25, see set A above) does not have this shortcoming. The shallow water equations have a relative error in the phase speed less than 4% for wave lengths λ {\displaystyle \lambda } in excess of 13 times the water depth h {\displaystyle h} . == Boussinesq-type equations and extensions == There are an overwhelming number of mathematical models which are referred to as Boussinesq equations. This may easily lead to confusion, since often they are loosely referenced to as the Boussinesq equations, while in fact a variant thereof is considered. So it is more appropriate to call them Boussinesq-type equations. Strictly speaking, the Boussinesq equations is the above-mentioned set B, since it is used in the analysis in the remainder of his 1872 paper. Some directions, into which the Boussinesq equations have been extended, are: varying bathymetry, improved frequency dispersion, improved non-linear behavior, making a Taylor expansion around different vertical elevations, dividing the fluid domain in layers, and applying the Boussinesq approximation in each layer separately, inclusion of wave breaking, inclusion of surface tension, extension to internal waves on an interface between fluid domains of different mass density, derivation from a variational principle. == Further approximations for one-way wave propagation == While the Boussinesq equations allow for waves traveling simultaneously in opposing directions, it is often advantageous to only consider waves traveling in one direction. Under small additional assumptions, the Boussinesq equations reduce to: the Korteweg–de Vries equation for wave propagation in one horizontal dimension, the Kadomtsev–Petviashvili equation for (near uni-directional) wave propagation in two horizontal dimensions, the nonlinear Schrödinger equation (NLS equation) for the complex-valued amplitude of narrowband waves (slowly modulated waves). Besides solitary wave solutions, the Korteweg–de Vries equation also has periodic and exact solutions, called cnoidal waves. These are approximate solutions of the Boussinesq equation. == Numerical models == For the simulation of wave motion near coasts and harbours, numerical models – both commercial and academic – employing Boussinesq-type equations exist. Some commercial examples are the Boussinesq-type wave modules in MIKE 21 and SMS. Some of the free Boussinesq models are Celeris, COULWAVE, and FUNWAVE. Most numerical models employ finite-difference, finite-volume or finite element techniques for the discretization of the model equations. Scientific reviews and intercomparisons of several Boussinesq-type equations, their numerical approximation and performance are e.g. Kirby (2003), Dingemans (1997, Part 2, Chapter 5) and Hamm, Madsen & Peregrine (1993). == Notes == == References == Boussinesq, J. (1871). 
"Théorie de l'intumescence liquide, applelée onde solitaire ou de translation, se propageant dans un canal rectangulaire". Comptes Rendus de l'Académie des Sciences. 72: 755–759. Boussinesq, J. (1872). "Théorie des ondes et des remous qui se propagent le long d'un canal rectangulaire horizontal, en communiquant au liquide contenu dans ce canal des vitesses sensiblement pareilles de la surface au fond". Journal de Mathématiques Pures et Appliquées. Deuxième Série. 17: 55–108. Dingemans, M.W. (1997). Wave propagation over uneven bottoms. Advanced Series on Ocean Engineering 13. World Scientific, Singapore. ISBN 978-981-02-0427-3. Archived from the original on 2012-02-08. Retrieved 2008-01-21. See Part 2, Chapter 5. Hamm, L.; Madsen, P.A.; Peregrine, D.H. (1993). "Wave transformation in the nearshore zone: A review". Coastal Engineering. 21 (1–3): 5–39. Bibcode:1993CoasE..21....5H. doi:10.1016/0378-3839(93)90044-9. Johnson, R.S. (1997). A modern introduction to the mathematical theory of water waves. Cambridge Texts in Applied Mathematics. Vol. 19. Cambridge University Press. ISBN 0-521-59832-X. Kirby, J.T. (2003). "Boussinesq models and applications to nearshore wave propagation, surfzone processes and wave-induced currents". In Lakhan, V.C. (ed.). Advances in Coastal Modeling. Elsevier Oceanography Series. Vol. 67. Elsevier. pp. 1–41. ISBN 0-444-51149-0. Peregrine, D.H. (1967). "Long waves on a beach". Journal of Fluid Mechanics. 27 (4): 815–827. Bibcode:1967JFM....27..815P. doi:10.1017/S0022112067002605. S2CID 119385147. Peregrine, D.H. (1972). "Equations for water waves and the approximations behind them". In Meyer, R.E. (ed.). Waves on Beaches and Resulting Sediment Transport. Academic Press. pp. 95–122. ISBN 0-12-493250-9.
Wikipedia/Boussinesq_approximation_(water_waves)
The shallow-water equations (SWE) are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called (de) Saint-Venant equations, after Adhémar Jean Claude Barré de Saint-Venant (see the related section below). The equations are derived from depth-integrating the Navier–Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived. While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction because, for example, the vertical velocity cannot be zero when the floor changes depth, and thus if it were zero only flat floors would be usable with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free surface displacement) has been found, the vertical velocity can be recovered via the continuity equation. Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow. Shallow-water equation models have only one vertical level, so they cannot directly encompass any factor that varies with height. However, in cases where the mean state is sufficiently simple, the vertical variations can be separated from the horizontal and several sets of shallow-water equations can describe the state. == Equations == === Conservative form === The shallow-water equations are derived from equations of conservation of mass and conservation of linear momentum (the Navier–Stokes equations), which hold even when the assumptions of shallow-water break down, such as across a hydraulic jump. In the case of a horizontal bed, with negligible Coriolis forces, frictional and viscous forces, the shallow-water equations are: ∂ ( ρ η ) ∂ t + ∂ ( ρ η u ) ∂ x + ∂ ( ρ η v ) ∂ y = 0 , ∂ ( ρ η u ) ∂ t + ∂ ∂ x ( ρ η u 2 + 1 2 ρ g η 2 ) + ∂ ( ρ η u v ) ∂ y = 0 , ∂ ( ρ η v ) ∂ t + ∂ ∂ y ( ρ η v 2 + 1 2 ρ g η 2 ) + ∂ ( ρ η u v ) ∂ x = 0. 
{\displaystyle {\begin{aligned}{\frac {\partial (\rho \eta )}{\partial t}}&+{\frac {\partial (\rho \eta u)}{\partial x}}+{\frac {\partial (\rho \eta v)}{\partial y}}=0,\\[3pt]{\frac {\partial (\rho \eta u)}{\partial t}}&+{\frac {\partial }{\partial x}}\left(\rho \eta u^{2}+{\frac {1}{2}}\rho g\eta ^{2}\right)+{\frac {\partial (\rho \eta uv)}{\partial y}}=0,\\[3pt]{\frac {\partial (\rho \eta v)}{\partial t}}&+{\frac {\partial }{\partial y}}\left(\rho \eta v^{2}+{\frac {1}{2}}\rho g\eta ^{2}\right)+{\frac {\partial (\rho \eta uv)}{\partial x}}=0.\end{aligned}}} Here η is the total fluid column height (instantaneous fluid depth as a function of x, y and t), and the 2D vector (u,v) is the fluid's horizontal flow velocity, averaged across the vertical column. Further g is acceleration due to gravity and ρ is the fluid density. The first equation is derived from mass conservation, the second two from momentum conservation. === Non-conservative form === Expanding the derivatives in the above using the product rule, the non-conservative form of the shallow-water equations is obtained. Since velocities are not subject to a fundamental conservation equation, the non-conservative forms do not hold across a shock or hydraulic jump. Also included are the appropriate terms for Coriolis, frictional and viscous forces, to obtain (for constant fluid density): ∂ h ∂ t + ∂ ∂ x ( ( H + h ) u ) + ∂ ∂ y ( ( H + h ) v ) = 0 , ∂ u ∂ t + u ∂ u ∂ x + v ∂ u ∂ y − f v = − g ∂ h ∂ x − k u + ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 ) , ∂ v ∂ t + u ∂ v ∂ x + v ∂ v ∂ y + f u = − g ∂ h ∂ y − k v + ν ( ∂ 2 v ∂ x 2 + ∂ 2 v ∂ y 2 ) , {\displaystyle {\begin{aligned}{\frac {\partial h}{\partial t}}&+{\frac {\partial }{\partial x}}{\Bigl (}(H+h)u{\Bigr )}+{\frac {\partial }{\partial y}}{\Bigl (}(H+h)v{\Bigr )}=0,\\[3pt]{\frac {\partial u}{\partial t}}&+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}-fv=-g{\frac {\partial h}{\partial x}}-ku+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}\right),\\[3pt]{\frac {\partial v}{\partial t}}&+u{\frac {\partial v}{\partial x}}+v{\frac {\partial v}{\partial y}}+fu=-g{\frac {\partial h}{\partial y}}-kv+\nu \left({\frac {\partial ^{2}v}{\partial x^{2}}}+{\frac {\partial ^{2}v}{\partial y^{2}}}\right),\end{aligned}}} where It is often the case that the terms quadratic in u and v, which represent the effect of bulk advection, are small compared to the other terms. This is called geostrophic balance, and is equivalent to saying that the Rossby number is small. Assuming also that the wave height is very small compared to the mean height (h ≪ H), we have (without lateral viscous forces): ∂ h ∂ t + H ( ∂ u ∂ x + ∂ v ∂ y ) = 0 , ∂ u ∂ t − f v = − g ∂ h ∂ x − k u , ∂ v ∂ t + f u = − g ∂ h ∂ y − k v . {\displaystyle {\begin{aligned}{\frac {\partial h}{\partial t}}&+H\left({\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}\right)=0,\\[3pt]{\frac {\partial u}{\partial t}}&-fv=-g{\frac {\partial h}{\partial x}}-ku,\\[3pt]{\frac {\partial v}{\partial t}}&+fu=-g{\frac {\partial h}{\partial y}}-kv.\end{aligned}}} == One-dimensional Saint-Venant equations == The one-dimensional (1-D) Saint-Venant equations were derived by Adhémar Jean Claude Barré de Saint-Venant, and are commonly used to model transient open-channel flow and surface runoff. They can be viewed as a contraction of the two-dimensional (2-D) shallow-water equations, which are also known as the two-dimensional Saint-Venant equations. 
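Before turning to the one-dimensional equations, the linearised system above admits a very small numerical illustration. The sketch below (numpy assumed; the depth, domain size and initial bump are arbitrary illustrative values) integrates the one-dimensional, non-rotating, frictionless form on a staggered periodic grid and confirms that a small disturbance travels at the shallow-water wave speed √(gH):

import numpy as np

g, H = 9.81, 4000.0                  # gravity and mean depth (illustrative ocean value)
L, N = 4.0e6, 400                    # domain length [m] and number of cells
dx = L / N
c = np.sqrt(g * H)                   # expected wave speed, about 198 m/s
dt = 0.5 * dx / c                    # time step respecting the CFL condition

x = (np.arange(N) + 0.5) * dx
h = np.exp(-((x - L / 4) / 1.0e5)**2)     # small surface displacement, h << H
u = np.zeros(N)                           # velocity at the right face of each cell

nsteps = 150
for _ in range(nsteps):
    u -= g * dt / dx * (np.roll(h, -1) - h)    # du/dt = -g dh/dx
    h -= H * dt / dx * (u - np.roll(u, 1))     # dh/dt = -H du/dx

t = nsteps * dt
crest = x[np.argmax(np.where(x > L / 4, h, 0.0))]
print("expected displacement of the right-going crest:", c * t)
print("displacement found in the simulation:          ", crest - L / 4)

The crest displacement agrees with c·t to within one grid spacing, reflecting the non-dispersive propagation implied by the linearised equations.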
The 1-D Saint-Venant equations contain to a certain extent the main characteristics of the channel cross-sectional shape. The 1-D equations are used extensively in computer models such as TUFLOW, Mascaret (EDF), SIC (Irstea), HEC-RAS, SWMM5, InfoWorks, Flood Modeller, SOBEK 1DFlow, MIKE 11, and MIKE SHE because they are significantly easier to solve than the full shallow-water equations. Common applications of the 1-D Saint-Venant equations include flood routing along rivers (including evaluation of measures to reduce the risks of flooding), dam break analysis, storm pulses in an open channel, as well as storm runoff in overland flow. === Equations === The system of partial differential equations which describe the 1-D incompressible flow in an open channel of arbitrary cross section – as derived and posed by Saint-Venant in his 1871 paper (equations 19 & 20) – is: and where x is the space coordinate along the channel axis, t denotes time, A(x,t) is the cross-sectional area of the flow at location x, u(x,t) is the flow velocity, ζ(x,t) is the free surface elevation and τ(x,t) is the wall shear stress along the wetted perimeter P(x,t) of the cross section at x. Further ρ is the (constant) fluid density and g is the gravitational acceleration. Closure of the hyperbolic system of equations (1)–(2) is obtained from the geometry of cross sections – by providing a functional relationship between the cross-sectional area A and the surface elevation ζ at each position x. For example, for a rectangular cross section, with constant channel width B and channel bed elevation zb, the cross sectional area is: A = B (ζ − zb) = B h. The instantaneous water depth is h(x,t) = ζ(x,t) − zb(x), with zb(x) the bed level (i.e. elevation of the lowest point in the bed above datum, see the cross-section figure). For non-moving channel walls the cross-sectional area A in equation (1) can be written as: A ( x , t ) = ∫ 0 h ( x , t ) b ( x , h ′ ) d h ′ , {\displaystyle A(x,t)=\int _{0}^{h(x,t)}b(x,h')\,dh',} with b(x,h) the effective width of the channel cross section at location x when the fluid depth is h – so b(x, h) = B(x) for rectangular channels. The wall shear stress τ is dependent on the flow velocity u, they can be related by using e.g. the Darcy–Weisbach equation, Manning formula or Chézy formula. Further, equation (1) is the continuity equation, expressing conservation of water volume for this incompressible homogeneous fluid. Equation (2) is the momentum equation, giving the balance between forces and momentum change rates. The bed slope S(x), friction slope Sf(x, t) and hydraulic radius R(x, t) are defined as: S = − d z b d x , {\displaystyle S=-{\frac {\mathrm {d} z_{\mathrm {b} }}{\mathrm {d} x}},} S f = τ ρ g R {\displaystyle S_{\mathrm {f} }={\frac {\tau }{\rho gR}}} and R = A P . {\displaystyle R={\frac {A}{P}}.} Consequently, the momentum equation (2) can be written as: === Conservation of momentum === The momentum equation (3) can also be cast in the so-called conservation form, through some algebraic manipulations on the Saint-Venant equations, (1) and (3). In terms of the discharge Q = Au: where A, I1 and I2 are functions of the channel geometry, described in the terms of the channel width B(σ,x). Here σ is the height above the lowest point in the cross section at location x, see the cross-section figure. 
So σ is the height above the bed level zb(x) (of the lowest point in the cross section): A ( σ , x ) = ∫ 0 σ B ( σ ′ , x ) d σ ′ , I 1 ( σ , x ) = ∫ 0 σ ( σ − σ ′ ) B ( σ ′ , x ) d σ ′ and I 2 ( σ , x ) = ∫ 0 σ ( σ − σ ′ ) ∂ B ( σ ′ , x ) ∂ x d σ ′ . {\displaystyle {\begin{aligned}A(\sigma ,x)&=\int _{0}^{\sigma }B(\sigma ',x)\;\mathrm {d} \sigma ',\\I_{1}(\sigma ,x)&=\int _{0}^{\sigma }(\sigma -\sigma ')\,B(\sigma ^{\prime },x)\;\mathrm {d} \sigma '\qquad {\text{and}}\\I_{2}(\sigma ,x)&=\int _{0}^{\sigma }(\sigma -\sigma ')\,{\frac {\partial B(\sigma ',x)}{\partial x}}\;\mathrm {d} \sigma '.\end{aligned}}} Above – in the momentum equation (4) in conservation form – A, I1 and I2 are evaluated at σ = h(x,t). The term g I1 describes the hydrostatic force in a certain cross section. And, for a non-prismatic channel, g I2 gives the effects of geometry variations along the channel axis x. In applications, depending on the problem at hand, there often is a preference for using either the momentum equation in non-conservation form, (2) or (3), or the conservation form (4). For instance in case of the description of hydraulic jumps, the conservation form is preferred since the momentum flux is continuous across the jump. === Characteristics === The Saint-Venant equations (1)–(2) can be analysed using the method of characteristics. The two celerities dx/dt on the characteristic curves are: d x d t = u ± c , {\displaystyle {\frac {\mathrm {d} x}{\mathrm {d} t}}=u\pm c,} with c = g A B . {\displaystyle c={\sqrt {\frac {gA}{B}}}.} The Froude number Fr = |u| / c determines whether the flow is subcritical (Fr < 1) or supercritical (Fr > 1). For a rectangular and prismatic channel of constant width B, i.e. with A = B h and c = √gh, the Riemann invariants are: r + = u + 2 g h {\displaystyle r_{+}=u+2{\sqrt {gh}}} and r − = u − 2 g h , {\displaystyle r_{-}=u-2{\sqrt {gh}},} so the equations in characteristic form are: d d t ( u + 2 g h ) = g ( S − S f ) along d x d t = u + g h and d d t ( u − 2 g h ) = g ( S − S f ) along d x d t = u − g h . {\displaystyle {\begin{aligned}&{\frac {\mathrm {d} }{\mathrm {d} t}}\left(u+2{\sqrt {gh}}\right)=g\left(S-S_{f}\right)&&{\text{along}}\quad {\frac {\mathrm {d} x}{\mathrm {d} t}}=u+{\sqrt {gh}}\quad {\text{and}}\\&{\frac {\mathrm {d} }{\mathrm {d} t}}\left(u-2{\sqrt {gh}}\right)=g\left(S-S_{f}\right)&&{\text{along}}\quad {\frac {\mathrm {d} x}{\mathrm {d} t}}=u-{\sqrt {gh}}.\end{aligned}}} The Riemann invariants and method of characteristics for a prismatic channel of arbitrary cross-section are described by Didenkulova & Pelinovsky (2011). The characteristics and Riemann invariants provide important information on the behavior of the flow, as well as that they may be used in the process of obtaining (analytical or numerical) solutions. === Hamiltonian structure for frictionless flow === In case there is no friction and the channel has a rectangular prismatic cross section, the Saint-Venant equations have a Hamiltonian structure. The Hamiltonian H is equal to the energy of the free-surface flow: H = ρ ∫ ( 1 2 A u 2 + 1 2 g B ζ 2 ) d x , {\displaystyle H=\rho \int \left({\frac {1}{2}}Au^{2}+{\frac {1}{2}}gB\zeta ^{2}\right)\mathrm {d} x,} with constant B the channel width and ρ the constant fluid density. 
Hamilton's equations then are: ρ B ∂ ζ ∂ t + ∂ ∂ x ( ∂ H ∂ u ) = ρ ( B ∂ ζ ∂ t + ∂ ( A u ) ∂ x ) = ρ ( ∂ A ∂ t + ∂ ( A u ) ∂ x ) = 0 , ρ B ∂ u ∂ t + ∂ ∂ x ( ∂ H ∂ ζ ) = ρ B ( ∂ u ∂ t + u ∂ u ∂ x + g ∂ ζ ∂ x ) = 0 , {\displaystyle {\begin{aligned}&\rho B{\frac {\partial \zeta }{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {\partial H}{\partial u}}\right)=\rho \left(B{\frac {\partial \zeta }{\partial t}}+{\frac {\partial (Au)}{\partial x}}\right)=\rho \left({\frac {\partial A}{\partial t}}+{\frac {\partial (Au)}{\partial x}}\right)=0,\\&\rho B{\frac {\partial u}{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {\partial H}{\partial \zeta }}\right)=\rho B\left({\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial \zeta }{\partial x}}\right)=0,\end{aligned}}} since ∂A/∂ζ = B). === Derived modelling === ==== Dynamic wave ==== The dynamic wave is the full one-dimensional Saint-Venant equation. It is numerically challenging to solve, but is valid for all channel flow scenarios. The dynamic wave is used for modeling transient storms in modeling programs including Mascaret (EDF), SIC (Irstea), HEC-RAS, Infoworks ICM MIKE 11, Wash 123d and SWMM5. In the order of increasing simplifications, by removing some terms of the full 1D Saint-Venant equations (aka Dynamic wave equation), we get the also classical Diffusive wave equation and Kinematic wave equation. ==== Diffusive wave ==== For the diffusive wave it is assumed that the inertial terms are less than the gravity, friction, and pressure terms. The diffusive wave can therefore be more accurately described as a non-inertia wave, and is written as: g ∂ h ∂ x + g ( S f − S ) = 0. {\displaystyle g{\frac {\partial h}{\partial x}}+g(S_{f}-S)=0.} The diffusive wave is valid when the inertial acceleration is much smaller than all other forms of acceleration, or in other words when there is primarily subcritical flow, with low Froude values. Models that use the diffusive wave assumption include MIKE SHE and LISFLOOD-FP. In the SIC (Irstea) software this options is also available, since the 2 inertia terms (or any of them) can be removed in option from the interface. ==== Kinematic wave ==== For the kinematic wave it is assumed that the flow is uniform, and that the friction slope is approximately equal to the slope of the channel. This simplifies the full Saint-Venant equation to the kinematic wave: S f − S = 0. {\displaystyle S_{f}-S=0.} The kinematic wave is valid when the change in wave height over distance and velocity over distance and time is negligible relative to the bed slope, e.g. for shallow flows over steep slopes. The kinematic wave is used in HEC-HMS. === Derivation from Navier–Stokes equations === The 1-D Saint-Venant momentum equation can be derived from the Navier–Stokes equations that describe fluid motion. 
The x-component of the Navier–Stokes equations – when expressed in Cartesian coordinates in the x-direction – can be written as: ∂ u ∂ t + u ∂ u ∂ x + v ∂ u ∂ y + w ∂ u ∂ z = − ∂ p ∂ x 1 ρ + ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 ) + f x , {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}+w{\frac {\partial u}{\partial z}}=-{\frac {\partial p}{\partial x}}{\frac {1}{\rho }}+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)+f_{x},} where u is the velocity in the x-direction, v is the velocity in the y-direction, w is the velocity in the z-direction, t is time, p is the pressure, ρ is the density of water, ν is the kinematic viscosity, and fx is the body force in the x-direction. If it is assumed that friction is taken into account as a body force, then ν {\displaystyle \nu } can be assumed as zero so: ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 ) = 0. {\displaystyle \nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)=0.} Assuming one-dimensional flow in the x-direction it follows that: v ∂ u ∂ y + w ∂ u ∂ z = 0 {\displaystyle v{\frac {\partial u}{\partial y}}+w{\frac {\partial u}{\partial z}}=0} Assuming also that the pressure distribution is approximately hydrostatic it follows that: p = ρ g h {\displaystyle p=\rho gh} or in differential form: ∂ p = ρ g ( ∂ h ) . {\displaystyle \partial p=\rho g(\partial h).} And when these assumptions are applied to the x-component of the Navier–Stokes equations: − ∂ p ∂ x 1 ρ = − 1 ρ ρ g ( ∂ h ) ∂ x = − g ∂ h ∂ x . {\displaystyle -{\frac {\partial p}{\partial x}}{\frac {1}{\rho }}=-{\frac {1}{\rho }}{\frac {\rho g\left(\partial h\right)}{\partial x}}=-g{\frac {\partial h}{\partial x}}.} There are 2 body forces acting on the channel fluid, namely, gravity and friction: f x = f x , g + f x , f {\displaystyle f_{x}=f_{x,g}+f_{x,f}} where fx,g is the body force due to gravity and fx,f is the body force due to friction. fx,g can be calculated using basic physics and trigonometry: F g = sin ⁡ ( θ ) g M {\displaystyle F_{g}=\sin(\theta )gM} where Fg is the force of gravity in the x-direction, θ is the angle, and M is the mass. The expression for sin θ can be simplified using trigonometry as: sin ⁡ θ = opp hyp . {\displaystyle \sin \theta ={\frac {\text{opp}}{\text{hyp}}}.} For small θ (reasonable for almost all streams) it can be assumed that: sin ⁡ θ = tan ⁡ θ = opp adj = S {\displaystyle \sin \theta =\tan \theta ={\frac {\text{opp}}{\text{adj}}}=S} and given that fx represents a force per unit mass, the expression becomes: f x , g = g S . {\displaystyle f_{x,g}=gS.} Assuming the energy grade line is not the same as the channel slope, and for a reach of consistent slope there is a consistent friction loss, it follows that: f x , f = S f g . 
{\displaystyle f_{x,f}=S_{f}g.} All of these assumptions combined arrives at the 1-dimensional Saint-Venant equation in the x-direction: ∂ u ∂ t + u ∂ u ∂ x + g ∂ h ∂ x + g ( S f − S ) = 0 , {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial h}{\partial x}}+g(S_{f}-S)=0,} ( a ) ( b ) ( c ) ( d ) ( e ) {\displaystyle (a)\quad \ \ (b)\quad \ \ \ (c)\qquad \ \ \ (d)\quad (e)\ } where (a) is the local acceleration term, (b) is the convective acceleration term, (c) is the pressure gradient term, (d) is the friction term, and (e) is the gravity term. Terms The local acceleration (a) can also be thought of as the "unsteady term" as this describes some change in velocity over time. The convective acceleration (b) is an acceleration caused by some change in velocity over position, for example the speeding up or slowing down of a fluid entering a constriction or an opening, respectively. Both these terms make up the inertia terms of the 1-dimensional Saint-Venant equation. The pressure gradient term (c) describes how pressure changes with position, and since the pressure is assumed hydrostatic, this is the change in head over position. The friction term (d) accounts for losses in energy due to friction, while the gravity term (e) is the acceleration due to bed slope. == Wave modelling by shallow-water equations == Shallow-water equations can be used to model Rossby and Kelvin waves in the atmosphere, rivers, lakes and oceans as well as gravity waves in a smaller domain (e.g. surface waves in a bath). In order for shallow-water equations to be valid, the wavelength of the phenomenon they are supposed to model has to be much larger than the depth of the basin where the phenomenon takes place. Somewhat smaller wavelengths can be handled by extending the shallow-water equations using the Boussinesq approximation to incorporate dispersion effects. Shallow-water equations are especially suitable to model tides which have very large length scales (over hundreds of kilometers). For tidal motion, even a very deep ocean may be considered as shallow as its depth will always be much smaller than the tidal wavelength. == Turbulence modelling using non-linear shallow-water equations == Shallow-water equations, in its non-linear form, is an obvious candidate for modelling turbulence in the atmosphere and oceans, i.e. geophysical turbulence. An advantage of this, over Quasi-geostrophic equations, is that it allows solutions like gravity waves, while also conserving energy and potential vorticity. However, there are also some disadvantages as far as geophysical applications are concerned - it has a non-quadratic expression for total energy and a tendency for waves to become shock waves. Some alternate models have been proposed which prevent shock formation. One alternative is to modify the "pressure term" in the momentum equation, but it results in a complicated expression for kinetic energy. Another option is to modify the non-linear terms in all equations, which gives a quadratic expression for kinetic energy, avoids shock formation, but conserves only linearized potential vorticity. == See also == Waves and shallow water == Notes == == Further reading == == External links == Derivation of the shallow-water equations from first principles (instead of simplifying the Navier–Stokes equations, some analytical solutions)
Wikipedia/Shallow_water_equations
A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components. GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change. Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models." Versions designed for decade to century time scale climate applications were created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations. == Terminology == The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modeling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically. == Atmospheric and oceanic models == Atmospheric (AGCMs) and oceanic GCMs (OGCMs) can be coupled to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of submodels such as a sea ice model or a model for evapotranspiration over land, AOGCMs become the basis for a full climate model. == Structure == General Circulation Models (GCMs) discretise the equations for fluid motion and energy transfer and integrate these over time. Unlike simpler models, which make mixing assumptions, GCMs divide the atmosphere and/or oceans into grids of discrete "cells", which represent computational units. Processes internal to a cell—such as convection—that occur on scales too small to be resolved directly are parameterised at the cell level, while other functions govern the interface between cells. Three-dimensional (more properly four-dimensional) GCMs apply discrete equations for fluid motion and integrate these forward in time. They contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.
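The division of labour between resolved dynamics on a grid of cells and per-cell parameterisations can be caricatured in a few lines of code. The sketch below is a deliberately crude toy rather than an excerpt from any real model: it advects a temperature-like field around a ring of cells (the resolved part, which couples neighbouring cells) and then relaxes each cell towards a reference value (standing in for a sub-grid parameterisation, which uses no information from the neighbours). All numbers are arbitrary.

import numpy as np

n = 72                                   # cells around a latitude circle (illustrative)
dx = 5.0e5                               # cell width [m]
wind = 10.0                              # prescribed eastward wind [m/s]
dt = 0.4 * dx / wind                     # time step obeying the advective CFL limit

T = 250.0 + 30.0 * np.cos(np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))   # temperature [K]
T_ref = 270.0                            # reference state used by the toy parameterisation
tau = 5.0 * 86400.0                      # relaxation time scale [s]

for _ in range(500):
    # resolved dynamics: upwind advection exchanges the field between neighbouring cells
    T = T - wind * dt / dx * (T - np.roll(T, 1))
    # parameterisation: applied cell by cell, independently of the neighbours
    T = T + dt / tau * (T_ref - T)

print(T.round(2))

In a real GCM the resolved step is a full three-dimensional dynamical core and the parameterised step covers convection, clouds, radiation and surface processes, but the structural split between the two is the same.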
A simple general circulation model (SGCM) consists of a dynamic core that relates properties such as temperature to others such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are most attenuated. Such models may be used to study atmospheric processes, but are not suitable for climate projections. Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) using imposed sea surface temperatures (SSTs). They may include atmospheric chemistry. AGCMs consist of a dynamical core that integrates the equations of fluid motion, typically for: surface pressure; horizontal components of velocity in layers; temperature and water vapor in layers; and radiation, split into solar/short wave and terrestrial/infrared/long wave. They also carry parameters for: convection, land surface processes, albedo, hydrology and cloud cover. A GCM contains prognostic equations that are a function of time (typically winds, temperature, moisture, and surface pressure) together with diagnostic equations that are evaluated from them for a specific time period. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. Pressure is used to compute the pressure gradient force in the time-dependent equation for the winds. OGCMs model the ocean (with fluxes from the atmosphere imposed) and may contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables. AOGCMs (e.g. HadCM3, GFDL CM2.X) combine the two submodels. They remove the need to specify fluxes across the interface of the ocean surface. These models are the basis for model predictions of future climate, such as are discussed by the IPCC. AOGCMs internalise as many processes as possible. They have been used to provide predictions at a regional scale. While the simpler models are generally susceptible to analysis and their results are easier to understand, AOGCMs may be nearly as hard to analyse as the climate itself. === Grid === The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude/longitude grid). However, non-rectangular grids (e.g., icosahedral) and grids of variable resolution are more often used. The LMDz model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for the El Niño Southern Oscillation (ENSO). Spectral models generally use a Gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: HadCM3, for example, uses 3.75 degrees in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 x 72 for some variables), and has 19 vertical levels. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u, v, T, Q), though a full count would give more (clouds; soil levels).
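The figure of roughly 500,000 variables follows directly from those grid dimensions, as a two-line check shows (plain Python; the breakdown into u, v, T and Q per grid point is as stated above):

nlon, nlat, nlev = 96, 73, 19                 # HadCM3 atmospheric grid
vars_per_point = 4                            # u, v, T and Q at every three-dimensional point
print(nlon * nlat * nlev * vars_per_point)    # 532608, i.e. about 500,000 "basic" variables

Two-dimensional fields such as surface pressure add a further 96 × 73 = 7,008 values each, and additional three-dimensional fields (clouds, soil levels) push the full count higher still.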
HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 x 0.83 degrees in longitude and latitude respectively. These resolutions are lower than is typically used for weather forecasting. Ocean resolutions tend to be higher, for example, HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal. For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. Some experiments use geodesic grids and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere. === Flux correction === Some early versions of AOGCMs required an ad hoc process of "flux correction" to achieve a stable climate. This resulted from separately prepared ocean and atmospheric models that each used an implicit flux from the other component different from what that component could actually produce. Such a model failed to match observations. However, if the fluxes were 'corrected', the factors that led to these unrealistic fluxes might be unrecognised, which could affect model sensitivity. As a result, the vast majority of models used in the current round of IPCC reports do not use flux corrections. The model improvements that now make flux corrections unnecessary include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between the atmosphere and ocean submodels. Improved models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate projections. === Convection === Moist convection releases latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be handled via parameters. This has been done since the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used, although a variety of different schemes are now in use. Clouds are also typically handled with a parameter, for a similar lack of scale. Limited understanding of clouds has limited the success of this strategy, though not because of any inherent shortcoming of the method. === Software === Most models include software to diagnose a wide range of variables for comparison with observations or study of atmospheric processes. An example is the 2-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from surface and lowest-model-layer temperatures. Other software is used for creating plots and animations. == Projections == Coupled AOGCMs use transient climate simulations to project/predict climate changes under various scenarios. These can be idealised scenarios (most commonly, CO2 emissions increasing at 1%/yr) or based on recent history (usually the "IS92a" or more recently the SRES scenarios). Which scenarios are most realistic remains uncertain.
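The most common idealised scenario is easy to quantify: whatever quantity is prescribed to grow at 1% per year, compound growth doubles it in log 2 / log 1.01, roughly 70 years, which is why such idealised runs are often reported around the time of CO2 doubling. A minimal check in plain Python:

import math

rate = 0.01                                            # 1% per year compound increase
years_to_double = math.log(2.0) / math.log(1.0 + rate)
print(round(years_to_double, 1))                       # about 69.7 years

for t in (35, 70, 140):                                # growth factor after t years
    print(t, "years:", round((1.0 + rate)**t, 2), "times the initial value")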
The 2001 IPCC Third Assessment Report Figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which emissions increased at 1% per year. Figure 9.5 shows the response of a smaller number of models to more recent trends. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C. Future scenarios do not include unknown events – for example, volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to greenhouse gas (GHG) forcing in the long term, but large volcanic eruptions, for example, can exert a substantial temporary cooling effect. Human GHG emissions are a model input, although it is possible to include an economic/technological submodel to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model that reflects vegetation and oceanic processes to calculate such levels. === Emissions scenarios === For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–1999) of 1.8 °C to 4.0 °C. Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of 1.1 to 6.4 °C. In 2008 a study made climate projections using several emission scenarios. In a scenario where global emissions start to decrease by 2010 and then decline at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible, although less likely. Another no-reduction scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with a different model, the predicted median warming was 4.1 °C. === Model accuracy === AOGCMs internalise as many processes as are sufficiently understood. However, they are still under development and significant uncertainties remain. They may be coupled to models of other processes in Earth system models, such as the carbon cycle, so as to better model feedback. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when driven by observed changes in greenhouse gases and aerosols. Agreement improves by including both natural and anthropogenic forcings. Imperfect models may nevertheless produce useful results. GCMs are capable of reproducing the general features of the observed global temperature over the past century. A debate over how to reconcile climate model predictions that upper air (tropospheric) warming should be greater than observed surface warming, some of which appeared to show otherwise, was resolved in favour of the models, following data revisions. Cloud effects are a significant area of uncertainty in climate models. Clouds have competing effects on climate. They cool the surface by reflecting sunlight into space; they warm it by increasing the amount of infrared radiation transmitted from the atmosphere to the surface. 
In the 2001 IPCC report possible changes in cloud cover were highlighted as a major uncertainty in predicting climate. Climate researchers around the world use climate models to understand the climate system. Thousands of papers have been published about model-based studies. Part of this research is to improve the models. In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However, the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either indicates progress is required in order to monitor and predict such changes. The precise magnitude of future changes in climate is still uncertain; for the end of the 21st century (2071 to 2100), for SRES scenario A2, the change of global average SAT change from AOGCMs compared with 1961 to 1990 is +3.0 °C (5.4 °F) and the range is +1.3 to +4.5 °C (+2.3 to 8.1 °F). The IPCC's Fifth Assessment Report asserted "very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period". However, the report also observed that the rate of warming over the period 1998–2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models. == Relation to weather forecasting == The global climate models used for climate projections are similar in structure to (and often share computer code with) numerical models for weather prediction, but are nonetheless logically distinct. Most weather forecasting is done on the basis of interpreting numerical model results. Since forecasts are typically a few days or a week and sea surface temperatures change relatively slowly, such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast – typically these are taken from the output of a previous forecast, blended with observations. Weather predictions are required at higher temporal resolutions than climate projections, often sub-hourly compared to monthly or yearly averages for climate. However, because weather forecasts only cover around 10 days the models can also be run at higher vertical and horizontal resolutions than climate mode. Currently the ECMWF runs at 9 km (5.6 mi) resolution as opposed to the 100-to-200 km (62-to-124 mi) scale used by typical climate model runs. Often local models are run using global model results for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a mesoscale model with an 11 km (6.8 mi) resolution covering the UK, and various agencies in the US employ models such as the NGM and NAM models. Like most global numerical weather prediction models such as the GFS, global climate models are often spectral models instead of grid models. Spectral models are often used for global models because some computations in modeling can be performed faster, thus reducing run times. == Computations == Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface and ice. 
All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature. The most talked-about models of recent years relate temperature to emissions of greenhouse gases. These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes. Three (or more properly, four since time is also considered) dimensional GCMs discretise the equations for fluid motion and energy transfer and integrate these over time. They also contain parametrisations for processes such as convection that occur on scales too small to be resolved directly. Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat) combine the two models. Models range in complexity: a simple radiant heat transfer model treats the earth as a single point and averages outgoing energy (a minimal numerical sketch of such a model is given below); this can be expanded vertically (radiative-convective models), or horizontally; finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange. Box models treat flows across and within ocean basins. Other submodels can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems. == Comparison with other climate models == === Earth-system models of intermediate complexity (EMICs) === The Climber-3 model uses a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and time step of 1/2 a day. An oceanic submodel is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels. === Radiative-convective models (RCM) === One-dimensional, radiative-convective models were used to verify basic climate assumptions in the 1980s and 1990s. === Earth system models === GCMs can form part of Earth system models, e.g. by coupling ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon chemistry transport model may allow a GCM to better predict anthropogenic changes in carbon dioxide concentrations. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the effects of climate change on the ozone hole to be studied. == History == In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model. Following Phillips's work, several groups began working to create GCMs. The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined. In 1996, efforts began to model soil and vegetation types. Later the Hadley Centre for Climate Prediction and Research's HadCM3 model coupled ocean-atmosphere elements. The role of gravity waves was added in the mid-1980s. Gravity waves are required to simulate regional and global scale circulations accurately.
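The simplest level of this hierarchy, the single-point radiant heat transfer model mentioned under Computations above, can be written out explicitly. The sketch below balances absorbed solar radiation against blackbody emission; the solar constant and albedo are standard round values, and the calculation ignores the atmosphere entirely:

S = 1361.0          # solar constant [W/m^2]
alpha = 0.3         # planetary albedo
sigma = 5.670e-8    # Stefan-Boltzmann constant [W/m^2/K^4]

# absorbed solar flux, averaged over the sphere, equals the emitted blackbody flux
T_e = ((1.0 - alpha) * S / 4.0 / sigma) ** 0.25
print(round(T_e, 1))    # about 255 K

The resulting effective temperature of about 255 K is some 33 K colder than the observed global mean surface temperature of roughly 288 K; representing the greenhouse effect responsible for that difference is exactly what motivates the vertical (radiative-convective) and ultimately the fully three-dimensional treatments described above.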
== See also == Atmospheric Model Intercomparison Project (AMIP) Atmospheric Radiation Measurement (ARM) (in the US) Earth Simulator Global Environmental Multiscale Model Ice-sheet model Intermediate General Circulation Model NCAR Prognostic variable Charney Report == References == IPCC AR4 SYR (2007), Core Writing Team; Pachauri, R.K; Reisinger, A. (eds.), Climate Change 2007: Synthesis Report (SYR), Contribution of Working Groups I, II and III to the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change, Geneva, Switzerland: IPCC, ISBN 978-92-9169-122-7{{citation}}: CS1 maint: numeric names: authors list (link). == Further reading == Ian Roulstone & John Norbury (2013). Invisible in the Storm: the role of mathematics in understanding weather. Princeton University Press. ISBN 978-0691152721. == External links == IPCC AR5, Evaluation of Climate Models "High Resolution Climate Modeling". – with media including videos, animations, podcasts and transcripts on climate models "Flexible Modeling System (FMS)". Geophysical Fluid Dynamics Laboratory. – GFDL's Flexible Modeling System containing code for the climate models Program for climate model diagnosis and intercomparison (PCMDI/CMIP) National Operational Model Archive and Distribution System (NOMADS) Archived 30 January 2016 at the Wayback Machine Hadley Centre for Climate Prediction and Research – model info NCAR/UCAR Community Climate System Model (CESM) Climate prediction, community modeling NASA/GISS, primary research GCM model EDGCM/NASA: Educational Global Climate Modeling Archived 23 March 2015 at the Wayback Machine NOAA/GFDL Archived 4 March 2016 at the Wayback Machine MAOAM: Martian Atmosphere Observation and Modeling / MPI & MIPT
Wikipedia/General_circulation_model
In fluid dynamics, Airy wave theory (often referred to as linear wave theory) gives a linearised description of the propagation of gravity waves on the surface of a homogeneous fluid layer. The theory assumes that the fluid layer has a uniform mean depth, and that the fluid flow is inviscid, incompressible and irrotational. This theory was first published, in correct form, by George Biddell Airy in the 19th century. Airy wave theory is often applied in ocean engineering and coastal engineering for the modelling of random sea states – giving a description of the wave kinematics and dynamics of high-enough accuracy for many purposes. Further, several second-order nonlinear properties of surface gravity waves, and their propagation, can be estimated from its results. Airy wave theory is also a good approximation for tsunami waves in the ocean, before they steepen near the coast. This linear theory is often used to get a quick and rough estimate of wave characteristics and their effects. This approximation is accurate for small ratios of the wave height to water depth (for waves in shallow water), and wave height to wavelength (for waves in deep water). == Description == Airy wave theory uses a potential flow (or velocity potential) approach to describe the motion of gravity waves on a fluid surface. The use of (inviscid and irrotational) potential flow in water waves is remarkably successful, given its failure to describe many other fluid flows where it is often essential to take viscosity, vorticity, turbulence or flow separation into account. This is due to the fact that for the oscillatory part of the fluid motion, wave-induced vorticity is restricted to some thin oscillatory Stokes boundary layers at the boundaries of the fluid domain. Airy wave theory is often used in ocean engineering and coastal engineering. Especially for random waves, sometimes called wave turbulence, the evolution of the wave statistics – including the wave spectrum – is predicted well over not too long distances (in terms of wavelengths) and in not too shallow water. Diffraction is one of the wave effects which can be described with Airy wave theory. Further, by using the WKBJ approximation, wave shoaling and refraction can be predicted. Earlier attempts to describe surface gravity waves using potential flow were made by, among others, Laplace, Poisson, Cauchy and Kelland. But Airy was the first to publish the correct derivation and formulation in 1841. Soon after, in 1847, the linear theory of Airy was extended by Stokes for non-linear wave motion – known as Stokes' wave theory – correct up to third order in the wave steepness. Even before Airy's linear theory, Gerstner derived a nonlinear trochoidal wave theory in 1802, which however is not irrotational. Airy wave theory is a linear theory for the propagation of waves on the surface of a potential flow and above a horizontal bottom. The free surface elevation η(x,t) of one wave component is sinusoidal, as a function of horizontal position x and time t: η ( x , t ) = a cos ⁡ ( k x − ω t ) {\displaystyle \eta (x,t)=a\cos \left(kx-\omega t\right)} where a is the wave amplitude in metres, cos is the cosine function, k is the angular wavenumber in radians per metre, related to the wavelength λ by k = ⁠2π/λ⁠, ω is the angular frequency in radians per second, related to the period T and frequency f by ω = ⁠2π/T⁠ = 2πf. The waves propagate along the water surface with the phase speed cp: c p = ω k = λ T . 
{\displaystyle c_{p}={\frac {\omega }{k}}={\frac {\lambda }{T}}.} The angular wavenumber k and frequency ω are not independent parameters (and thus also wavelength λ and period T are not independent), but are coupled. Surface gravity waves on a fluid are dispersive waves – exhibiting frequency dispersion – meaning that each wavenumber has its own frequency and phase speed. Note that in engineering the wave height H – the difference in elevation between crest and trough – is often used: H = 2 a and a = 1 2 H , {\displaystyle H=2a\quad {\text{and}}\quad a={\tfrac {1}{2}}H,} valid in the present case of linear periodic waves. Underneath the surface, there is a fluid motion associated with the free surface motion. While the surface elevation shows a propagating wave, the fluid particles are in an orbital motion. Within the framework of Airy wave theory, the orbits are closed curves: circles in deep water and ellipses in finite depth—with the circles dying out before reaching the bottom of the fluid layer, and the ellipses becoming flatter near the bottom of the fluid layer. So while the wave propagates, the fluid particles just orbit (oscillate) around their average position. With the propagating wave motion, the fluid particles transfer energy in the wave propagation direction, without having a mean velocity. The diameter of the orbits reduces with depth below the free surface. In deep water, the orbit's diameter is reduced to 4% of its free-surface value at a depth of half a wavelength. In a similar fashion, there is also a pressure oscillation underneath the free surface, with wave-induced pressure oscillations reducing with depth below the free surface – in the same way as for the orbital motion of fluid parcels. == Mathematical formulation of the wave motion == === Flow problem formulation === The waves propagate in the horizontal direction, with coordinate x, and a fluid domain bound above by a free surface at z = η(x,t), with z the vertical coordinate (positive in the upward direction) and t being time. The level z = 0 corresponds with the mean surface elevation. The impermeable bed underneath the fluid layer is at z = −h. Further, the flow is assumed to be incompressible and irrotational – a good approximation of the flow in the fluid interior for waves on a liquid surface – and potential theory can be used to describe the flow. The velocity potential Φ(x, z, t) is related to the flow velocity components ux and uz in the horizontal (x) and vertical (z) directions by: u x = ∂ Φ ∂ x and u z = ∂ Φ ∂ z . {\displaystyle u_{x}={\frac {\partial \Phi }{\partial x}}\quad {\text{and}}\quad u_{z}={\frac {\partial \Phi }{\partial z}}.} Then, due to the continuity equation for an incompressible flow, the potential Φ has to satisfy the Laplace equation: Boundary conditions are needed at the bed and the free surface in order to close the system of equations. For their formulation within the framework of linear theory, it is necessary to specify what the base state (or zeroth-order solution) of the flow is. Here, we assume the base state is rest, implying the mean flow velocities are zero. The bed being impermeable, leads to the kinematic bed boundary-condition: In case of deep water – by which is meant infinite water depth, from a mathematical point of view – the flow velocities have to go to zero in the limit as the vertical coordinate goes to minus infinity: z → −∞. 
At the free surface, for infinitesimal waves, the vertical motion of the flow has to be equal to the vertical velocity of the free surface. This leads to the kinematic free-surface boundary-condition: If the free surface elevation η(x,t) was a known function, this would be enough to solve the flow problem. However, the surface elevation is an extra unknown, for which an additional boundary condition is needed. This is provided by Bernoulli's equation for an unsteady potential flow. The pressure above the free surface is assumed to be constant. This constant pressure is taken equal to zero, without loss of generality, since the level of such a constant pressure does not alter the flow. After linearisation, this gives the dynamic free-surface boundary condition: Because this is a linear theory, in both free-surface boundary conditions – the kinematic and the dynamic one, equations (3) and (4) – the value of Φ and ⁠∂Φ/∂z⁠ at the fixed mean level z = 0 is used. === Solution for a progressive monochromatic wave === For a propagating wave of a single frequency – a monochromatic wave – the surface elevation is of the form: η = a cos ⁡ ( k x − ω t ) . {\displaystyle \eta =a\cos(kx-\omega t).} The associated velocity potential, satisfying the Laplace equation (1) in the fluid interior, as well as the kinematic boundary conditions at the free surface (2), and bed (3), is: Φ = ω k a cosh ⁡ k ( z + h ) sinh ⁡ k h sin ⁡ ( k x − ω t ) , {\displaystyle \Phi ={\frac {\omega }{k}}a{\frac {\cosh k(z+h)}{\sinh kh}}\sin(kx-\omega t),} with sinh and cosh the hyperbolic sine and hyperbolic cosine function, respectively. But η and Φ also have to satisfy the dynamic boundary condition, which results in non-trivial (non-zero) values for the wave amplitude a only if the linear dispersion relation is satisfied: ω 2 = g k tanh ⁡ k h , {\displaystyle \omega ^{2}=gk\tanh kh,} with tanh the hyperbolic tangent. So angular frequency ω and wavenumber k – or equivalently period T and wavelength λ – cannot be chosen independently, but are related. This means that wave propagation at a fluid surface is an eigenproblem. When ω and k satisfy the dispersion relation, the wave amplitude a can be chosen freely (but small enough for Airy wave theory to be a valid approximation). === Table of wave quantities === In the table below, several flow quantities and parameters according to Airy wave theory are given. The given quantities are for a bit more general situation as for the solution given above. Firstly, the waves may propagate in an arbitrary horizontal direction in the x = (x,y) plane. The wavenumber vector is k, and is perpendicular to the cams of the wave crests. Secondly, allowance is made for a mean flow velocity U, in the horizontal direction and uniform over (independent of) depth z. This introduces a Doppler shift in the dispersion relations. At an Earth-fixed location, the observed angular frequency (or absolute angular frequency) is ω. On the other hand, in a frame of reference moving with the mean velocity U (so the mean velocity as observed from this reference frame is zero), the angular frequency is different. It is called the intrinsic angular frequency (or relative angular frequency), denoted σ. So in pure wave motion, with U = 0, both frequencies ω and σ are equal. The wave number k (and wavelength λ) are independent of the frame of reference, and have no Doppler shift (for monochromatic waves). 
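The dispersion relation ω² = gk tanh kh given above has no closed-form solution for k when the period and depth are specified, so it is usually solved numerically. The following sketch – an illustrative implementation with example values for period and depth, not a standard library routine – uses Newton's method starting from the deep-water guess k = ω²/g.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(omega, depth, tol=1e-12, max_iter=100):
    """Solve the linear dispersion relation omega^2 = g k tanh(k h) for k
    with Newton's method, starting from the deep-water guess k = omega^2 / g."""
    k = omega ** 2 / G
    for _ in range(max_iter):
        f = G * k * math.tanh(k * depth) - omega ** 2
        dfdk = G * math.tanh(k * depth) + G * k * depth / math.cosh(k * depth) ** 2
        k_new = k - f / dfdk
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

if __name__ == "__main__":
    period = 10.0   # wave period T in seconds (example value)
    depth = 25.0    # water depth h in metres (example value)
    omega = 2.0 * math.pi / period
    k = wavenumber(omega, depth)
    print(f"wavelength = {2 * math.pi / k:.1f} m, phase speed = {omega / k:.2f} m/s")
```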
The table only gives the oscillatory parts of flow quantities – velocities, particle excursions and pressure – and not their mean value or drift. The oscillatory particle excursions ξx and ξz are the time integrals of the oscillatory flow velocities ux and uz respectively. Water depth is classified into three regimes: deep water – for a water depth larger than half the wavelength, h > ⁠1/2⁠λ, the phase speed of the waves is hardly influenced by depth (this is the case for most wind waves on the sea and ocean surface), shallow water – for a water depth smaller than 5% of the wavelength, h < ⁠1/20⁠λ, the phase speed of the waves is only dependent on water depth, and no longer a function of period or wavelength; and intermediate depth – all other cases, ⁠1/20⁠λ < h < ⁠1/2⁠λ, where both water depth and period (or wavelength) have a significant influence on the solution of Airy wave theory. In the limiting cases of deep and shallow water, simplifying approximations to the solution can be made. While for intermediate depth, the full formulations have to be used. == Surface tension effects == Due to surface tension, the dispersion relation changes to: Ω 2 ( k ) = ( g + γ ρ k 2 ) k tanh ⁡ k h , {\displaystyle \Omega ^{2}(k)=\left(g+{\frac {\gamma }{\rho }}k^{2}\right)k\,\tanh kh,} with γ the surface tension in newtons per metre. All above equations for linear waves remain the same, if the gravitational acceleration g is replaced by g ~ = g + γ ρ k 2 . {\displaystyle {\tilde {g}}=g+{\frac {\gamma }{\rho }}k^{2}.} As a result of surface tension, the waves propagate faster. Surface tension only has influence for short waves, with wavelengths less than a few decimeters in case of a water–air interface. For very short wavelengths – 2 mm or less, in case of the interface between air and water – gravity effects are negligible. Note that surface tension can be altered by surfactants. The group velocity ⁠∂Ω/∂k⁠ of capillary waves – dominated by surface tension effects – is greater than the phase velocity ⁠Ω/k⁠. This is opposite to the situation of surface gravity waves (with surface tension negligible compared to the effects of gravity) where the phase velocity exceeds the group velocity. == Interfacial waves == Surface waves are a special case of interfacial waves, on the interface between two fluids of different density. === Two layers of infinite depth === Consider two fluids separated by an interface, and without further boundaries. Then their dispersion relation ω2 = Ω2(k) is given through Ω 2 ( k ) = | k | ( ρ − ρ ′ ρ + ρ ′ g + γ ρ + ρ ′ k 2 ) , {\displaystyle \Omega ^{2}(k)=|k|\left({\frac {\rho -\rho '}{\rho +\rho '}}g+{\frac {\gamma }{\rho +\rho '}}k^{2}\right),} where ρ and ρ′ are the densities of the two fluids, below (ρ) and above (ρ′) the interface, respectively. Further γ is the surface tension on the interface. For interfacial waves to exist, the lower layer has to be heavier than the upper one, ρ > ρ′. Otherwise, the interface is unstable and a Rayleigh–Taylor instability develops. 
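To make the deep-water interfacial dispersion relation just given concrete, the sketch below evaluates Ω(k) for an illustrative salt-water/fresh-water interface and for an air–water surface, and encodes the stability requirement ρ > ρ′. The densities, surface tension and wavelength are assumed example values, not data from this article.

```python
import math

G = 9.81  # m/s^2

def interfacial_omega(k, rho_lower, rho_upper, surface_tension=0.0):
    """Angular frequency of a deep-water interfacial wave:
    Omega^2 = |k| * ((rho - rho')/(rho + rho') * g + gamma/(rho + rho') * k^2).
    Returns None when rho' >= rho (Rayleigh-Taylor unstable: no real frequency)."""
    if rho_upper >= rho_lower:
        return None
    omega_sq = abs(k) * (
        (rho_lower - rho_upper) / (rho_lower + rho_upper) * G
        + surface_tension / (rho_lower + rho_upper) * k ** 2
    )
    return math.sqrt(omega_sq)

if __name__ == "__main__":
    k = 2.0 * math.pi / 100.0  # 100 m wavelength (example)
    # Salty water (below) under nearly fresh water (above): an internal wave.
    w_internal = interfacial_omega(k, rho_lower=1025.0, rho_upper=1000.0)
    # Same wavenumber on a free water surface (air above, surface tension included).
    w_surface = interfacial_omega(k, rho_lower=1000.0, rho_upper=1.2,
                                  surface_tension=0.074)
    print(f"internal-wave period: {2 * math.pi / w_internal:.0f} s")
    print(f"surface-wave period:  {2 * math.pi / w_surface:.1f} s")
```

With these numbers the interfacial wave has a period of roughly a minute, compared with a few seconds for a surface wave of the same wavelength, illustrating how weak the restoring force is when the two densities are nearly equal.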
=== Two layers between horizontal rigid planes === For two homogeneous layers of fluids, of mean thickness h below the interface and h′ above – under the action of gravity and bounded above and below by horizontal rigid walls – the dispersion relationship ω2 = Ω2(k) for gravity waves is provided by: Ω 2 ( k ) = g k ( ρ − ρ ′ ) ρ coth ⁡ k h + ρ ′ coth ⁡ k h ′ , {\displaystyle \Omega ^{2}(k)={\frac {gk(\rho -\rho ')}{\rho \coth kh+\rho '\coth kh'}},} where again ρ and ρ′ are the densities below and above the interface, while coth is the hyperbolic cotangent function. For the case ρ′ is zero this reduces to the dispersion relation of surface gravity waves on water of finite depth h. === Two layers bounded above by a free surface === In this case the dispersion relation allows for two modes: a barotropic mode where the free surface amplitude is large compared with the amplitude of the interfacial wave, and a baroclinic mode where the opposite is the case – the interfacial wave is higher than and in antiphase with the free surface wave. The dispersion relation for this case is of a more complicated form. == Second-order wave properties == Several second-order wave properties, ones that are quadratic in the wave amplitude a, can be derived directly from Airy wave theory. They are of importance in many practical applications, such as forecasts of wave conditions. Using a WKBJ approximation, second-order wave properties also find their applications in describing waves in case of slowly varying bathymetry, and mean-flow variations of currents and surface elevation. As well as in the description of the wave and mean-flow interactions due to time and space-variations in amplitude, frequency, wavelength and direction of the wave field itself. === Table of second-order wave properties === In the table below, several second-order wave properties – as well as the dynamical equations they satisfy in case of slowly varying conditions in space and time – are given. More details on these can be found below. The table gives results for wave propagation in one horizontal spatial dimension. Further on in this section, more detailed descriptions and results are given for the general case of propagation in two-dimensional horizontal space. The last four equations describe the evolution of slowly varying wave trains over bathymetry in interaction with the mean flow, and can be derived from a variational principle: Whitham's averaged Lagrangian method. In the mean horizontal-momentum equation, d(x) is the still water depth, that is, the bed underneath the fluid layer is located at z = −d. Note that the mean-flow velocity in the mass and momentum equations is the mass transport velocity Ũ, including the splash-zone effects of the waves on horizontal mass transport, and not the mean Eulerian velocity (for example, as measured with a fixed flow meter). === Wave energy density === Wave energy is a quantity of primary interest, since it is a primary quantity that is transported with the wave trains. As can be seen above, many wave quantities like surface elevation and orbital velocity are oscillatory in nature with zero mean (within the framework of linear theory). In water waves, the most used energy measure is the mean wave energy density per unit horizontal area. It is the sum of the kinetic and potential energy density, integrated over the depth of the fluid layer and averaged over the wave phase. 
Simplest to derive is the mean potential energy density per unit horizontal area Epot of the surface gravity waves, which is the deviation of the potential energy due to the presence of the waves: E pot = ∫ − h η ρ g z d z ¯ − ∫ − h 0 ρ g z d z = 1 2 ρ g η 2 ¯ = 1 4 ρ g a 2 . {\displaystyle {\begin{aligned}E_{\text{pot}}&={\overline {\int _{-h}^{\eta }\rho gz\,\mathrm {d} z}}-\int _{-h}^{0}\rho gz\,\mathrm {d} z\\[6px]&={\overline {{\tfrac {1}{2}}\rho g\eta ^{2}}}={\tfrac {1}{4}}\rho ga^{2}.\end{aligned}}} The overbar denotes the mean value (which in the present case of periodic waves can be taken either as a time average or an average over one wavelength in space). The mean kinetic energy density per unit horizontal area Ekin of the wave motion is similarly found to be: E kin = ∫ − h 0 1 2 ρ [ | U + u x | 2 + u z 2 ] d z ¯ − ∫ − h 0 1 2 ρ | U | 2 d z = 1 4 ρ σ 2 k tanh ⁡ k h a 2 , {\displaystyle {\begin{aligned}E_{\text{kin}}&={\overline {\int _{-h}^{0}{\tfrac {1}{2}}\rho \left[\left|\mathbf {U} +\mathbf {u} _{x}\right|^{2}+u_{z}^{2}\right]\,\mathrm {d} z}}-\int _{-h}^{0}{\tfrac {1}{2}}\rho \left|\mathbf {U} \right|^{2}\,\mathrm {d} z\\[6px]&={\tfrac {1}{4}}\rho {\frac {\sigma ^{2}}{k\tanh kh}}a^{2},\end{aligned}}} with σ the intrinsic frequency, see the table of wave quantities. Using the dispersion relation, the result for surface gravity waves is: E kin = 1 4 ρ g a 2 . {\displaystyle E_{\text{kin}}={\tfrac {1}{4}}\rho ga^{2}.} As can be seen, the mean kinetic and potential energy densities are equal. This is a general property of energy densities of progressive linear waves in a conservative system. Adding potential and kinetic contributions, Epot and Ekin, the mean energy density per unit horizontal area E of the wave motion is: E = E pot + E kin = 1 2 ρ g a 2 . {\displaystyle E=E_{\text{pot}}+E_{\text{kin}}={\tfrac {1}{2}}\rho ga^{2}.} In case of surface tension effects not being negligible, their contribution also adds to the potential and kinetic energy densities, giving E pot = E kin = 1 4 ( ρ g + γ k 2 ) a 2 , {\displaystyle E_{\text{pot}}=E_{\text{kin}}={\tfrac {1}{4}}\left(\rho g+\gamma k^{2}\right)a^{2},} so E = E pot + E kin = 1 2 ( ρ g + γ k 2 ) a 2 , {\displaystyle E=E_{\text{pot}}+E_{\text{kin}}={\tfrac {1}{2}}\left(\rho g+\gamma k^{2}\right)a^{2},} with γ the surface tension. === Wave action, wave energy flux and radiation stress === In general, there can be an energy transfer between the wave motion and the mean fluid motion. This means, that the wave energy density is not in all cases a conserved quantity (neglecting dissipative effects), but the total energy density – the sum of the energy density per unit area of the wave motion and the mean flow motion – is. However, there is for slowly varying wave trains, propagating in slowly varying bathymetry and mean-flow fields, a similar and conserved wave quantity, the wave action A = ⁠E/σ⁠: ∂ A ∂ t + ∇ ⋅ [ ( U + c g ) A ] = 0 , {\displaystyle {\frac {\partial {\mathcal {A}}}{\partial t}}+\nabla \cdot \left[\left(\mathbf {U} +\mathbf {c} _{g}\right){\mathcal {A}}\right]=0,} with (U + cg) A the action flux and cg = cgek the group velocity vector. Action conservation forms the basis for many wind wave models and wave turbulence models. It is also the basis of coastal engineering models for the computation of wave shoaling. 
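As noted above, wave action (or, in the absence of currents, wave energy flux) conservation underlies shoaling computations. The sketch below, using assumed example values, propagates a wave toward shallower water by holding E cg constant; the group velocity expression cg = (cp/2)(1 + 2kh/sinh 2kh) follows from differentiating the linear dispersion relation and is not quoted explicitly in the text above.

```python
import math

G = 9.81  # m/s^2

def wavenumber(omega, depth, iterations=50):
    """Newton iteration for the linear dispersion relation omega^2 = g k tanh(k h)."""
    k = omega ** 2 / G
    for _ in range(iterations):
        f = G * k * math.tanh(k * depth) - omega ** 2
        dfdk = G * math.tanh(k * depth) + G * k * depth / math.cosh(k * depth) ** 2
        k -= f / dfdk
    return k

def group_velocity(omega, depth):
    """Group velocity c_g = d(omega)/dk = (c_p / 2) * (1 + 2kh / sinh(2kh))."""
    k = wavenumber(omega, depth)
    cp = omega / k
    kh = k * depth
    return 0.5 * cp * (1.0 + 2.0 * kh / math.sinh(2.0 * kh))

def shoaled_height(height_offshore, depth_offshore, depth_inshore, period):
    """Without currents or dissipation the energy flux E c_g = (1/8) rho g H^2 c_g
    is conserved, so the wave height scales with the inverse square root of c_g."""
    omega = 2.0 * math.pi / period
    cg_off = group_velocity(omega, depth_offshore)
    cg_in = group_velocity(omega, depth_inshore)
    return height_offshore * math.sqrt(cg_off / cg_in)

if __name__ == "__main__":
    # Example: a 1 m high, 10 s wave moving from 100 m to 5 m water depth.
    print(f"shoaled wave height: {shoaled_height(1.0, 100.0, 5.0, 10.0):.2f} m")
```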
Expanding the above wave action conservation equation leads to the following evolution equation for the wave energy density: ∂ E ∂ t + ∇ ⋅ [ ( U + c g ) E ] + S : ( ∇ U ) = 0 , {\displaystyle {\frac {\partial E}{\partial t}}+\nabla \cdot \left[\left(\mathbf {U} +\mathbf {c} _{g}\right)E\right]+{\boldsymbol {S}}:\left(\nabla \mathbf {U} \right)=0,} with: (U + cg)E is the mean wave energy density flux, S is the radiation stress tensor and ∇U is the mean-velocity shear rate tensor. In this equation in non-conservation form, the Frobenius inner product S : (∇U) is the source term describing the energy exchange of the wave motion with the mean flow. Only in the case that the mean shear-rate is zero, ∇U = 0, the mean wave energy density E is conserved. The two tensors S and ∇U are in a Cartesian coordinate system of the form: S = ( S x x S x y S y x S y y ) = I ( c g c p − 1 2 ) E + 1 k 2 ( k x k x k x k y k y k x k y k y ) c g c p E , I = ( 1 0 0 1 ) , ∇ U = ( ∂ U x ∂ x ∂ U y ∂ x ∂ U x ∂ y ∂ U y ∂ y ) , {\displaystyle {\begin{aligned}{\boldsymbol {S}}&={\begin{pmatrix}S_{xx}&S_{xy}\\S_{yx}&S_{yy}\end{pmatrix}}={\boldsymbol {I}}\left({\frac {c_{g}}{c_{p}}}-{\frac {1}{2}}\right)E+{\frac {1}{k^{2}}}{\begin{pmatrix}k_{x}k_{x}&k_{x}k_{y}\\[2ex]k_{y}k_{x}&k_{y}k_{y}\end{pmatrix}}{\frac {c_{g}}{c_{p}}}E,\\[6px]{\boldsymbol {I}}&={\begin{pmatrix}1&0\\0&1\end{pmatrix}},\\[6px]\nabla \mathbf {U} &={\begin{pmatrix}\displaystyle {\frac {\partial U_{x}}{\partial x}}&\displaystyle {\frac {\partial U_{y}}{\partial x}}\\[2ex]\displaystyle {\frac {\partial U_{x}}{\partial y}}&\displaystyle {\frac {\partial U_{y}}{\partial y}}\end{pmatrix}},\end{aligned}}} with kx and ky the components of the wavenumber vector k and similarly Ux and Uy the components in of the mean velocity vector U. === Wave mass flux and wave momentum === The mean horizontal momentum per unit area M induced by the wave motion – and also the wave-induced mass flux or mass transport – is: M = ∫ − h η ρ ( U + u x ) d z ¯ − ∫ − h 0 ρ U d z = E c p e k , {\displaystyle {\begin{aligned}\mathbf {M} &={\overline {\int _{-h}^{\eta }\rho \left(\mathbf {U} +\mathbf {u} _{x}\right)\,\mathrm {d} z}}-\int _{-h}^{0}\rho \mathbf {U} \,\mathrm {d} z\\[6px]&={\frac {E}{c_{p}}}\mathbf {e} _{k},\end{aligned}}} which is an exact result for periodic progressive water waves, also valid for nonlinear waves. However, its validity strongly depends on the way how wave momentum and mass flux are defined. Stokes already identified two possible definitions of phase velocity for periodic nonlinear waves: Stokes first definition of wave celerity (S1) – with the mean Eulerian flow velocity equal to zero for all elevations z' below the wave troughs, and Stokes second definition of wave celerity (S2) – with the mean mass transport equal to zero. The above relation between wave momentum M and wave energy density E is valid within the framework of Stokes' first definition. However, for waves perpendicular to a coast line or in closed laboratory wave channel, the second definition (S2) is more appropriate. These wave systems have zero mass flux and momentum when using the second definition. In contrast, according to Stokes' first definition (S1), there is a wave-induced mass flux in the wave propagation direction, which has to be balanced by a mean flow U in the opposite direction – called the undertow. So in general, there are quite some subtleties involved. Therefore also the term pseudo-momentum of the waves is used instead of wave momentum. 
==== Mass and momentum evolution equations ==== For slowly varying bathymetry, wave and mean-flow fields, the evolution of the mean flow can de described in terms of the mean mass-transport velocity Ũ defined as: U ~ = U + M ρ h . {\displaystyle {\tilde {\mathbf {U} }}=\mathbf {U} +{\frac {\mathbf {M} }{\rho h}}.} Note that for deep water, when the mean depth h goes to infinity, the mean Eulerian velocity U and mean transport velocity Ũ become equal. The equation for mass conservation is: ∂ ∂ t ( ρ h ) + ∇ ⋅ ( ρ h U ~ ) = 0 , {\displaystyle {\frac {\partial }{\partial t}}\left(\rho h\right)+\nabla \cdot \left(\rho h{\tilde {\mathbf {U} }}\right)=0,} where h(x,t) is the mean water depth, slowly varying in space and time. Similarly, the mean horizontal momentum evolves as: ∂ ∂ t ( ρ h U ~ ) + ∇ ⋅ ( ρ h U ~ ⊗ U ~ + 1 2 ρ g h 2 I + S ) = ρ g h ∇ d , {\displaystyle {\frac {\partial }{\partial t}}\left(\rho h{\tilde {\mathbf {U} }}\right)+\nabla \cdot \left(\rho h{\tilde {\mathbf {U} }}\otimes {\tilde {\mathbf {U} }}+{\tfrac {1}{2}}\rho gh^{2}{\boldsymbol {I}}+{\boldsymbol {S}}\right)=\rho gh\nabla d,} with d the still-water depth (the sea bed is at z = –d), S is the wave radiation-stress tensor, I is the identity matrix and ⊗ is the dyadic product: U ~ ⊗ U ~ = ( U ~ x U ~ x U ~ x U ~ y U ~ y U ~ x U ~ y U ~ y ) . {\displaystyle {\tilde {\mathbf {U} }}\otimes {\tilde {\mathbf {U} }}={\begin{pmatrix}{\tilde {U}}_{x}{\tilde {U}}_{x}&{\tilde {U}}_{x}{\tilde {U}}_{y}\\{\tilde {U}}_{y}{\tilde {U}}_{x}&{\tilde {U}}_{y}{\tilde {U}}_{y}\end{pmatrix}}.} Note that mean horizontal momentum is only conserved if the sea bed is horizontal (the still-water depth d is a constant), in agreement with Noether's theorem. The system of equations is closed through the description of the waves. Wave energy propagation is described through the wave-action conservation equation (without dissipation and nonlinear wave interactions): ∂ ∂ t ( E σ ) + ∇ ⋅ [ ( U + c g ) E σ ] = 0. {\displaystyle {\frac {\partial }{\partial t}}\left({\frac {E}{\sigma }}\right)+\nabla \cdot \left[\left(\mathbf {U} +\mathbf {c} _{g}\right){\frac {E}{\sigma }}\right]=0.} The wave kinematics are described through the wave-crest conservation equation: ∂ k ∂ t + ∇ ω = 0 , {\displaystyle {\frac {\partial \mathbf {k} }{\partial t}}+\nabla \omega =\mathbf {0} ,} with the angular frequency ω a function of the (angular) wavenumber k, related through the dispersion relation. For this to be possible, the wave field must be coherent. By taking the curl of the wave-crest conservation, it can be seen that an initially irrotational wavenumber field stays irrotational. === Stokes drift === When following a single particle in pure wave motion (U = 0), according to linear Airy wave theory, a first approximation gives closed elliptical orbits for water particles. However, for nonlinear waves, particles exhibit a Stokes drift for which a second-order expression can be derived from the results of Airy wave theory (see the table above on second-order wave properties). The Stokes drift velocity ūS, which is the particle drift after one wave cycle divided by the period, can be estimated using the results of linear theory: u ¯ S = 1 2 σ k a 2 cosh ⁡ 2 k ( z + h ) sinh 2 ⁡ k h e k , {\displaystyle {\bar {\mathbf {u} }}_{S}={\tfrac {1}{2}}\sigma ka^{2}{\frac {\cosh 2k(z+h)}{\sinh ^{2}kh}}\mathbf {e} _{k},} so it varies as a function of elevation. The given formula is for Stokes first definition of wave celerity. 
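The statement immediately below – that integrating ρūS over depth recovers the wave momentum M = E/cp – can be checked numerically. The following sketch does so with assumed example values for amplitude, wavelength and depth, taking σ from the linear dispersion relation.

```python
import math

G = 9.81  # m/s^2

def stokes_drift(z, amplitude, k, depth):
    """Second-order Stokes drift velocity built from linear (Airy) wave quantities:
    u_S(z) = (1/2) * sigma * k * a^2 * cosh(2k(z+h)) / sinh(kh)^2,
    with sigma from the dispersion relation sigma^2 = g k tanh(k h)."""
    sigma = math.sqrt(G * k * math.tanh(k * depth))
    return (0.5 * sigma * k * amplitude ** 2
            * math.cosh(2.0 * k * (z + depth)) / math.sinh(k * depth) ** 2)

if __name__ == "__main__":
    rho, a, depth = 1025.0, 1.0, 20.0   # example values
    k = 2.0 * math.pi / 80.0            # 80 m wavelength (example)
    sigma = math.sqrt(G * k * math.tanh(k * depth))

    # Depth-integrate rho * u_S with the trapezoidal rule ...
    n = 2000
    zs = [-depth + i * depth / n for i in range(n + 1)]
    us = [rho * stokes_drift(z, a, k, depth) for z in zs]
    integral = sum(0.5 * (us[i] + us[i + 1]) * (zs[i + 1] - zs[i]) for i in range(n))

    # ... and compare with the wave momentum M = E / c_p, with E = (1/2) rho g a^2.
    energy = 0.5 * rho * G * a ** 2
    momentum = energy / (sigma / k)
    print(f"integrated rho*u_S: {integral:.2f} kg/(m s)")
    print(f"wave momentum E/cp: {momentum:.2f} kg/(m s)")
```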
When ρūS is integrated over depth, the expression for the mean wave momentum M is recovered. == See also == Boussinesq approximation (water waves) – nonlinear theory for waves in shallow water. Capillary wave – surface waves under the action of surface tension Cnoidal wave – nonlinear periodic waves in shallow water, solutions of the Korteweg–de Vries equation Mild-slope equation – refraction and diffraction of surface waves over varying depth Ocean surface wave – real water waves as seen in the ocean and sea Stokes wave – nonlinear periodic waves in non-shallow water Wave power – using ocean and sea waves for power generation. == Notes == == References == === Historical === Airy, G. B. (1841). "Tides and waves". In Hugh James Rose; et al. (eds.). Encyclopædia Metropolitana. Mixed Sciences. Vol. 3 (published 1817–1845). Also: "Trigonometry, On the Figure of the Earth, Tides and Waves", 396 pp. Stokes, G. G. (1847). "On the theory of oscillatory waves". Transactions of the Cambridge Philosophical Society. 8: 441–455.Reprinted in: Stokes, G. G. (1880). Mathematical and Physical Papers, Volume I. Cambridge University Press. pp. 197–229. === Further reading === Craik, A. D. D. (2004). "The origins of water wave theory". Annual Review of Fluid Mechanics. 36: 1–28. Bibcode:2004AnRFM..36....1C. doi:10.1146/annurev.fluid.36.050802.122118. Dean, R. G.; Dalrymple, R. A. (1991). Water wave mechanics for engineers and scientists. Advanced Series on Ocean Engineering. Vol. 2. Singapore: World Scientific. ISBN 978-981-02-0420-4. OCLC 22907242. Dingemans, M. W. (1997). Water wave propagation over uneven bottoms. Advanced Series on Ocean Engineering. Vol. 13. Singapore: World Scientific. ISBN 978-981-02-0427-3. OCLC 36126836. Two parts, 967 pages. Lamb, H. (1994). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 978-0-521-45868-9. OCLC 30070401. Originally published in 1879, the 6th extended edition appeared first in 1932. Landau, L. D.; Lifschitz, E. M. (1986). Fluid mechanics. Course of Theoretical Physics. Vol. 6 (2nd revised ed.). Pergamon Press. ISBN 978-0-08-033932-0. OCLC 15017127. Lighthill, M. J. (1978). Waves in fluids. Cambridge University Press. ISBN 978-0-521-29233-7. OCLC 2966533. 504 pp. Phillips, O. M. (1977). The dynamics of the upper ocean (2nd ed.). Cambridge University Press. ISBN 978-0-521-29801-8. OCLC 7319931. Wehausen, J. V. & Laitone, E. V. (1960), Flügge, S. & Truesdell, C. (eds.), "Surface Waves", Encyclopaedia of Physics, 9, Springer Verlag: 653–667, §27, OCLC 612422741, archived from the original on 2013-05-21, retrieved 2013-05-05 == External links == Linear theory of ocean surface waves on WikiWaves. Water waves at MIT.
Wikipedia/Airy_wave_theory
Ocean acoustic tomography is a technique used to measure temperatures and currents over large regions of the ocean. On ocean basin scales, this technique is also known as acoustic thermometry. The technique relies on precisely measuring the time it takes sound signals to travel between two instruments, one an acoustic source and one a receiver, separated by ranges of 100–5,000 kilometres (54–2,700 nmi). If the locations of the instruments are known precisely, the measurement of time-of-flight can be used to infer the speed of sound, averaged over the acoustic path. Changes in the speed of sound are primarily caused by changes in the temperature of the ocean, hence the measurement of the travel times is equivalent to a measurement of temperature. A 1 °C (1.8 °F) change in temperature corresponds to about 4 metres per second (13 ft/s) change in sound speed. An oceanographic experiment employing tomography typically uses several source-receiver pairs in a moored array that measures an area of ocean. == Motivation == Seawater is an electrical conductor, so the oceans are opaque to electromagnetic energy (e.g., light or radar). The oceans are fairly transparent to low-frequency acoustics, however. The oceans conduct sound very efficiently, particularly sound at low frequencies, i.e., less than a few hundred hertz. These properties motivated Walter Munk and Carl Wunsch to suggest "acoustic tomography" for ocean measurement in the late 1970s. The advantages of the acoustical approach to measuring temperature are twofold. First, large areas of the ocean's interior can be measured by remote sensing. Second, the technique naturally averages over the small scale fluctuations of temperature (i.e., noise) that dominate ocean variability. From its beginning, the idea of observations of the ocean by acoustics was married to estimation of the ocean's state using modern numerical ocean models and the techniques assimilating data into numerical models. As the observational technique has matured, so too have the methods of data assimilation and the computing power required to perform those calculations. == Multipath arrivals and tomography == One of the intriguing aspects of tomography is that it exploits the fact that acoustic signals travel along a set of generally stable ray paths. From a single transmitted acoustic signal, this set of rays gives rise to multiple arrivals at the receiver, the travel time of each arrival corresponding to a particular ray path. The earliest arrivals correspond to the deeper-traveling rays, since these rays travel where sound speed is greatest. The ray paths are easily calculated using computers ("ray tracing"), and each ray path can generally be identified with a particular travel time. The multiple travel times measure the sound speed averaged over each of the multiple acoustic paths. These measurements make it possible to infer aspects of the structure of temperature or current variations as a function of depth. The solution for sound speed, hence temperature, from the acoustic travel times is an inverse problem. == The integrating property of long-range acoustic measurements == Ocean acoustic tomography integrates temperature variations over large distances, that is, the measured travel times result from the accumulated effects of all the temperature variations along the acoustic path, hence measurements by the technique are inherently averaging. 
This is an important, unique property, since the ubiquitous small-scale turbulent and internal-wave features of the ocean usually dominate the signals in measurements at single points. For example, measurements by thermometers (i.e., moored thermistors or Argo drifting floats) have to contend with this 1-2 °C noise, so that large numbers of instruments are required to obtain an accurate measure of average temperature. For measuring the average temperature of ocean basins, therefore, the acoustic measurement is quite cost effective. Tomographic measurements also average variability over depth as well, since the ray paths cycle throughout the water column. == Reciprocal tomography == "Reciprocal tomography" employs the simultaneous transmissions between two acoustic transceivers. A "transceiver" is an instrument incorporating both an acoustic source and a receiver. The slight differences in travel time between the reciprocally-traveling signals are used to measure ocean currents, since the reciprocal signals travel with and against the current. The average of these reciprocal travel times is the measure of temperature, with the small effects from ocean currents entirely removed. Ocean temperatures are inferred from the sum of reciprocal travel times, while the currents are inferred from the difference of reciprocal travel times. Generally, ocean currents (typically 10 cm/s (3.9 in/s)) have a much smaller effect on travel times than sound speed variations (typically 5 m/s (16 ft/s)), so "one-way" tomography measures temperature to good approximation. == Applications == In the ocean, large-scale temperature changes can occur over time intervals from minutes (internal waves) to decades (oceanic climate change). Tomography has been employed to measure variability over this wide range of temporal scales and over a wide range of spatial scales. Indeed, tomography has been contemplated as a measurement of ocean climate using transmissions over antipodal distances. Tomography has come to be a valuable method of ocean observation, exploiting the characteristics of long-range acoustic propagation to obtain synoptic measurements of average ocean temperature or current. One of the earliest applications of tomography in ocean observation occurred in 1988-9. A collaboration between groups at the Scripps Institution of Oceanography and the Woods Hole Oceanographic Institution deployed a six-element tomographic array in the abyssal plain of the Greenland Sea gyre to study deep water formation and the gyre circulation. Other applications include the measurement of ocean tides, and the estimation of ocean mesoscale dynamics by combining tomography, satellite altimetry, and in situ data with ocean dynamical models. In addition to the decade-long measurements obtained in the North Pacific, acoustic thermometry has been employed to measure temperature changes of the upper layers of the Arctic Ocean basins, which continues to be an area of active interest. Acoustic thermometry was also recently been used to determine changes to global-scale ocean temperatures using data from acoustic pulses sent from one end of the Earth to the other. == Acoustic thermometry == Acoustic thermometry is an idea to observe the world's ocean basins, and the ocean climate in particular, using trans-basin acoustic transmissions. "Thermometry", rather than "tomography", has been used to indicate basin-scale or global scale measurements. 
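As a rough numerical illustration of the relations described in the preceding sections – travel-time changes mapped to path-averaged temperature via the ≈4 m/s per °C sensitivity, and reciprocal travel-time differences mapped to currents – the sketch below uses made-up example numbers; it is not an actual tomographic inversion, which would combine many ray paths.

```python
# Back-of-the-envelope tomography arithmetic (illustrative only).

RANGE_M = 1.0e6   # source-receiver range: 1000 km (example)
C_REF = 1500.0    # reference sound speed, m/s (typical value)
DC_DT = 4.0       # sound-speed change per degree C, m/s (from the text)

def mean_temperature_change(travel_time_change_s, range_m=RANGE_M, c_ref=C_REF):
    """Path-averaged temperature change from a travel-time change:
    t = L/c  =>  dt = -L dc / c^2  =>  dc = -dt c^2 / L, and dT = dc / (dc/dT)."""
    dc = -travel_time_change_s * c_ref ** 2 / range_m
    return dc / DC_DT

def current_from_reciprocal(t_with, t_against, range_m=RANGE_M):
    """Path-averaged current from reciprocal travel times:
    t_with = L/(c+u), t_against = L/(c-u)  =>  u = (L/2)(1/t_with - 1/t_against)."""
    return 0.5 * range_m * (1.0 / t_with - 1.0 / t_against)

if __name__ == "__main__":
    # A 100 ms earlier arrival over 1000 km corresponds to a small warming:
    print(f"dT ~ {mean_temperature_change(-0.100):.3f} deg C")
    # Reciprocal travel times differing by ~90 ms reveal a ~10 cm/s current:
    u_true = 0.10
    t1, t2 = RANGE_M / (C_REF + u_true), RANGE_M / (C_REF - u_true)
    print(f"u ~ {current_from_reciprocal(t1, t2):.3f} m/s")
```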
Prototype measurements of temperature have been made in the North Pacific Basin and across the Arctic Basin. Starting in 1983, John Spiesberger of the Woods Hole Oceanographic Institution, and Ted Birdsall and Kurt Metzger of the University of Michigan developed the use of sound to infer information about the ocean's large-scale temperatures, and in particular to attempt the detection of global warming in the ocean. This group transmitted sounds from Oahu that were recorded at about ten receivers stationed around the rim of the Pacific Ocean over distances of 4,000 km (2,500 mi). These experiments demonstrated that changes in temperature could be measured with an accuracy of about 20 millidegrees. Spiesberger et al. did not detect global warming. Instead they discovered that other natural climatic fluctuations, such as El Nino, were responsible in part for substantial fluctuations in temperature that may have masked any slower and smaller trends that may have occurred from global warming. The Acoustic Thermometry of Ocean Climate (ATOC) program was implemented in the North Pacific Ocean, with acoustic transmissions from 1996 through fall 2006. The measurements terminated when agreed-upon environmental protocols ended. The decade-long deployment of the acoustic source showed that the observations are sustainable on even a modest budget. The transmissions have been verified to provide an accurate measurement of ocean temperature on the acoustic paths, with uncertainties that are far smaller than any other approach to ocean temperature measurement. Repeating earthquakes acting as naturally-occurring acoustic sources have also been used in acoustic thermometry, which may be particularly useful for inferring temperature variability in the deep ocean which is presently poorly sampled by in-situ instruments. == Acoustic transmissions and marine mammals == The ATOC project was embroiled in issues concerning the effects of acoustics on marine mammals (e.g. whales, porpoises, sea lions, etc.). Public discussion was complicated by technical issues from a variety of disciplines (physical oceanography, acoustics, marine mammal biology, etc.) that makes understanding the effects of acoustics on marine mammals difficult for the experts, let alone the general public. Many of the issues concerning acoustics in the ocean and their effects on marine mammals were unknown. Finally, there were a variety of public misconceptions initially, such as a confusion of the definition of sound levels in air vs. sound levels in water. If a given number of decibels in water are interpreted as decibels in air, the sound level will seem to be orders of magnitude larger than it really is - at one point the ATOC sound levels were erroneously interpreted as so loud the signals would kill 500,000 animals. The sound power employed, 250 W, was comparable those made by blue or fin whales, although those whales vocalize at much lower frequencies. The ocean carries sound so efficiently that sounds do not have to be that loud to cross ocean basins. Other factors in the controversy were the extensive history of activism where marine mammals are concerned, stemming from the ongoing whaling conflict, and the sympathy that much of the public feels toward marine mammals. As a result of this controversy, the ATOC program conducted a $6 million study of the effects of the acoustic transmissions on a variety of marine mammals. 
The acoustic source was mounted on the bottom about a half mile deep, hence marine mammals, which are bound to the surface, were generally further than a half mile from the source. The source level was modest, less than the sound level of large whales, and the duty cycle was 2% (i.e., the sound is on only 2% of the day). After six years of study the official, formal conclusion from this study was that the ATOC transmissions have "no biologically significant effects". Other acoustics activities in the ocean may not be so benign insofar as marine mammals are concerned. Various types of man-made sounds have been studied as potential threats to marine mammals, such as airgun shots for geophysical surveys, or transmissions by the U.S. Navy for various purposes. The actual threat depends on a variety of factors beyond noise levels: sound frequency, frequency and duration of transmissions, the nature of the acoustic signal (e.g., a sudden pulse, or coded sequence), depth of the sound source, directionality of the sound source, water depth and local topography, reverberation, etc. == Types of transmitted acoustic signals == Tomographic transmissions consist of long coded signals (e.g., "m-sequences") lasting 30 seconds or more. The frequencies employed range from 50 to 1000 Hz and source powers range from 100 to 250 W, depending on the particular goals of the measurements. With precise timing such as from GPS, travel times can be measured to a nominal accuracy of 1 millisecond. While these transmissions are audible near the source, beyond a range of several kilometers the signals are usually below ambient noise levels, requiring sophisticated spread-spectrum signal processing techniques to recover them. == See also == Acoustical oceanography Ray tracing SOFAR channel SOSUS Speed of sound TOPEX/Poseidon satellite altimetry Underwater acoustics == References == == Further reading == B. D. Dushaw, 2013. "Ocean Acoustic Tomography" in Encyclopedia of Remote Sensing, E. G. Njoku, Ed., Springer, Springer-Verlag Berlin Heidelberg, 2013. ISBN 978-0-387-36698-2. W. Munk, P. Worcester, and C. Wunsch (1995). Ocean Acoustic Tomography. Cambridge: Cambridge University Press. ISBN 0-521-47095-1. P. F. Worcester, 2001: "Tomography," in Encyclopedia of Ocean Sciences, J. Steele, S. Thorpe, and K. Turekian, Eds., Academic Press Ltd., 2969–2986. == External links == [1] Oceans toolbox for Matlab by Rich Pawlowicz. Ocean Acoustics Lab (OAL) - the Woods Hole Oceanographic Institution. The North Pacific Acoustic Laboratory (NPAL) - the Scripps Institution of Oceanography, La Jolla, CA. Acoustic Thermometry of Ocean Climate - the Scripps Institution of Oceanography, La Jolla, CA. Discovery of Sound in the Sea - DOSITS is an educational website concerned with acoustics in the ocean. Sounds of acoustic signals employed for tomography - the DOSITS web page. A day in the life of a tomography mooring - University of Washington, Seattle, WA. Sounding Out the Ocean's Secrets - National Academy of Sciences. Sound Measures the Ocean's Secrets - Acoustical Society of America. The Acoustic Thermometry of Ocean Climate/Marine Mammal Research Program Cornell University Laboratory of Ornithology, Bioacoustics Research Program
Wikipedia/Ocean_acoustic_tomography
Ocean surface topography or sea surface topography, also called ocean dynamic topography, refers to the highs and lows on the ocean surface, similar to the hills and valleys of Earth's land surface depicted on a topographic map. These variations are expressed in terms of average sea surface height (SSH) relative to Earth's geoid. The main purpose of measuring ocean surface topography is to understand the large-scale ocean circulation. == Time variations == Unaveraged or instantaneous sea surface height (SSH) is most obviously affected by the tidal forces of the Moon and by the seasonal cycle of the Sun acting on Earth. Over timescales longer than a year, the patterns in SSH can be influenced by ocean circulation. Typically, SSH anomalies resulting from these forces differ from the mean by less than ±1 m (3 ft) at the global scale. Other influences include changing interannual patterns of temperature, salinity, waves, tides and winds. Ocean surface topography can be measured with high accuracy and precision at regional to global scale by satellite altimetry (e.g. TOPEX/Poseidon). Slower and larger variations are due to changes in Earth's gravitational field (geoid) due to melting ice, rearrangement of continents, formation of sea mounts and other redistribution of rock. The combination of satellite gravimetry (e.g. GRACE and GRACE-FO) with altimetry can be used to determine sea level rise and properties such as ocean heat content. == Applications == Ocean surface topography is used to map ocean currents, which move around the ocean's "hills" and "valleys" in predictable ways. A clockwise sense of rotation is found around "hills" in the northern hemisphere and "valleys" in the southern hemisphere. This is because of the Coriolis effect. Conversely, a counterclockwise sense of rotation is found around "valleys" in the northern hemisphere and "hills" in the southern hemisphere. Ocean surface topography is also used to understand how the ocean moves heat around the globe, a critical component of Earth's climate, and for monitoring changes in global sea level. The collected data provide long-term information about the ocean and its currents. According to NASA, these data can also be used to improve understanding of weather and climate and to support navigation, fisheries management, and offshore operations. The observations are used to study ocean tides, circulation, and the amount of heat the ocean contains. These observations can help predict short- and long-term changes in weather and in Earth's climate. == Measurement == The sea surface height (SSH) is calculated by altimetry satellites, using the ellipsoid as a reference surface; the satellites determine the distance to a target surface by measuring the satellite-to-surface round-trip time of a radar pulse. The satellites thus measure the distance between their orbital altitude and the surface of the water. Because the depth of the ocean varies, an approximation is made; the comparatively uniform sea surface allows the data to be taken precisely. The satellite's altitude then has to be calculated with respect to the reference ellipsoid. It is calculated using the orbital parameters of the satellite and various positioning instruments. However, the ellipsoid is not an equipotential surface of the Earth's gravity field, so the measurements must be referenced to a surface that represents the water flow, in this case the geoid. 
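The bookkeeping described in the Measurement section – a radar range subtracted from the satellite's altitude above the reference ellipsoid, then referenced to the geoid – can be sketched as follows. The altitude, range and geoid values are arbitrary examples, and the final function adds a simple surface geostrophic estimate, |u| = (g/f)|∇η|, expressing the flow around the "hills" and "valleys" mentioned under Applications.

```python
import math

G = 9.81                 # m/s^2
OMEGA_EARTH = 7.2921e-5  # Earth's rotation rate, rad/s

def sea_surface_height(satellite_altitude_m, altimeter_range_m):
    """SSH above the reference ellipsoid = orbital altitude minus radar range."""
    return satellite_altitude_m - altimeter_range_m

def dynamic_topography(ssh_m, geoid_height_m):
    """Ocean dynamic topography = SSH minus geoid height (both vs. the ellipsoid)."""
    return ssh_m - geoid_height_m

def geostrophic_speed(topography_slope, latitude_deg):
    """Surface geostrophic current implied by a dynamic-topography slope:
    |u| = (g / f) * |grad(eta)|, with f = 2 * Omega * sin(latitude)."""
    f = 2.0 * OMEGA_EARTH * math.sin(math.radians(latitude_deg))
    return G * topography_slope / f

if __name__ == "__main__":
    # Example values (not mission data): orbit altitude, measured range, geoid height.
    ssh = sea_surface_height(1_336_000.0, 1_335_954.3)
    eta = dynamic_topography(ssh, geoid_height_m=45.0)
    print(f"SSH = {ssh:.1f} m above ellipsoid, dynamic topography = {eta:.1f} m")
    # A 0.5 m rise in topography over 100 km at 35 degrees latitude:
    print(f"geostrophic speed ~ {geostrophic_speed(0.5 / 100_000.0, 35.0):.2f} m/s")
```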
The transformations between geometric heights (ellipsoid) and orthometric heights (geoid) are performed using a geoid model. The sea surface height is then the difference between the satellite's altitude relative to the reference ellipsoid and the altimeter range. The satellite sends microwave pulses to the ocean surface; the travel time of the pulses down to the ocean surface and back provides the range used to compute the sea surface height. Jason-1, for example, used this measurement system. == Satellite missions == Currently there are nine different satellites measuring ocean surface topography: CryoSat-2, SARAL, Jason-3, Sentinel-3A and Sentinel-3B, CFOSat, HY-2B and HY-2C, and Sentinel-6 Michael Freilich (also called Jason-CS A). Jason-3 and Sentinel-6 Michael Freilich are currently both orbiting Earth in tandem. They are approximately 330 kilometers apart. Ocean surface topography can be derived from ship-going measurements of temperature and salinity at depth. However, since 1992, a series of satellite altimetry missions, beginning with TOPEX/Poseidon and continued with Jason-1, the Ocean Surface Topography Mission on the Jason-2 satellite, Jason-3 and now Sentinel-6 Michael Freilich, have measured sea surface height directly. By combining these measurements with gravity measurements from NASA's GRACE and ESA's GOCE missions, scientists can determine sea surface topography to within a few centimeters. Jason-1 was launched by a Boeing Delta II rocket in California in 2001 and continued the measurements initially collected by the TOPEX/Poseidon satellite, which orbited from 1992 until 2006. NASA and CNES, the French space agency, are joint partners in this mission. The main objective of the Jason satellites is to collect data on the average ocean circulation around the globe, in order to better understand its interaction with the time-varying components and the mechanisms involved, and to initialize ocean models. They also monitor low-frequency ocean variability and observe the seasonal cycles and inter-annual variations such as El Niño and La Niña, the North Atlantic Oscillation, the Pacific Decadal Oscillation, and planetary waves crossing the oceans over periods of months; the precise altimetric observations allow these to be modeled over long periods of time. The missions further contribute to observations of mesoscale ocean variability, which affects all of the oceans and is especially intense near western boundary currents. They also monitor the average sea level, an important indicator of global warming. Tide modeling is improved by observing long-period components such as coastal interactions, internal waves, and tidal energy dissipation. Finally, the satellite data supply knowledge that supports marine meteorology, the study of the atmosphere and weather over the oceans. Jason-2 was launched on June 20, 2008, by a Delta II rocket from Vandenberg, California, and ended its mission on October 10, 2019. Jason-3 was launched on January 16, 2016, by a SpaceX Falcon 9 rocket from Vandenberg; Sentinel-6 Michael Freilich followed on November 21, 2020. The long-term objectives of the Jason satellite series are to provide global descriptions of the seasonal and yearly changes of the circulation and heat storage in the ocean. This includes the study of short-term climatic changes such as El Niño and La Niña. The satellites measure the global mean sea level and record its fluctuations. 
They also detect the slow change of upper ocean circulation on decadal time scales, study the transport of heat and carbon in the ocean, and examine the main components that drive deep water tides. The satellites' data also help improve measurements of wind speed and wave height, both in near real time and for long-term studies. Lastly, they improve our knowledge of the marine geoid. For the first seven months after Jason-2 was put into use, it was flown in close proximity to Jason-1. Being only one minute apart, the satellites observed the same area of the ocean. The reason for this close proximity was cross-calibration, intended to quantify any bias between the two altimeters. This multiple-month observation showed that there was no bias in the data and that both collections of data were consistent. A new satellite mission called the Surface Water Ocean Topography Mission has been proposed to make the first global survey of the topography of all of Earth's surface water—the ocean, lakes and rivers. The mission aims to provide a comprehensive view of Earth's freshwater bodies from space and much more detailed measurements of the ocean surface than ever before. == See also == Dynamic topography Eddy (fluid dynamics) SARAL Sea surface microlayer == References == == External links == Ocean Surface Topography from Space OSTM Instrument Description
Wikipedia/Ocean_surface_topography
Cuprate superconductors are a family of high-temperature superconducting materials made of layers of copper oxides (CuO2) alternating with layers of other metal oxides, which act as charge reservoirs. At ambient pressure, cuprate superconductors are the highest temperature superconductors known. Cuprates have a structure close to that of a two-dimensional material. Their superconducting properties are determined by electrons moving within weakly coupled copper-oxide (CuO2) layers. Neighbouring layers contain ions such as lanthanum, barium, strontium, or other atoms that act to stabilize the structures and dope electrons or holes onto the copper-oxide layers. The undoped "parent" or "mother" compounds are Mott insulators with long-range antiferromagnetic order at sufficiently low temperatures. Single band models are generally considered to be enough to describe the electronic properties. The cuprate superconductors adopt a perovskite structure. The copper-oxide planes are checkerboard lattices with squares of O2− ions with a Cu2+ ion at the centre of each square. The unit cell is rotated by 45° from these squares. Chemical formulae of superconducting materials contain fractional numbers to describe the doping required for superconductivity. Several families of cuprate superconductors have been identified. They can be categorized by their elements and the number of adjacent copper-oxide layers in each superconducting block. For example, YBCO and BSCCO can be referred to as Y123 and Bi2201/Bi2212/Bi2223 depending on the number of layers in each superconducting block (n). The superconducting transition temperature peaks at an optimal doping value (p=0.16) and an optimal number of layers in each block, typically three. Possible mechanisms for cuprate superconductivity remain the subject of considerable debate and research. Similarities between the low-temperature state of undoped materials and the superconducting state that emerges upon doping, primarily the dx2−y2 orbital state of the Cu2+ ions, suggest that electron–electron interactions are more significant than electron–phonon interactions in cuprates – making the superconductivity unconventional. Recent work on the Fermi surface has shown that nesting occurs at four points in the antiferromagnetic Brillouin zone where spin waves exist and that the superconducting energy gap is larger at these points. The weak isotope effects observed for most cuprates contrast with conventional superconductors that are well described by BCS theory. == Types == === Yttrium–barium cuprate === An yttrium–barium cuprate, YBa2Cu3O7−x (or Y123), was the first superconductor found above liquid nitrogen boiling point. There are two atoms of Barium for each atom of Yttrium. The proportions of the three different metals in the YBa2Cu3O7 superconductor are in the mole ratio of 1 to 2 to 3 for yttrium to barium to copper, respectively: this particular superconductor has also often been referred to as the 123 superconductor. The unit cell of YBa2Cu3O7 consists of three perovskite unit cells, which is pseudocubic, nearly orthorhombic. The other superconducting cuprates have another structure: they have a tetragonal cell. Each perovskite cell contains a Y or Ba atom at the center: Ba in the bottom unit cell, Y in the middle one, and Ba in the top unit cell. Thus, Y and Ba are stacked in the sequence [Ba–Y–Ba] along the c-axis. All corner sites of the unit cell are occupied by Cu, which has two different coordinations, Cu(1) and Cu(2), with respect to oxygen. 
There are four possible crystallographic sites for oxygen: O(1), O(2), O(3) and O(4). The coordination polyhedra of Y and Ba with respect to oxygen are different. The tripling of the perovskite unit cell leads to nine oxygen atoms, whereas YBa2Cu3O7 has seven oxygen atoms and, therefore, is referred to as an oxygen-deficient perovskite structure. The structure has a stacking of different layers: (CuO)(BaO)(CuO2)(Y)(CuO2)(BaO)(CuO). One of the key features of the unit cell of YBa2Cu3O7−x (YBCO) is the presence of two layers of CuO2. The role of the Y plane is to serve as a spacer between two CuO2 planes. In YBCO, the Cu–O chains are known to play an important role in superconductivity. Tc is maximal near 92 K (−181.2 °C) when x ≈ 0.15 and the structure is orthorhombic. Superconductivity disappears at x ≈ 0.6, where the structural transformation of YBCO occurs from orthorhombic to tetragonal. === Other cuprates === The preparation of other cuprates is more difficult than the YBCO preparation. They also have a different crystal structure: they are tetragonal where YBCO is orthorhombic. Problems in these superconductors arise because of the existence of three or more phases having a similar layered structure. Moreover, the crystal structures of other tested cuprate superconductors are very similar. Like YBCO, the perovskite-type feature and the presence of simple copper oxide (CuO2) layers also exist in these superconductors. However, unlike YBCO, Cu–O chains are not present in these superconductors. The YBCO superconductor has an orthorhombic structure, whereas the other high-Tc superconductors have a tetragonal structure. There are three main classes of superconducting cuprates: bismuth-based, thallium-based and mercury-based. The second cuprate of practical importance is currently BSCCO, a compound of Bi–Sr–Ca–Cu–O. The content of bismuth and strontium creates some chemical issues. It has three superconducting phases forming a homologous series as Bi2Sr2Can−1CunO4+2n+x (n=1, 2 and 3). These three phases are Bi-2201, Bi-2212 and Bi-2223, having transition temperatures of 20 K (−253.2 °C), 85 K (−188.2 °C) and 110 K (−163.2 °C), respectively, where the numbering system represents the number of atoms of Bi, Sr, Ca and Cu, respectively. The two phases have a tetragonal structure which consists of two sheared crystallographic unit cells. The unit cell of these phases has double Bi–O planes which are stacked in a way that the Bi atom of one plane sits below the oxygen atom of the next consecutive plane. The Ca atom forms a layer within the interior of the CuO2 layers in both Bi-2212 and Bi-2223; there is no Ca layer in the Bi-2201 phase. The three phases differ from each other in the number of cuprate planes; the Bi-2201, Bi-2212 and Bi-2223 phases have one, two and three CuO2 planes, respectively. The c-axis lattice constants of these phases increase with the number of cuprate planes (see table below). The coordination of the Cu atom is different in the three phases. The Cu atom forms an octahedral coordination with respect to oxygen atoms in the 2201 phase, whereas in 2212, the Cu atom is surrounded by five oxygen atoms in a pyramidal arrangement. In the 2223 structure, Cu has two coordinations with respect to oxygen: one Cu atom is bonded with four oxygen atoms in a square planar configuration and another Cu atom is coordinated with five oxygen atoms in a pyramidal arrangement. 
==== Cuprate of Tl–Ba–Ca ==== The first series of the Tl-based superconductor containing one Tl–O layer has the general formula TlBa2Can−1CunO2n+3, whereas the second series containing two Tl–O layers has a formula of Tl2Ba2Can−1CunO2n+4 with n = 1, 2 and 3. In the structure of Tl2Ba2CuO6 (Tl-2201), there is one CuO2 layer with the stacking sequence (Tl–O) (Tl–O) (Ba–O) (Cu–O) (Ba–O) (Tl–O) (Tl–O). In Tl2Ba2CaCu2O8 (Tl-2212), there are two Cu–O layers with a Ca layer in between. Similar to the Tl2Ba2CuO6 structure, Tl–O layers are present outside the Ba–O layers. In Tl2Ba2Ca2Cu3O10 (Tl-2223), there are three CuO2 layers enclosing Ca layers between each of these. In Tl-based superconductors, Tc is found to increase with the number of CuO2 layers. However, the value of Tc decreases after four CuO2 layers in TlBa2Can−1CunO2n+3, and in the Tl2Ba2Can−1CunO2n+4 compound, it decreases after three CuO2 layers. ==== Cuprate of Hg–Ba–Ca ==== The crystal structure of HgBa2CuO4 (Hg-1201), HgBa2CaCu2O6 (Hg-1212) and HgBa2Ca2Cu3O8 (Hg-1223) is similar to that of Tl-1201, Tl-1212 and Tl-1223, with Hg in place of Tl. It is noteworthy that the Tc of the Hg compound (Hg-1201) containing one CuO2 layer is much larger than that of the one-CuO2-layer compound of thallium (Tl-1201). In the Hg-based superconductors, Tc is also found to increase as the number of CuO2 layers increases. For Hg-1201, Hg-1212 and Hg-1223, the values of Tc are 94 K, 128 K and 134 K (−139 °C) respectively, the last being the record value at ambient pressure, as shown in the table below. The observation that the Tc of Hg-1223 increases to 153 K (−120 °C) under high pressure indicates that the Tc of this compound is very sensitive to the structure of the compound. == Preparation and manufacturing == The simplest method for preparing ceramic superconductors is a solid-state thermochemical reaction involving mixing, calcination and sintering. The appropriate amounts of precursor powders, usually oxides and carbonates, are mixed thoroughly using a ball mill. Solution chemistry processes such as coprecipitation, freeze-drying and sol–gel methods are alternative ways for preparing a homogeneous mixture. These powders are calcined in the temperature range from 1,070 to 1,220 K (800 to 950 °C) for several hours. The powders are cooled, reground and calcined again. This process is repeated several times to obtain a homogeneous material. The powders are subsequently compacted to pellets and sintered. The sintering environment (temperature, annealing time, atmosphere and cooling rate) plays a very important role in getting good high-Tc superconducting materials. The YBa2Cu3O7−x compound is prepared by calcination and sintering of a homogeneous mixture of Y2O3, BaCO3 and CuO in the appropriate atomic ratio (a worked stoichiometry sketch is given below). Calcination is done at 1,070 to 1,220 K (800 to 950 °C), whereas sintering is done at 1,220 K (950 °C) in an oxygen atmosphere. The oxygen stoichiometry in this material is crucial for obtaining a superconducting YBa2Cu3O7−x compound. At the time of sintering, the semiconducting tetragonal YBa2Cu3O6 compound is formed, which, on slow cooling in an oxygen atmosphere, turns into superconducting YBa2Cu3O7−x. The uptake and loss of oxygen are reversible in YBa2Cu3O7−x. A fully oxygenated orthorhombic YBa2Cu3O7−x sample can be transformed into tetragonal YBa2Cu3O6 by heating in a vacuum at temperatures above 973 K (700 °C). The preparation of Bi-, Tl- and Hg-based high-Tc superconductors is more difficult than the YBCO preparation. 
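To make the 1:2:3 atomic ratio concrete, the following Python sketch computes the precursor masses of Y2O3, BaCO3 and CuO needed for a small illustrative batch of YBa2Cu3O7. It is only a back-of-the-envelope stoichiometry calculation, not a laboratory recipe; the 10 g batch size is an arbitrary assumption and the molar masses are rounded standard values.

```python
# Molar masses in g/mol (rounded standard values)
M = {"Y2O3": 225.81, "BaCO3": 197.34, "CuO": 79.55, "YBa2Cu3O7": 666.19}

target_mass = 10.0                          # grams of YBa2Cu3O7 to prepare (assumed batch size)
n_product = target_mass / M["YBa2Cu3O7"]    # moles of product

# Per formula unit of YBa2Cu3O7 one needs 1 Y, 2 Ba and 3 Cu,
# i.e. 1/2 mole of Y2O3, 2 moles of BaCO3 and 3 moles of CuO.
recipe = {"Y2O3": 0.5, "BaCO3": 2.0, "CuO": 3.0}

for precursor, moles_per_formula_unit in recipe.items():
    grams = moles_per_formula_unit * n_product * M[precursor]
    print(f"{precursor}: {grams:.2f} g")

# Output: roughly 1.69 g Y2O3, 5.92 g BaCO3 and 3.58 g CuO for a 10 g batch;
# the mass difference is released as CO2 during calcination, while the final
# oxygen content is set by the sintering atmosphere.
```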
Problems in these superconductors arise because of the existence of three or more phases having a similar layered structure. Thus, syntactic intergrowth and defects such as stacking faults occur during synthesis and it becomes difficult to isolate a single superconducting phase. For Bi–Sr–Ca–Cu–O, it is relatively simple to prepare the Bi-2212 (Tc ≈ 85 K) phase, whereas it is very difficult to prepare a single phase of Bi-2223 (Tc ≈ 110 K). The Bi-2212 phase appears only after a few hours of sintering at 1,130–1,140 K (860–870 °C), but the larger fraction of the Bi-2223 phase is formed after a long reaction time of more than a week at 1,140 K (870 °C). Although the substitution of Pb in the Bi–Sr–Ca–Cu–O compound has been found to promote the growth of the high-Tc phase, a long sintering time is still required. == History == The first cuprate superconductor was found in 1986 in the non-stoichiometric cuprate lanthanum barium copper oxide by IBM researchers Georg Bednorz and Karl Alex Müller. The critical temperature for this material was 35 K, well above the previous record of 23 K. The discovery led to a sharp increase in research on the cuprates, resulting in thousands of publications between 1986 and 2001. Bednorz and Müller were awarded the Nobel Prize in Physics in 1987, only a year after their discovery. From 1986, many cuprate superconductors were identified, and can be put into three groups on a phase diagram of critical temperature vs. oxygen hole content and copper hole content: lanthanum barium copper oxide (LB–CO), TC = −240 °C (35 K). yttrium barium copper oxide (YB–CO), TC = −215 °C (93 K). bismuth strontium calcium copper oxide (BiSC–CO), TC = −180 °C (95 K). thallium barium calcium copper oxide (TBC–CO), TC = −150 °C (125 K). mercury barium calcium copper oxide (HGBC–CO), discovered in 1993, with TC = −140 °C (133 K), currently the highest cuprate critical temperature. In 2018, the full three-dimensional Fermi surface structure was derived from soft x-ray ARPES. == Structure == Cuprates are layered materials, consisting of superconducting planes of copper oxide, separated by layers containing ions such as lanthanum, barium, strontium, which act as a charge reservoir, doping electrons or holes into the copper-oxide planes. Thus the structure is described as a superlattice of superconducting CuO2 layers separated by spacer layers, resulting in a structure often closely related to the perovskite structure. Superconductivity takes place within the copper-oxide (CuO2) sheets, with only weak coupling between adjacent CuO2 planes, making the properties close to those of a two-dimensional material. Electrical currents flow within the CuO2 sheets, resulting in a large anisotropy in normal conducting and superconducting properties, with a much higher conductivity parallel to the CuO2 plane than in the perpendicular direction. Critical superconducting temperatures depend on the chemical compositions, cation substitutions and oxygen content. Chemical formulae of superconducting materials generally contain fractional numbers to describe the doping required for superconductivity. There are several families of cuprate superconductors which can be categorized by the elements they contain and the number of adjacent copper-oxide layers in each superconducting block. For example, YBCO and BSCCO can alternatively be referred to as Y123 and Bi2201/Bi2212/Bi2223 depending on the number of layers in each superconducting block (n). 
The superconducting transition temperature has been found to peak at an optimal doping value (p=0.16) and an optimal number of layers in each superconducting block, typically n=3. The undoped "parent" or "mother" compounds are Mott insulators with long-range antiferromagnetic order at sufficiently low temperatures. Single band models are generally considered to be enough to describe the electronic properties. Cuprate superconductors usually feature copper oxides in both the oxidation states 3+ and 2+. For example, YBa2Cu3O7 is described as Y3+(Ba2+)2(Cu3+)(Cu2+)2(O2−)7. The copper 2+ and 3+ ions tend to arrange themselves in a checkerboard pattern, a phenomenon known as charge ordering. All superconducting cuprates are layered materials having a complex structure described as a superlattice of superconducting CuO2 layers separated by spacer layers, where the misfit strain between different layers and dopants in the spacers induce a complex heterogeneity that in the superstripes scenario is intrinsic for high-temperature superconductivity. == Superconducting mechanism == Superconductivity in the cuprates is considered unconventional and is not explained by BCS theory. Possible pairing mechanisms for cuprate superconductivity continue to be the subject of considerable debate and further research. Similarities between the low-temperature antiferromagnetic state in undoped materials and the low-temperature superconducting state that emerges upon doping, primarily the dx2−y2 orbital state of the Cu2+ ions, suggest that electron-phonon coupling is less relevant in cuprates. Recent work on the Fermi surface has shown that nesting occurs at four points in the antiferromagnetic Brillouin zone where spin waves exist and that the superconducting energy gap is larger at these points. The weak isotope effects observed for most cuprates contrast with conventional superconductors that are well described by BCS theory. In 1987, Philip Anderson proposed that superexchange could act as a high-temperature superconductor pairing mechanism. In 2016, Chinese physicists found a correlation between a cuprate's critical temperature and the size of the charge transfer gap in that cuprate, providing support for the superexchange hypothesis. A 2022 study found that the varying density of actual Cooper pairs in a bismuth strontium calcium copper oxide superconductor matched with numerical predictions based on superexchange. But so far there is no consensus on the mechanism, and the search for an explanation continues. Similarities and differences in the properties of hole-doped and electron-doped cuprates: Presence of a pseudogap phase up to at least optimal doping. Different trends in the Uemura plot relating transition temperature to superfluid density. The inverse square of the London penetration depth appears to be proportional to the critical temperature for a large number of underdoped cuprate superconductors, but the constant of proportionality is different for hole- and electron-doped cuprates. The linear trend implies that the physics of these materials is strongly two-dimensional. Universal hourglass-shaped feature in the spin excitations of cuprates measured using inelastic neutron diffraction. Nernst effect evident in both the superconducting and pseudogap phases. The electronic structure of superconducting cuprates is highly anisotropic. 
Therefore, the Fermi surface of HTS is close to the Fermi surface of the doped CuO2 plane (or multi-planes, in the case of multi-layer cuprates) and can be presented on the 2‑D reciprocal space (or momentum space) of the CuO2 lattice. The typical Fermi surface within the first CuO2 Brillouin zone is sketched in Figure 1 (left). It can be derived from band structure calculations or measured by angle-resolved photoemission spectroscopy (ARPES). Figure 1 shows the Fermi surface of BSCCO measured by ARPES. In a wide range of charge carrier concentration (doping level), in which the hole-doped HTS are superconducting, the Fermi surface is hole-like (i.e. open, as shown in Figure 1). This results in an inherent in-plane anisotropy of the electronic properties of HTS. The structure of superconducting cuprates is often closely related to that of perovskites. Their structure has been described as a distorted, oxygen-deficient, multi-layered perovskite structure. One of the crystal structure properties of oxide superconductors is an alternating multi-layer of CuO2 planes with superconductivity between these layers. The more layers of CuO2, the higher Tc. This structure causes a large anisotropy in normal conducting and superconducting properties, since electrical currents are carried by holes induced in the oxygen sites of the CuO2 sheets. The electrical conduction features a much higher conductivity parallel to the CuO2 plane than in the perpendicular direction. Critical temperatures depend on the chemical compositions, cation substitutions and oxygen content. They can be classified as superstripes; i.e., particular realizations of superlattices at the atomic limit made of superconducting atomic layers, wires, and dots separated by spacer layers, that together give multiband and multigap superconductivity. == Applications == BSCCO superconductors already have large-scale applications. For example, tens of kilometers of BSCCO-2223 superconducting wire, operated at 77 K, are used in the current leads of the Large Hadron Collider at CERN (but the main field coils are using metallic lower temperature superconductors, mainly based on niobium–tin). == See also == Thallium barium calcium copper oxide Lanthanum barium copper oxide Bismuth strontium calcium copper oxide Superconducting wire == Bibliography == Rybicki et al., "Perspective on the phase diagram of cuprate high-temperature superconductors", University of Leipzig, 2015, doi:10.1038/ncomms11413 == References ==
Wikipedia/Cuprate_superconductor
The Fermi energy is a concept in quantum mechanics usually referring to the energy difference between the highest and lowest occupied single-particle states in a quantum system of non-interacting fermions at absolute zero temperature. In a Fermi gas, the lowest occupied state is taken to have zero kinetic energy, whereas in a metal, the lowest occupied state is typically taken to mean the bottom of the conduction band. The term "Fermi energy" is often used to refer to a different yet closely related concept, the Fermi level (also called electrochemical potential). There are a few key differences between the Fermi level and Fermi energy, at least as they are used in this article: The Fermi energy is only defined at absolute zero, while the Fermi level is defined for any temperature. The Fermi energy is an energy difference (usually corresponding to a kinetic energy), whereas the Fermi level is a total energy level including kinetic energy and potential energy. The Fermi energy can only be defined for non-interacting fermions (where the potential energy or band edge is a static, well defined quantity), whereas the Fermi level remains well defined even in complex interacting systems, at thermodynamic equilibrium. Since the Fermi level in a metal at absolute zero is the energy of the highest occupied single particle state, then the Fermi energy in a metal is the energy difference between the Fermi level and lowest occupied single-particle state, at zero-temperature. == Context == In quantum mechanics, a group of particles known as fermions (for example, electrons, protons and neutrons) obey the Pauli exclusion principle. This states that two fermions cannot occupy the same quantum state. Since an idealized non-interacting Fermi gas can be analyzed in terms of single-particle stationary states, we can thus say that two fermions cannot occupy the same stationary state. These stationary states will typically be distinct in energy. To find the ground state of the whole system, we start with an empty system, and add particles one at a time, consecutively filling up the unoccupied stationary states with the lowest energy. When all the particles have been put in, the Fermi energy is the kinetic energy of the highest occupied state. As a consequence, even if we have extracted all possible energy from a Fermi gas by cooling it to near absolute zero temperature, the fermions are still moving around at a high speed. The fastest ones are moving at a velocity corresponding to a kinetic energy equal to the Fermi energy. This speed is known as the Fermi velocity. Only when the temperature exceeds the related Fermi temperature, do the particles begin to move significantly faster than at absolute zero. The Fermi energy is an important concept in the solid state physics of metals and superconductors. It is also a very important quantity in the physics of quantum liquids like low temperature helium (both normal and superfluid 3He), and it is quite important to nuclear physics and to understanding the stability of white dwarf stars against gravitational collapse. == Formula and typical values == The Fermi energy for a three-dimensional, non-relativistic, non-interacting ensemble of identical spin-1⁄2 fermions is given by E F = ℏ 2 2 m 0 ( 3 π 2 N V ) 2 / 3 , {\displaystyle E_{\text{F}}={\frac {\hbar ^{2}}{2m_{0}}}\left({\frac {3\pi ^{2}N}{V}}\right)^{2/3},} where N is the number of particles, m0 the rest mass of each fermion, V the volume of the system, and ℏ {\displaystyle \hbar } the reduced Planck constant. 
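As a numerical illustration of the formula above, the following Python sketch evaluates the Fermi energy for an assumed conduction-electron density typical of a simple metal, together with the Fermi temperature and Fermi velocity introduced in the Context section (their defining relations are given below under Related quantities). The chosen density of 8.5 × 10^28 m^-3 is an illustrative assumption, not a value taken from this article.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m0   = 9.1093837015e-31  # electron rest mass, kg
kB   = 1.380649e-23      # Boltzmann constant, J/K
eV   = 1.602176634e-19   # joules per electronvolt

# Assumed conduction-electron density for a simple metal (illustrative value
# within the 1e28-1e29 m^-3 range typical of ordinary solids)
n = 8.5e28  # electrons per m^3

# Fermi energy of a non-interacting, non-relativistic spin-1/2 gas:
# E_F = hbar^2 / (2 m0) * (3 pi^2 n)^(2/3)
E_F = hbar**2 / (2 * m0) * (3 * math.pi**2 * n) ** (2 / 3)

# Related quantities mentioned in the text
T_F = E_F / kB                   # Fermi temperature
p_F = math.sqrt(2 * m0 * E_F)    # Fermi momentum
v_F = p_F / m0                   # Fermi velocity

print(f"E_F = {E_F / eV:.2f} eV")   # about 7 eV for this density
print(f"T_F = {T_F:.3e} K")         # about 8e4 K, far above room temperature
print(f"v_F = {v_F:.3e} m/s")       # about 1.6e6 m/s
```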
=== Metals === Under the free electron model, the electrons in a metal can be considered to form a Fermi gas. The number density N / V {\displaystyle N/V} of conduction electrons in metals ranges between approximately 1028 and 1029 electrons/m3, which is also the typical density of atoms in ordinary solid matter. This number density produces a Fermi energy of the order of 2 to 10 electronvolts. === White dwarfs === Stars known as white dwarfs have mass comparable to the Sun, but have about a hundredth of its radius. The high densities mean that the electrons are no longer bound to single nuclei and instead form a degenerate electron gas. Their Fermi energy is about 0.3 MeV. === Nucleus === Another typical example is that of the nucleons in the nucleus of an atom. The radius of the nucleus admits deviations, so a typical value for the Fermi energy is usually given as 38 MeV. == Related quantities == Using this definition of above for the Fermi energy, various related quantities can be useful. The Fermi temperature is defined as T F = E F k B , {\displaystyle T_{\text{F}}={\frac {E_{\text{F}}}{k_{\text{B}}}},} where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, and E F {\displaystyle E_{\text{F}}} the Fermi energy. The Fermi temperature can be thought of as the temperature at which thermal effects are comparable to quantum effects associated with Fermi statistics. The Fermi temperature for a metal is a couple of orders of magnitude above room temperature. Other quantities defined in this context are Fermi momentum p F = 2 m 0 E F {\displaystyle p_{\text{F}}={\sqrt {2m_{0}E_{\text{F}}}}} and Fermi velocity v F = p F m 0 . {\displaystyle v_{\text{F}}={\frac {p_{\text{F}}}{m_{0}}}.} These quantities are respectively the momentum and group velocity of a fermion at the Fermi surface. The Fermi momentum can also be described as p F = ℏ k F , {\displaystyle p_{\text{F}}=\hbar k_{\text{F}},} where k F = ( 3 π 2 n ) 1 / 3 {\displaystyle k_{\text{F}}=(3\pi ^{2}n)^{1/3}} , called the Fermi wavevector, is the radius of the Fermi sphere. n {\displaystyle n} is the electron density. These quantities may not be well-defined in cases where the Fermi surface is non-spherical. == See also == Fermi–Dirac statistics: the distribution of electrons over stationary states for non-interacting fermions at non-zero temperature. Fermi level Quasi Fermi level == Notes == == References == == Further reading == Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 978-0-7167-1088-2.
Wikipedia/Fermi_energy
A phenomenological model is a scientific model that describes the empirical relationship of phenomena to each other, in a way which is consistent with fundamental theory, but is not directly derived from theory. In other words, a phenomenological model is not derived from first principles. A phenomenological model forgoes any attempt to explain why the variables interact the way they do, and simply attempts to describe the relationship, with the assumption that the relationship extends past the measured values. Regression analysis is sometimes used to create statistical models that serve as phenomenological models. == Examples of use == Phenomenological models have been characterized as being completely independent of theories, though many phenomenological models, while failing to be derivable from a theory, incorporate principles and laws associated with theories. The liquid drop model of the atomic nucleus, for instance, portrays the nucleus as a liquid drop and describes it as having several properties (surface tension and charge, among others) originating in different theories (hydrodynamics and electrodynamics, respectively). Certain aspects of these theories—though usually not the complete theory—are then used to determine both the static and dynamical properties of the nucleus. == See also == Phenomenology (physics) == References ==
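As a minimal illustration of how regression analysis can yield a phenomenological model, the sketch below fits an assumed power-law form y = a·x^b to synthetic, noisy data. Everything in it (the data, the noise level and the power-law form itself) is an assumption made for illustration; the point is that the functional form is chosen to describe the observations rather than derived from first principles.

```python
import numpy as np

# Synthetic "measurements" (illustrative only): a response y observed at inputs x.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 25)
y_true = 3.2 * x**1.5                                   # hidden relationship used to generate data
y = y_true * (1 + 0.05 * rng.standard_normal(x.size))   # add 5% multiplicative noise

# Phenomenological model: assume y = a * x**b and fit a and b by linear regression
# on log-transformed data. No attempt is made to explain why this form should hold.
b, log_a = np.polyfit(np.log(x), np.log(y), deg=1)
a = np.exp(log_a)

print(f"fitted model: y ~ {a:.2f} * x**{b:.2f}")

# The fitted curve can then be used to interpolate (and, cautiously, extrapolate)
# beyond the measured points, which is how phenomenological models are typically used.
```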
Wikipedia/Phenomenological_model
In solid state physics, a particle's effective mass (often denoted m ∗ {\textstyle m^{*}} ) is the mass that it seems to have when responding to forces, or the mass that it seems to have when interacting with other identical particles in a thermal distribution. One of the results from the band theory of solids is that the movement of particles in a periodic potential, over long distances larger than the lattice spacing, can be very different from their motion in a vacuum. The effective mass is a quantity that is used to simplify band structures by modeling the behavior of a free particle with that mass. For some purposes and some materials, the effective mass can be considered to be a simple constant of a material. In general, however, the value of effective mass depends on the purpose for which it is used, and can vary depending on a number of factors. For electrons or electron holes in a solid, the effective mass is usually stated as a factor multiplying the rest mass of an electron, me (9.11 × 10−31 kg). This factor is usually in the range 0.01 to 10, but can be lower or higher—for example, reaching 1,000 in exotic heavy fermion materials, or anywhere from zero to infinity (depending on definition) in graphene. As it simplifies the more general band theory, the electronic effective mass can be seen as an important basic parameter that influences measurable properties of a solid, including everything from the efficiency of a solar cell to the speed of an integrated circuit. == Simple case: parabolic, isotropic dispersion relation == At the highest energies of the valence band in many semiconductors (Ge, Si, GaAs, ...), and the lowest energies of the conduction band in some semiconductors (GaAs, ...), the band structure E(k) can be locally approximated as E ( k ) = E 0 + ℏ 2 k 2 2 m ∗ {\displaystyle E(\mathbf {k} )=E_{0}+{\frac {\hbar ^{2}\mathbf {k} ^{2}}{2m^{*}}}} where E(k) is the energy of an electron at wavevector k in that band, E0 is a constant giving the edge of energy of that band, and m* is a constant (the effective mass). It can be shown that the electrons placed in these bands behave as free electrons except with a different mass, as long as their energy stays within the range of validity of the approximation above. As a result, the electron mass in models such as the Drude model must be replaced with the effective mass. One remarkable property is that the effective mass can become negative, when the band curves downwards away from a maximum. As a result of the negative mass, the electrons respond to electric and magnetic forces by gaining velocity in the opposite direction compared to normal; even though these electrons have negative charge, they move in trajectories as if they had positive charge (and positive mass). This explains the existence of valence-band holes, the positive-charge, positive-mass quasiparticles that can be found in semiconductors. In any case, if the band structure has the simple parabolic form described above, then the value of effective mass is unambiguous. Unfortunately, this parabolic form is not valid for describing most materials. In such complex materials there is no single definition of "effective mass" but instead multiple definitions, each suited to a particular purpose. The rest of the article describes these effective masses in detail. 
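The following Python sketch illustrates the parabolic approximation numerically: it defines a band E(k) = E0 + ħ²k²/(2m*) with an assumed effective mass and then recovers m* from the band curvature as m* = ħ²/(d²E/dk²), approximated by a central finite difference. The specific inputs (an effective mass of 0.067 me, roughly the value often quoted for the GaAs conduction band, and the band-edge energy) are illustrative assumptions.

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron rest mass, kg
eV   = 1.602176634e-19   # joules per electronvolt

# Illustrative parabolic band E(k) = E0 + hbar^2 k^2 / (2 m*), with an assumed
# effective mass of 0.067 m_e and an assumed band-edge energy of 1.42 eV.
m_star_in = 0.067 * m_e
E0 = 1.42 * eV

def E(k):
    return E0 + hbar**2 * k**2 / (2 * m_star_in)

# Recover the effective mass from the band curvature, m* = hbar^2 / (d^2E/dk^2),
# using a central finite difference around k = 0.
k0, dk = 0.0, 1e7   # wavevectors in 1/m
d2E_dk2 = (E(k0 + dk) - 2 * E(k0) + E(k0 - dk)) / dk**2
m_star_out = hbar**2 / d2E_dk2

print(f"input  m* = {m_star_in / m_e:.3f} m_e")
print(f"output m* = {m_star_out / m_e:.3f} m_e")  # matches, since the band is exactly parabolic
```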
== Intermediate case: parabolic, anisotropic dispersion relation == In some important semiconductors (notably, silicon) the lowest energies of the conduction band are not symmetrical, as the constant-energy surfaces are now ellipsoids, rather than the spheres in the isotropic case. Each conduction band minimum can be approximated only by E ( k ) = E 0 + ℏ 2 2 m x ∗ ( k x − k 0 , x ) 2 + ℏ 2 2 m y ∗ ( k y − k 0 , y ) 2 + ℏ 2 2 m z ∗ ( k z − k 0 , z ) 2 {\displaystyle E\left(\mathbf {k} \right)=E_{0}+{\frac {\hbar ^{2}}{2m_{x}^{*}}}\left(k_{x}-k_{0,x}\right)^{2}+{\frac {\hbar ^{2}}{2m_{y}^{*}}}\left(k_{y}-k_{0,y}\right)^{2}+{\frac {\hbar ^{2}}{2m_{z}^{*}}}\left(k_{z}-k_{0,z}\right)^{2}} where x, y, and z axes are aligned to the principal axes of the ellipsoids, and m*x, m*y and m*z are the inertial effective masses along these different axes. The offsets k0,x, k0,y, and k0,z reflect that the conduction band minimum is no longer centered at zero wavevector. (These effective masses correspond to the principal components of the inertial effective mass tensor, described later.) In this case, the electron motion is no longer directly comparable to a free electron; the speed of an electron will depend on its direction, and it will accelerate to a different degree depending on the direction of the force. Still, in crystals such as silicon the overall properties such as conductivity appear to be isotropic. This is because there are multiple valleys (conduction-band minima), each with effective masses rearranged along different axes. The valleys collectively act together to give an isotropic conductivity. It is possible to average the different axes' effective masses together in some way, to regain the free electron picture. However, the averaging method turns out to depend on the purpose: == General case == In general the dispersion relation cannot be approximated as parabolic, and in such cases the effective mass should be precisely defined if it is to be used at all. Here a commonly stated definition of effective mass is the inertial effective mass tensor defined below; however, in general it is a matrix-valued function of the wavevector, and even more complex than the band structure. Other effective masses are more relevant to directly measurable phenomena. === Inertial effective mass tensor === A classical particle under the influence of a force accelerates according to Newton's second law, a = m−1F, or alternatively, the momentum changes according to ⁠d/dt⁠p = F. This intuitive principle appears identically in semiclassical approximations derived from band structure when interband transitions can be ignored for sufficiently weak external fields. The force gives a rate of change in crystal momentum pcrystal: F = d ⁡ p crystal d ⁡ t = ℏ d ⁡ k d ⁡ t , {\displaystyle \mathbf {F} ={\frac {\operatorname {d} \mathbf {p} _{\text{crystal}}}{\operatorname {d} t}}=\hbar {\frac {\operatorname {d} \mathbf {k} }{\operatorname {d} t}},} where ħ = h/2π is the reduced Planck constant. 
Acceleration for a wave-like particle becomes the rate of change in group velocity: a = d d t v g = d d t ( ∇ k ω ( k ) ) = ∇ k d ω ( k ) d t = ∇ k ( d k d t ⋅ ∇ k ω ( k ) ) , {\displaystyle \mathbf {a} ={\frac {\operatorname {d} }{\operatorname {d} t}}\,\mathbf {v} _{\text{g}}={\frac {\operatorname {d} }{\operatorname {d} t}}\left(\nabla _{k}\,\omega \left(\mathbf {k} \right)\right)=\nabla _{k}{\frac {\operatorname {d} \omega \left(\mathbf {k} \right)}{\operatorname {d} t}}=\nabla _{k}\left({\frac {\operatorname {d} \mathbf {k} }{\operatorname {d} t}}\cdot \nabla _{k}\,\omega (\mathbf {k} )\right),} where ∇k is the del operator in reciprocal space. The last step follows from using the chain rule for a total derivative for a quantity with indirect dependencies, because the direct result of the force is the change in k(t) given above, which indirectly results in a change in E(k)=ħω(k). Combining these two equations yields a = ∇ k ( F ℏ ⋅ ∇ k E ( k ) ℏ ) = 1 ℏ 2 ( ∇ k ( ∇ k E ( k ) ) ) ⋅ F = M inert − 1 ⋅ F {\displaystyle \mathbf {a} =\nabla _{k}\left({\frac {\mathbf {F} }{\hbar }}\cdot \nabla _{k}\,{\frac {E(\mathbf {k} )}{\hbar }}\right)={\frac {1}{\hbar ^{2}}}\left(\nabla _{k}\left(\nabla _{k}\,E(\mathbf {k} )\right)\right)\cdot \mathbf {F} =M_{\text{inert}}^{-1}\cdot \mathbf {F} } using the dot product rule with a uniform force (∇kF=0). ∇ k ( ∇ k E ( k ) ) {\displaystyle \nabla _{k}\left(\nabla _{k}\,E(\mathbf {k} )\right)} is the Hessian matrix of E(k) in reciprocal space. We see that the equivalent of the Newtonian reciprocal inertial mass for a free particle defined by a = m−1F has become a tensor quantity M inert − 1 = 1 ℏ 2 ∇ k ( ∇ k E ( k ) ) . {\displaystyle M_{\text{inert}}^{-1}={\frac {1}{\hbar ^{2}}}\nabla _{k}\left(\nabla _{k}\,E(\mathbf {k} )\right).} whose elements are [ M inert − 1 ] i j = 1 ℏ 2 [ ∇ k ( ∇ k E ( k ) ) ] i j = 1 ℏ 2 ∂ 2 E ∂ k i ∂ k j . {\displaystyle \left[M_{\text{inert}}^{-1}\right]_{ij}={\frac {1}{\hbar ^{2}}}\left[\nabla _{k}\left(\nabla _{k}\,E(\mathbf {k} )\right)\right]_{ij}={\frac {1}{\hbar ^{2}}}{\frac {\partial ^{2}E}{\partial k_{i}\partial k_{j}}}\,.} This tensor allows the acceleration and force to be in different directions, and for the magnitude of the acceleration to depend on the direction of the force. For parabolic bands, the off-diagonal elements of Minert−1 are zero, and the diagonal elements are constants. For isotropic bands the diagonal elements must all be equal and the off-diagonal elements must all be equal. For parabolic isotropic bands, Minert−1 = 1/m*I, where m* is a scalar effective mass and I is the identity. In general, the elements of Minert−1 are functions of k. The inverse, Minert = (Minert−1)−1, is known as the effective mass tensor. Note that it is not always possible to invert Minert−1. For bands with linear dispersion E ∝ k {\displaystyle E\propto k} such as with photons or electrons in graphene, the group velocity is fixed, i.e. electrons travelling with k parallel to the force direction F cannot be accelerated, and the diagonal elements of Minert−1 are obviously zero. However, electrons travelling with a component perpendicular to the force can be accelerated in the direction of the force, and the off-diagonal elements of Minert−1 are non-zero. In fact the off-diagonal elements scale inversely with k, i.e. they diverge (become infinite) for small k. 
This is why the electrons in graphene are sometimes said to have infinite mass (due to the zeros on the diagonal of Minert−1) and sometimes said to be massless (due to the divergence on the off-diagonals). === Cyclotron effective mass === Classically, a charged particle in a magnetic field moves in a helix along the magnetic field axis. The period T of its motion depends on its mass m and charge e, T = | 2 π m e B | {\displaystyle T=\left\vert {\frac {2\pi m}{eB}}\right\vert } where B is the magnetic flux density. For particles in asymmetrical band structures, the particle no longer moves exactly in a helix, however its motion transverse to the magnetic field still moves in a closed loop (not necessarily a circle). Moreover, the time to complete one of these loops still varies inversely with magnetic field, and so it is possible to define a cyclotron effective mass from the measured period, using the above equation. The semiclassical motion of the particle can be described by a closed loop in k-space. Throughout this loop, the particle maintains a constant energy, as well as a constant momentum along the magnetic field axis. By defining A to be the k-space area enclosed by this loop (this area depends on the energy E, the direction of the magnetic field, and the on-axis wavevector kB), then it can be shown that the cyclotron effective mass depends on the band structure via the derivative of this area in energy: m ∗ ( E , B ^ , k B ^ ) = ℏ 2 2 π ⋅ ∂ ∂ E A ( E , B ^ , k B ^ ) {\displaystyle m^{*}\left(E,{\hat {B}},k_{\hat {B}}\right)={\frac {\hbar ^{2}}{2\pi }}\cdot {\frac {\partial }{\partial E}}A\left(E,{\hat {B}},k_{\hat {B}}\right)} Typically, experiments that measure cyclotron motion (cyclotron resonance, De Haas–Van Alphen effect, etc.) are restricted to only probe motion for energies near the Fermi level. In two-dimensional electron gases, the cyclotron effective mass is defined only for one magnetic field direction (perpendicular) and the out-of-plane wavevector drops out. The cyclotron effective mass therefore is only a function of energy, and it turns out to be exactly related to the density of states at that energy via the relation g ( E ) = g v m ∗ π ℏ 2 {\displaystyle \scriptstyle g(E)\;=\;{\frac {g_{v}m^{*}}{\pi \hbar ^{2}}}} , where gv is the valley degeneracy. Such a simple relationship does not apply in three-dimensional materials. === Density of states effective masses (lightly doped semiconductors) === In semiconductors with low levels of doping, the electron concentration in the conduction band is in general given by n e = N C exp ⁡ ( − E C − E F k T ) {\displaystyle n_{\text{e}}=N_{\text{C}}\exp \left(-{\frac {E_{\text{C}}-E_{\text{F}}}{kT}}\right)} where EF is the Fermi level, EC is the minimum energy of the conduction band, and NC is a concentration coefficient that depends on temperature. The above relationship for ne can be shown to apply for any conduction band shape (including non-parabolic, asymmetric bands), provided the doping is weak (EC − EF ≫ kT); this is a consequence of Fermi–Dirac statistics limiting towards Maxwell–Boltzmann statistics. The concept of effective mass is useful to model the temperature dependence of NC, thereby allowing the above relationship to be used over a range of temperatures. 
In an idealized three-dimensional material with a parabolic band, the concentration coefficient is given by N C = 2 ( 2 π m e ∗ k T h 2 ) 3 2 {\displaystyle \quad N_{\text{C}}=2\left({\frac {2\pi m_{\text{e}}^{*}kT}{h^{2}}}\right)^{\frac {3}{2}}} In semiconductors with non-simple band structures, this relationship is used to define an effective mass, known as the density of states effective mass of electrons. The name "density of states effective mass" is used since the above expression for NC is derived via the density of states for a parabolic band. In practice, the effective mass extracted in this way is not quite constant in temperature (NC does not exactly vary as T3/2). In silicon, for example, this effective mass varies by a few percent between absolute zero and room temperature because the band structure itself slightly changes in shape. These band structure distortions are a result of changes in electron–phonon interaction energies, with the lattice's thermal expansion playing a minor role. Similarly, the number of holes in the valence band, and the density of states effective mass of holes are defined by: n h = N V exp ⁡ ( − E F − E V k T ) , N V = 2 ( 2 π m h ∗ k T h 2 ) 3 2 {\displaystyle n_{\text{h}}=N_{\text{V}}\exp \left(-{\frac {E_{\text{F}}-E_{\text{V}}}{kT}}\right),\quad N_{\text{V}}=2\left({\frac {2\pi m_{\text{h}}^{*}kT}{h^{2}}}\right)^{\frac {3}{2}}} where EV is the maximum energy of the valence band. Practically, this effective mass tends to vary greatly between absolute zero and room temperature in many materials (e.g., a factor of two in silicon), as there are multiple valence bands with distinct and significantly non-parabolic character, all peaking near the same energy. == Determination == === Experimental === Traditionally effective masses were measured using cyclotron resonance, a method in which microwave absorption of a semiconductor immersed in a magnetic field goes through a sharp peak when the microwave frequency equals the cyclotron frequency f c = e B 2 π m ∗ {\displaystyle \scriptstyle f_{c}\;=\;{\frac {eB}{2\pi m^{*}}}} . In recent years effective masses have more commonly been determined through measurement of band structures using techniques such as angle-resolved photoemission spectroscopy (ARPES) or, most directly, the de Haas–van Alphen effect. Effective masses can also be estimated using the coefficient γ of the linear term in the low-temperature electronic specific heat at constant volume c v {\displaystyle \scriptstyle c_{v}} . The specific heat depends on the effective mass through the density of states at the Fermi level and as such is a measure of degeneracy as well as band curvature. Very large estimates of carrier mass from specific heat measurements have given rise to the concept of heavy fermion materials. Since carrier mobility depends on the ratio of carrier collision lifetime τ {\displaystyle \tau } to effective mass, masses can in principle be determined from transport measurements, but this method is not practical since carrier collision probabilities are typically not known a priori. The optical Hall effect is an emerging technique for measuring the free charge carrier density, effective mass and mobility parameters in semiconductors. The optical Hall effect measures the analogue of the quasi-static electric-field-induced electrical Hall effect at optical frequencies in conductive and complex layered materials. 
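As a rough numerical check of the expression for NC above, the following Python sketch evaluates NC and the corresponding non-degenerate electron concentration ne at room temperature for an assumed density-of-states effective mass and an assumed position of the Fermi level below the conduction band edge. All input values are illustrative assumptions, not data from this article.

```python
import math

# Constants (SI units)
kB  = 1.380649e-23       # Boltzmann constant, J/K
h   = 6.62607015e-34     # Planck constant, J*s
m_e = 9.1093837015e-31   # electron rest mass, kg
eV  = 1.602176634e-19    # joules per electronvolt

# Illustrative inputs (assumptions):
T = 300.0                  # temperature, K
m_dos = 1.08 * m_e         # assumed density-of-states effective mass for electrons
EC_minus_EF = 0.25 * eV    # assumed E_C - E_F, large enough for the non-degenerate limit

# Concentration coefficient for a parabolic band:
# N_C = 2 * (2 pi m* kB T / h^2)^(3/2)
N_C = 2 * (2 * math.pi * m_dos * kB * T / h**2) ** 1.5

# Conduction-band electron concentration in the Maxwell-Boltzmann limit:
# n_e = N_C * exp(-(E_C - E_F) / (kB T))
n_e = N_C * math.exp(-EC_minus_EF / (kB * T))

print(f"N_C = {N_C:.3e} m^-3")   # of order 1e25 m^-3 at room temperature for this mass
print(f"n_e = {n_e:.3e} m^-3")
```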
The optical Hall effect also permits characterization of the anisotropy (tensor character) of the effective mass and mobility parameters. === Theoretical === A variety of theoretical methods including density functional theory, k·p perturbation theory, and others are used to supplement and support the various experimental measurements described in the previous section, including interpreting, fitting, and extrapolating these measurements. Some of these theoretical methods can also be used for ab initio predictions of effective mass in the absence of any experimental data, for example to study materials that have not yet been created in the laboratory. == Significance == The effective mass is used in transport calculations, such as transport of electrons under the influence of fields or carrier gradients, but it also is used to calculate the carrier density and density of states in semiconductors. These masses are related but, as explained in the previous sections, are not the same because the weightings of various directions and wavevectors are different. These differences are important, for example in thermoelectric materials, where high conductivity, generally associated with light mass, is desired at the same time as high Seebeck coefficient, generally associated with heavy mass. Methods for assessing the electronic structures of different materials in this context have been developed. Certain group III–V compounds such as gallium arsenide (GaAs) and indium antimonide (InSb) have far smaller effective masses than tetrahedral group IV materials like silicon and germanium. In the simplest Drude picture of electronic transport, the maximum obtainable charge carrier velocity is inversely proportional to the effective mass: v → = ‖ μ ‖ ⋅ E → {\textstyle {\vec {v}}\;=\;\left\Vert \mu \right\Vert \cdot {\vec {E}}} , where ‖ μ ‖ = e τ / ‖ m ∗ ‖ {\textstyle \left\Vert \mu \right\Vert \;=\;{e\tau }/{\left\Vert m^{*}\right\Vert }} with e {\textstyle e} being the electronic charge. The ultimate speed of integrated circuits depends on the carrier velocity, so the low effective mass is the fundamental reason that GaAs and its derivatives are used instead of Si in high-bandwidth applications like cellular telephony. In April 2017, researchers at Washington State University claimed to have created a fluid with negative effective mass inside a Bose–Einstein condensate, by engineering the dispersion relation. == See also == Models of solids and crystals: Tight-binding model Free electron model Nearly free electron model == Footnotes == == References == Pastori Parravicini, G. (1975). Electronic States and Optical Transitions in Solids. Pergamon Press. ISBN 978-0-08-016846-3. This book contains an exhaustive but accessible discussion of the topic with extensive comparison between calculations and experiment. S. Pekar, The method of effective electron mass in crystals, Zh. Eksp. Teor. Fiz. 16, 933 (1946). == External links == NSM archive
Wikipedia/Effective_mass_(solid-state_physics)
Heureka is a science center in the Tikkurila district of Vantaa, Finland, north of Helsinki, designed by Heikkinen – Komonen Architects. It is located at the intersection of the Finnish Main Line and the river Keravanjoki. The aim of the science centre, which opened its doors to the public in 1989, is to popularise scientific information and to develop the methods used to teach science and scientific concepts. The science centre provides opportunities to become familiar with science and technology through varying exhibitions, a planetarium, an idea workshop, educational programs and events. Heureka is one of the largest leisure centres in Finland, with about 300,000 visitors per year. The name "Heureka" (eureka in English) refers to the Greek exclamation, presumably uttered by Archimedes, to mean "I've found it!" (made a discovery). The Science Centre Heureka features both indoor and outdoor interactive exhibitions with exhibits that enable visitors to independently test different concepts and ideas. There is also a digital planetarium with 135 seats. The Heureka Science Centre is a non-profit organization run by the Finnish Science Centre Foundation. The Finnish Science Centre Foundation is a broadly based co-operation organization that includes the Finnish scientific community, education sector, trade and industry, and national and local government. The ten background organisations of the Foundation support, develop and actively participate in the activities of Heureka. The foundation's highest body is the Board of Trustees, whose decisions are implemented by the Governing Board. Everyday activities are the responsibility of Heureka's director, assisted by a management team and other staff. Since September 2020, the director of Heureka has been Mikko Myllykoski. == History == The roots of the Finnish Science Centre Heureka can be traced back to the University of Helsinki and to scientists who had become acquainted with different science centres located around the world. The initial spark was lit by Adjunct Professors Tapio Markkanen, Hannu I. Miettinen and Heikki Oja. It all began with the Physics 82 exhibition held at the House of the Estates in Helsinki on 20–26 May 1982. During the autumn of that same year, the science centre project was launched with the initial support of the Academy of Finland, the Ministry of Education, and various foundations. The project led to the establishment of the Finnish Science Centre Foundation during 1983–1984. The original founding members of the foundation included the University of Helsinki, the Helsinki University of Technology, the Federation of Finnish Learned Societies, and the Confederation of Industries. In 1984, the City of Vantaa offered to be the host city and partial financier for the Science Centre, and also designated a property lot located at the southern end of Tikkurila as the future site of the centre. The total cost of the building was 80 million Finnish markka, or about 13.5 million euro. An architectural competition, held in 1985, resulted in two first prizes, from which the winning design was selected: the "Heureka" design submitted by Mikko Heikkinen, Markku Komonen and Lauri Anttila. This is how the Finnish Science Centre Heureka got its name. 
Before the building was completed, a number of test exhibitions were set up at other sites: Fysiikka -82 at the House of Estates, the medical exhibition Pulssi at the Tali tennis centre in spring 1985, Vipunen about the Finnish people and language at the House of Estates in autumn 1985, the aquatic exhibition AQUA 86 at Messukeskus Helsinki in spring 1986 and the technical and scientific exhibition Teknorama at Messukeskus Helsinki in spring 1987. The interior plan for the Science Centre was completed in 1986. The foundation for the building was laid in October 1987, and the construction work was completed one year later. The overall area of the building is 8,200 m², of which 2,800 m² is exhibition space. The Finnish Science Centre Heureka opened its doors to the public on 28 April 1989. == Building == The building consists of an auditorium, meeting rooms, a planetarium, a restaurant and a shop. The facade of the building facing the railway track makes use of mirror glass, intended to shield the building from the noise caused by the trains. The structures in the facade are phased into 31 parts of a whole of a hundred metres of length, with the corresponding spectral colours based on a laboratory analysis and special paints made based on it. The coating of the outer walls of the pillar hall uses pretensed white concrete slabs one inch thick. The shell elements have been sandblasted on their surface and are 60 × 120 cm in area. The central inner space and architectural focus point of the science centre is its 14-metre-tall cylindrical exhibition hall. The pillar module of the main exhibition space is 9.6 metres in both height and width. Its pillars consist of sub-pillars, with four in the central area, two on each edge and one in each corner. This structure is intended to visualise the relative distribution of load. The main exhibition space is surrounded by the pillar hall and an arc hall supported by laminated beam arcs. The sphere of the planetarium and the sector-shaped auditorium intersect and enter into the modular pillar hall, and partly into each other. Each part of the building is fitted with a structural system developed specifically for it. In terms of structures and materials, the science centre is a kind of a conglomeration. It contains concrete, steel and wood constructs. The building has mostly been built from pre-built components, but it also contains some parts built in place. The base floor of the exhibition spaces contains a semi-heated service space for the building services engineering elements needed by the exhibition. The outlets and connection possibilities have been systematically placed in a 2.4 metre grid into the entire building. This allows for enough flexibility to host exhibitions. === Interior === The cylindrical main exhibition hall was inspired by an exhibition designed by Gunnar Asplund at the Stockholm city library and houses about 200 exhibits related to different fields of science. Stable exhibits in the area include a Foucault pendulum and a wire wheel at the ceiling. The content of the main exhibition is renewed every couple of years. The main exhibition was renewed completely in 1999, but there are small changes taking place in the main exhibition hall each year as well. The cylindrical main exhibition hall houses many exhibits related to various fields of science. The topics include, for example, digestion and the functions of the intestines, the production of money and traffic. 
The exhibition "The Wind in the Bowels" has been designed in co-operation with the Finnish Medical Society Duodecim. The exhibition "About a Coin" was implemented through collaboration with the Mint of Finland to mark the company's 150th anniversary. The exhibition Intelligent City is about utilising information technology in improving the functionality, safety, energy effectiveness and environmental friendliness of a city. As an extension of the main exhibition, the Heureka Classics exhibition was opened in 2009 in honour of Heureka's 20th anniversary, and hosts a collection of favourites both from Heureka and from other science centres. The exhibition shows various prominent physical phenomena, which can be experienced by the entire body. Illusion exhibits show how the cooperation between the brain and the senses create amazing phenomena inside our heads. From the beginning of August 2009, Heureka has also had the Science on a Sphere exhibit on display. This exhibit is a large sphere created by the American National Oceanic and Atmospheric Administration (NOAA), with various demonstrations of the climate, the oceans, the landmasses and astronomical objects projected on its surface. In addition to the main exhibition, Heureka generally also houses two temporary exhibitions. The topics of past temporary exhibitions have included, for example, dinosaurs, humans, sports, forests, the art of film, flying and ancient cultures. Since Heureka's opening, the most successful exhibitions have been the dinosaur exhibitions. The 2001 exhibition about the family life of dinosaurs, for example, attracted 406,000 visitors. Many of the exhibitions independently produced by Heureka have made guest appearances in numerous science centres all over the world. Heureka also features exhibitions imported from abroad. === Outdoor areas === Heureka's outdoor exhibition area, Science Park Galilei, opened in 2002. This area of the centre can be visited annually during the summer season. Galilei is a sort of "scientific playground". The 7,500 m² area holds dozens of exhibits, many of which feature water as the primary element. The exhibits are based on mathematical, physical and musical phenomena. The outdoor park also contains moving works of art, such as the sand plotter created by well-known Finnish artist Osmo Valtonen. Galilei also features an arboretum with species of conifers from the northern hemisphere. The area in front of Heureka features a permanent bedrock exhibition, which contains both common and rare types of rocks found in Finland's bedrock. The rocks are situated to reflect their distribution throughout different geographical provinces of Finland. The purpose of the bedrock exhibition is to show visitors of the science centre that the message of the science centre is not limited to technical achievements, but also extends to the long cycle of nature and culture. Leading up to the front entrance, visitors are also greeted by perennial gardens that were planted in accordance with the historical classification system designed by Carolus Linnaeus. The front of the entrance is tiled in Penrose tiling. === Planetarium === The hemispheric-shaped planetarium primarily presents films dealing with astronomy. Until 2007, the theatre was called the Verne Theatre, and it ran super films and multimedia programmes made with special slide projectors that took advantage of the entire 500 m² surface of the hemispheric screen. 
At the end of 2007, the theatre was entirely renovated; the digital Sky Skan equipment of the current planetarium allows for projecting moving images to the entire surface of the hemispheric screen. The planetarium also has a traditional star sky projector for special programs. There are altogether 135 seats in Heureka's planetarium, and it is often used for planetarium films with an outer space theme. === Other daily programmes === In addition to the exhibitions and planetarium films, Heureka also offers the opportunity to view daily science theatre shows, to participate in supervised programmes and to watch basketball games played by rats. Furthermore, a number of other individual events, such as Science Days, science holidays and science camps in the summer are organised at Heureka. Public lectures with different themes are also regularly held in Heureka's auditorium. Public lectures are given in the planetarium as well. Other services at Heureka include a science shop and a restaurant, as well as conference facilities and a 220-seat auditorium for meetings. == Visitors == From 1989 to 2011, an average of about 285,000 people have visited Heureka each year. The total number of visitors exceeded six million in May 2010. From spring 1989 to the end of 2020 over 8.9 million people have visited Heureka. Altogether almost 29 million people have viewed Heureka's exhibitions on display both in Finland and abroad. Of the average 285,000 people who visit Heureka each year, more than half represent families, one fourth school students, about 10% are corporate visits, and the rest are individual visitors. About 6-10% of the visitors arrive from abroad, with the highest percentage coming from Russia and Estonia. The number of visitors is affected by, for example, the general economic situation, the weather and the excursion funds available to school groups. In 2019 a total of 423,229 people visited Heureka, thanks to the popular Giant Dinosaurs exhibition. Because of the lock-down caused by the COVID-19 pandemic, the number of visitors in 2020 was only 165,428. Heureka's most active year was its inaugural year 1989, when 431,244 people visited the science centre in a bit over eight months. In terms of number of visitors, Heureka is one of the most popular for-pay attractions in Finland, and it has been the most popular museum in Finland on many years. When the science centre was originally being planned in the 1980s, it was estimated to attract 250,000 people per year. The long-time average has risen slightly higher than this estimate, to about 280,000 visitors per year. The number of visitors varies considerably each year because of the services offered and the economic situation of Heureka and its competitors, which affects the demand for various visitor groups and consumption decisions. == The Finnish Science Centre Foundation and funding == Heureka is run by the Finnish Science Centre Foundation, whose original members include the University of Helsinki, the Helsinki University of Technology (nowadays Aalto University), the Federation of Finnish Learned Societies, and the Confederation of Industries (nowadays Confederation of Finnish Industries, EK), the City of Vantaa, the Ministry of Education (nowadays Ministry of Education and Culture), the Ministry of Trade and Industry (nowadays Ministry of Employment and the Economy), the Ministry of Finance, the Central Organisation of Finnish Trade Unions (SAK), and the Trade Union of Education in Finland (OAJ). 
Heureka's funding is provided through subsidies from the City of Vantaa and the Ministry of Education and Culture, as well as through its own operational revenue: admission and rental fees, fundraising and exhibition exports. Heureka's overall funding is approximately ten million euros, of which revenue from its own operations accounts for about one half. The share of the funding provided by the City of Vantaa and the Ministry of Education and Culture amounts to the other half. The public support is notably less than for many other cultural institutions. Part of the funding also comes through corporate co-operation, and the temporary exhibitions are often sponsored by main partners and other partners. Heureka also has two companies owned entirely by the Foundation: the Science Shop Magneetti Oy, which runs the Heureka Shop at Heureka and at the Kamppi Center in central Helsinki, and Heureka Overseas Productions Oy Ltd, which manages Heureka's export activities. == Leadership == Professor Per-Edvin Persson served as the president of Heureka from 1991 until his retirement in 2013, when he was replaced by Anneli Pauli. Tapio Koivu started as the president of Heureka in August 2014. Mikko Myllykoski was chosen as the president of Heureka in September 2020, having previously served as the experience director of Heureka for many years. The foundation had a staff of 60 to 70 full-time salaried employees and 20 to 40 part-time or fixed-term employees. The total number of person-years has varied between 70 and 80. Additionally, 7,206 hours of volunteer work were carried out. There have been volunteers at Heureka since 1998, and there are currently about 60–70 volunteers in its service. The centre is a member of three associations of science centres: ASTC (Association of Science-Technology Centers), ECSITE (The European Collaborative for Science, Industry and Technology) and NSCF (Nordisk Science Centerförbundet). The Science Centre Foundation has endowed a post of director of science centre pedagogics at the department of applied education science in the faculty of behavioural sciences of the University of Helsinki; Heureka's chief of research and development, professor Hannu Salmi, was elected to the position. == See also == List of science centers == Sources == Douglas E. Graf, "Heureka: Formal Analysis", Datutop 18, Tampere, 1996. === References === == External links == The homepage of Heureka
Wikipedia/Heureka_(science_center)
The Sunday Telegraph is a British broadsheet newspaper, first published on 5 February 1961 and published by the Telegraph Media Group, a division of Press Holdings. It is the sister paper of The Daily Telegraph, also published by the Telegraph Media Group. The Sunday Telegraph was originally a separate operation with a different editorial staff, but since 2013 the Telegraph has been a seven-day operation. However, The Sunday Telegraph still has its own editor, different from that of The Daily Telegraph. According to the Audit Bureau of Circulations, the Sunday Telegraph had an average circulation of 214,711 copies per week in the first half of 2021. == See also == Journalism portal == References == == External links == Official website
Wikipedia/Sunday_Telegraph
The Montreal Science Centre (French: Centre des sciences de Montréal) is a science museum in Montreal, Quebec, Canada. It is located on the Quai King-Edward in the Old Port of Montreal. Established in 2000 and originally known as the iSci Centre, the museum changed its name to the Montreal Science Centre in 2002. The museum is managed by the Old Port of Montreal Corporation (a division of the Canada Lands Company, a crown corporation of the Government of Canada). The museum is home to interactive exhibitions on science and technology as well as an IMAX theatre. == History == The King Edward Quay was built from 1901 to 1903 as King Edward Wharf for cargo ships, but the port area began to change in the 1970s as port activity moved to the new Port of Montreal. By the 1990s, the King Edward Quay had been redeveloped along with the rest of the Old Port. == See also == Space for Life, a related museum district situated in and adjacent to Montreal's former Olympic Park List of science centers == References == == External links == Media related to Montreal Science Centre at Wikimedia Commons Official website
Wikipedia/Montreal_Science_Centre
Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 is a 2011 book by theoretical physicist Michio Kaku, author of Hyperspace and Physics of the Impossible. In it Kaku speculates about possible future technological development over the next 100 years. He interviews notable scientists about their fields of research and lays out his vision of coming developments in medicine, computing, artificial intelligence, nanotechnology, and energy production. The book was on the New York Times Bestseller List for five weeks. Kaku writes how he hopes his predictions for 2100 will be as successful as science fiction writer Jules Verne's 1863 novel Paris in the Twentieth Century. Kaku contrasts Verne's foresight against U.S. Postmaster General John Wanamaker, who in 1893 predicted that mail would still be delivered by stagecoach and horseback in 100 years' time, and IBM chairman Thomas J. Watson, who in 1943 is alleged to have said "I think there is a world market for maybe five computers." Kaku points to this long history of failed predictions against progress to underscore his notion "that it is very dangerous to bet against the future". == Contents == Each chapter is sorted into three sections: Near future (2000-2030), Midcentury (2030-2070), and Far future (2070-2100). Kaku notes that the time periods are only rough approximations, but show the general time frame for the various trends in the book. === Future of the Computer: Mind over Matter === Kaku begins with Moore's law, and compares a chip that sings "Happy Birthday" with the Allied forces in 1945, stating that the chip contains much more power, and that "Hitler, Churchill, or Roosevelt might have killed to get that chip." He predicts that computer power will increase to the point where computers, like electricity, paper, and water, "disappear into the fabric of our lives, and computer chips will be planted in the walls of buildings." He also predicts that glasses and contact lenses will be connected to the internet, using similar technology to virtual retinal displays. Cars will become driverless due to the power of the GPS system. This prediction is supported by the results of the Urban Challenge. The Pentagon hopes to make 1⁄3 of the United States ground forces automated by 2015. Technology similar to BrainGate will eventually allow humans to control computers with tiny brain sensors, and "like a magician, move objects around with the power of our minds." === Future of AI: Rise of the Machines === Kaku discusses robotic body parts, modular robots, unemployment caused by robots, surrogates and avatars (like their respective movies), and reverse engineering the brain. Kaku goes over the three laws of robotics and their contradictions. He endorses a "chip in robot brains to automatically shut them off if they have murderous thoughts", and believes that the most likely scenario is one in which robots are free to wreak havoc and destruction, but are designed to desire benevolence. === Future of Medicine: Perfection and Beyond === Kaku believes that in the future, reprogramming one's genes can be done by using a specially programmed virus, which can activate genes that slow the aging process. Nanotech sensors in a room will check for various diseases and cancer, nanobots will be able to inject drugs into individual cells when diseases are found, and advancements in extracting stem cells will be manifest in the art of growing new organs. 
The idea of resurrecting an extinct species might now be biologically possible. === Nanotechnology: Everything from Nothing? === Kaku discusses programmable matter, quantum computers, carbon nanotubes, and the possibility of replicators. He also expects a variety of nanodevices that search out and destroy cancer cells cleanly, leaving normal cells intact. === Future of Energy: Energy from the Stars === Kaku discusses the depletion of the planet's oil by pointing to the Hubbert curve, and the rising problem of immigrants who wish to live the American dream of wasteful energy consumption. He predicts that hydrogen and solar energy will be the future, noting how Henry Ford and Thomas Edison bet on whether oil or electricity would dominate, describes fusion with lasers or magnetic fields, and dismisses cold fusion as "a dead end". Kaku suggests that nations are reluctant to deal with global warming because the extravagance of oil, being the cheapest source of energy, encourages economic growth. Kaku believes that in the far future, room-temperature superconductors will usher in the era of magnet-powered floating cars and trains. === Future of Space Travel: To the Stars === Unlike conventional chemical rockets, which rely on Newton's third law of motion, solar sails take advantage of radiation pressure from stars. Kaku believes that after sending a gigantic solar sail into orbit, one could install lasers on the moon, which would hit the sail and give it extra momentum. Another alternative is to send thousands of nanoships, of which only a few would reach their destination. "Once arriving on a nearby moon, they could create a factory to make unlimited copies of themselves," says Kaku. Nanoships would require very little fuel to accelerate. They could visit the stellar neighborhood by floating on the magnetic fields of other planets. === Future of Wealth: Winners and Losers === Kaku discusses how Moore's law and robotics will affect the future of capitalism, which nations will survive and grow, and how the United States is relying on a "brain drain" of immigrants to fuel its economy. === Future of Humanity: Planetary Civilization === Kaku ranks the civilization of the future, with classifications based on energy consumption, entropy, and information processing. Kaku states that, with average economic growth, humanity may attain planetary civilization status in 100 years: "unless there is a natural catastrophe or some calamitous act of folly, it is inevitable that we will enter this phase of our collective history". == Reception == The book got mixed reviews, and was generally regarded as having interesting insights that were delivered in an over-optimistic but bland style. Kirkus Reviews stated "The author's scientific expertise will engage readers too sophisticated for predictions based on psychic powers or astrology." Reviewers at Library Journal have stated, "This work is highly recommended for fans of Kaku's previous books and for readers interested in science and robotics." The Wall Street Journal considers it a "largely optimistic view of the future". The Telegraph complained that "[Physics of the Future] is partisan about technology in a way that smacks of Gerard K. O'Neill's deliriously technocratic vision of space exploration, The High Frontier." The Guardian stated "despite the relentless technological optimism, Kaku does conjure up a genuinely exciting panorama of revolutionary science and magical technology". 
The New York Times complained the book's style is often "dull and charmless", but acknowledged it had the ability to "enthrall and frighten" as well. Writing in Physics Today, physicist Neil Gershenfeld said that the book has “an appealing premise” but describes “a kind of future by committee” populated by “science-fiction staples”. Gershenfeld said, “Such a forecast could have been accomplished with less effort by collating covers from popular science magazines.” Gershenfeld criticizes Kaku for “some surprising physics errors”, such as ignoring air friction on maglev vehicles. Kaku is praised for raising “profound questions”, such as the effect of affluence in the future, or the decoupling of sensory experience from reality. However, Gershenfeld laments that these questions are asked in the margins and not given a deep treatment. “It would have been more relevant to learn the author’s perspective on these questions than to find out where and to whom he’s presented lectures,” Gershenfeld said. The Economist is skeptical about prediction in general pointing out that unforeseen "unknown unknowns" led to many disruptive technologies over the century just past. == References == == External links == Michio Kaku's official website Michio Kaku (2012). Physics of the Future: The Inventions that Will Transform Our Lives. Penguin. ISBN 978-0-14-104424-8.
Wikipedia/Physics_of_the_Future
The Cité des Sciences et de l'Industrie (French pronunciation: [site de sjɑ̃s e də lɛ̃dystʁi], "City of Science and Industry", abbreviated la CSI) or simply CSI is one of the largest science museums in Europe. Located in the Parc de la Villette in Paris, France, it is one of the three dozen French Cultural Centers of Science, Technology and Industry (CCSTI), promoting science and science culture. About five million people visit the Cité each year. Attractions include a planetarium, a submarine (the Argonaute), an IMAX theatre (La Géode) and special areas for children and teenagers. The CSI is classified as a public establishment of an industrial and commercial character, an establishment specialising in the fostering of scientific and technical culture. Created on the initiative of President Giscard d'Estaing, the Cité aims to spread scientific and technical knowledge among the public, particularly among young people, and to promote public interest in science, research and industry. The most notable features of the "bioclimatic facade" facing the park are Les Serres – three greenhouse spaces each 32 metres high, 32 metres wide and 8 metres deep. The facades of Les Serres were the first structural glass walls to be constructed without framing or supporting fins. Between 30 May and 1 June 2008, the museum hosted the 3rd International Salon for Peace Initiatives. In 2009, the Cité des Sciences and the Palais de la Découverte were brought together in a common establishment, named Universcience, with EPIC status. == Features == Explore (levels 1, 2, and 3) The library of science and industry (Médiathèque, level −1) City of children (level 0) Auditorium and things (level 0) Louis Lumière theatre (level 0) Planetarium (located between exhibits on level 2) Numeric crossroads (level −1) City of careers (level −1) City of health (level −1) Meeting place (level −1) Aquarium (level −2) Jean Bertin hall (level 0) Condorcet hall (level 0) Picnic area (level 0) Post office (level 0) Store for scientific books and toys (level 0) Restaurants (level −2) Argonaute museum ship == History == The building is constructed around the vast steel trusses of an abattoir sales hall on which construction had halted in 1973. The transformation, commissioned on 15 September 1980, was designed by the architect Adrien Fainsilber and the engineering firm Rice Francis Ritchie (RFR Engineers). It was opened on 13 March 1986, inaugurated by François Mitterrand on the occasion of the encounter of the Giotto space probe with Halley's Comet. == Floor directory == == Access == It is accessible by Métro Line 7 at the Porte de la Villette station and by bus lines 60, 71, 75, 139, 150, 151, 152 and 170. The tramway T3b was opened in December 2012. == See also == Cité de la musique, City of Music La Géode, an IMAX domed theatre List of museums in Paris Le Zénith, a concert arena in Parc de la Villette Parc de la Villette List of tourist attractions in Paris == References == == External links == Official website (in English) including light version 48 photos of the Cité
Wikipedia/Cité_des_Sciences_et_de_l'Industrie
MinutePhysics is an educational YouTube channel created by Henry Reich in 2011. The channel's videos use whiteboard animation to explain physics-related topics. Early videos on the channel were approximately one minute long. As of May 2025, the channel has over 5.85 million subscribers. == Background and video content == MinutePhysics was created by Henry Reich in 2011. Reich attended Grinnell College, where he studied mathematics and physics. He then attended the Perimeter Institute for Theoretical Physics, where he earned his Master's degree in theoretical physics from the institute's Perimeter Scholars International program. The video content on MinutePhysics deals with concepts in physics. Examples of videos Reich has uploaded onto the channel include one dealing with the concept of "touch" in relation to electromagnetism. Another deals with the concept of dark matter. The most viewed MinutePhysics video, with more than 20 million views, discusses whether it is better to walk or to run when trying to avoid rain. Reich has also uploaded a series of three videos explaining the Higgs boson. In March 2020, Reich produced a video explaining how exponential growth can be projected from statistics while data is still being collected, using the evolving record of COVID-19 data as an example. === Collaborations === MinutePhysics has collaborated with Vsauce, as well as the director of the Perimeter Institute for Theoretical Physics, Neil Turok, and Destin Sandlin (Smarter Every Day). MinutePhysics has also made two videos that were narrated by Neil deGrasse Tyson and one video narrated by Tom Scott. The channel also collaborated with physicist Sean M. Carroll in a five-part video series on time and entropy and with Grant Sanderson on a video about a lost lecture of physicist Richard Feynman, as well as a video about Bell's theorem. In 2015, Reich collaborated with Randall Munroe on a video titled "How To Go To Space", which was animated in a style similar to that of Munroe's webcomic xkcd. Google tapped Reich for its 2017 "Be Internet Awesome" campaign, a video series aimed at creating a safer Internet space for children. === Related channels === In October 2011, Reich, along with his father Peter and brother Alex, started MinuteEarth. The channel features a similar style to MinutePhysics videos, with a focus on the Earth sciences, medicine, and general health. MinuteEarth's team has since expanded to additional members. In March 2022, MinuteFood was launched by MinuteEarth staffers Kate Yoshida and Arcadi Garcia. Its videos focus on food science. == Production and release == Neptune Studios is the parent company of Reich's channels. MinutePhysics videos can be viewed through YouTube EDU. Videos from the channel published prior to April 2016 are also made available to download as a podcast. Some of Reich's videos receive sponsorship from organizations. For example, a 2017 MinutePhysics video describing the characteristics of neutrino oscillation was sponsored by the Heising-Simons Foundation. MinutePhysics was one of the original founders of the Standard creator community, along with Dave Wiskus, CGP Grey, Philipp Dettmer and many other creators. Through Standard, MinutePhysics has released most of its content on Standard's Nebula streaming service, largely the same videos Reich posts on YouTube but free of ads and sponsorships; he also releases some content exclusively on the platform, including two Nebula Originals, MinuteBody and The Illegal Alien. 
== Reception == Reich's channels have amassed a considerable following online. By 2015, the National Center for Science Education (NCSE) described MinutePhysics and MinuteEarth as "definitely well known and well received" among an audience of science communicators. His 2014 "Evolution vs Natural Selection" video on the MinutePhysics channel received criticism from the NCSE. Writing for the NCSE, Stephanie Keep took issue with the video's content, stating "not all evolution occurs by natural selection. To think it does lends itself to a hyper-adaptive view of life." == References == == External links == Reich, Henry. "Making Minute Physics". Sixty Symbols. Brady Haran for the University of Nottingham.
Wikipedia/MinutePhysics
A science museum is a museum devoted primarily to science. Older science museums tended to concentrate on static displays of objects related to natural history, paleontology, geology, industry and industrial machinery, etc. Modern trends in museology have broadened the range of subject matter and introduced many interactive exhibits. Modern science museums, increasingly referred to as 'science centres' or 'discovery centres', also feature technology. While the mission statements of science centres and modern museums may vary, they are commonly places that make science accessible and encourage the excitement of discovery. == History == As early as the Renaissance period, aristocrats collected curiosities for display. Universities, and in particular medical schools, also maintained study collections of specimens for their students. Scientists and collectors displayed their finds in private cabinets of curiosities. Such collections were the predecessors of modern natural history museums. In 1683, the first purpose-built museum covering natural philosophy, the original Ashmolean museum (now called the Museum of the History of Science) in Oxford, England, was opened, although its scope was mixed. This was followed in 1752 by the first dedicated science museum, the Museo de Ciencias Naturales, in Madrid, which almost did not survive Francoist Spain. Today, the museum works closely with the Spanish National Research Council (Consejo Superior de Investigaciones Científicas). The Utrecht University Museum, established in 1836, and the Netherlands' foremost research museum, displays an extensive collection of 18th-century animal and human "rarities" in its original setting. More science museums developed during the Industrial Revolution, when great national exhibitions showcased the triumphs of both science and industry. An example is the Great Exhibition in 1851 at The Crystal Palace, London, England, surplus items from which contributed to the Science Museum, London, founded in 1857. In the United States of America, various natural history Societies established collections in the early 19th century. These later evolved into museums. A notable example is the New England Museum of Natural History (now the Museum of Science) which opened in Boston in 1864. Another was the Academy of Science, St. Louis, founded in 1856, the first scientific organisation west of the Mississippi. (Although the organisation managed scientific collections for several decades, a formal museum was not created until the mid-20th century.) == Modern science museums == The modern interactive science museum appears to have been pioneered by Munich's Deutsches Museum (German Museum of Masterpieces of Science and Technology) in the early 20th century. This museum had moving exhibits where visitors were encouraged to push buttons and work levers. The concept was taken to the United States by Julius Rosenwald, chairman of Sears, Roebuck and Company, who visited the Deutsches Museum with his young son in 1911. He was so captivated by the experience that he decided to build a similar museum in his home town. The Ampère Museum, close to Lyon, was created in 1931 and is the first interactive scientific museum in France. Chicago's Museum of Science and Industry opened in phases between 1933 and 1940. 
In 1959, the Museum of Science and Natural History (now the Saint Louis Science Center) was formally created by the Academy of Science of Saint Louis, featuring many interactive science and history exhibits. In August 1969, Frank Oppenheimer dedicated his new Exploratorium in San Francisco almost completely to interactive science exhibits, and built on the experience by publishing 'Cookbooks' that explain how to construct versions of the Exploratorium's exhibits. The Ontario Science Centre, which opened in September 1969, continued the trend of featuring interactive exhibits rather than static displays. In 1973, the first Omnimax cinema opened at the Reuben H. Fleet Space Theater and Science Center in San Diego's Balboa Park. The tilted-dome Space Theater doubled as a planetarium. The Science Centre was an Exploratorium-style museum included as a small part of the complex. This combination of interactive science museum, planetarium and Omnimax theater pioneered a configuration that many major science museums now follow. Also in 1973, the Association of Science-Technology Centers (ASTC) was founded as an international organisation to provide a collective voice, professional support, and programming opportunities for science centres, museums and related institutions. The massive Cité des Sciences et de l'Industrie (City of Science and Industry) opened in Paris in 1986, and national centres soon followed in Denmark (Experimentarium), Sweden (Tom Tits Experiment), Finland (Heureka), and Spain (Museu de les Ciencies Principe Felipe). In the United Kingdom, the first interactive centres also opened in 1986 on a modest scale, with further developments more than a decade later, funded by the National Lottery for projects to celebrate the Millennium. Since the 1990s, science museums and centres have been created or greatly expanded in Asia. Examples are Thailand's National Science Museum and Japan's Minato Science Museum. == Science centres == Museums that brand themselves as science centres emphasise a hands-on approach, featuring interactive exhibits that encourage visitors to experiment and explore. Recently, there has been a push for science museums to be more involved in science communication and in educating the public about the scientific process. Microbiologist and science communicator Natalia Pasternak Taschner stated, "I believe that science museums can promote critical thinking, especially in teenagers and young adults, by teaching them about the scientific method and the process of science, and how by using this to develop knowledge and technology, we can be less wrong." Urania was a science centre founded in Berlin in 1888. Most of its exhibits were destroyed during World War II, as were those of a range of German technical museums. The Academy of Science of Saint Louis (founded in 1856) created the Saint Louis Museum of Science and Natural History in 1959 (Saint Louis Science Center), but generally science centres are a product of the 1960s and later. In the United Kingdom, many were founded as Millennium projects, with funding from the National Lotteries Fund. The first 'science centre' in the United States was the Science Center of Pinellas County, founded in 1959. The Pacific Science Center (one of the first large organisations to call itself a 'science centre' rather than a museum) opened in a Seattle World's Fair building in 1962. In 1969, Oppenheimer's Exploratorium opened in San Francisco, California, and the Ontario Science Centre opened near Toronto, Ontario, Canada. 
By the early 1970s, COSI Columbus, then known as the Center of Science and Industry in Columbus, Ohio, had run its first 'camp-in'. In 1983, the Smithsonian Institution invited visitors to the Discovery Room in the newly opened National Museum of Natural History Museum Support Center in Suitland, Maryland, where they could touch and handle formerly off-limits specimens. The new-style museums banded together for mutual support. In 1971, 16 museum directors gathered to discuss the possibility of starting a new association; one more specifically tailored to their needs than the existing American Association of Museums (now the American Alliance of Museums). As a result of this, the Association of Science-Technology Centers was formally established in 1973, headquartered in Washington DC, but with an international organisational membership. The corresponding European organisation is Ecsite, and in the United Kingdom, the Association of Science and Discovery Centres represents the interests of over 60 major science engagement organisations. The Asia Pacific Network of Science and Technology Centres (ASPAC) is an association initiated in 1997 with over 50 members from 20 countries across Asia and Australia (2022). Their regional sister organisations are the Network for the Popularization of Science and Technology in Latin America and The Caribbean (RedPOP), the North Africa and Middle East science centres (NAMES), and the Southern African Association of Science and Technology Centres (SAASTEC). In India, the National Council of Science Museums runs science centres at several places including Delhi, Bhopal, Nagpur and Ranchi. There are also a number of private Science Centres, including the Birla Science Museum and The Science Garage in Hyderabad. == See also == List of science museums Science education Science festival Science outreach Physics Outreach List of natural history museums == References == == General references == Kaushik, R.,1996, "Effectiveness of Indian science centres as learning environments : a study of educational objectives in the design of museum experiences", Unpublished PhD thesis, University of Leicester, UK Kaushik, R.,1996, "Non-science-adult-visitors in science centres: what is there for them to do?", Museological Review, Vol. 2, No. 1, p. 72–84. Kaushik, R.,1996, "Health matters in science museums: a review" in Pearce, S. (ed.) New Research in Museum Studies, Vol. 6, Athlone Press, London/Atlantic Highlands, p. 186–193. Kaushik, R.,1997, "Attitude development in science museums/centres", in Proceedings of the Nova Scotian Institute of Science, Vol. 40, No. 2, p. 1–12. == Further reading == Holland, William Jacob (1911). "Museums of Science" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 19 (11th ed.). Cambridge University Press. pp. 64–69. == External links ==
Wikipedia/Science_museums
The World Science Festival is an annual science festival hosted by the World Science Foundation, a 501(c)(3) nonprofit organization based in New York City. There is also an Asia-Pacific event held in Brisbane, Australia. The foundation's goal is to create a broad public that is "informed about science, inspired by its wonder, convinced of its value, and willing to consider its impact on the future." == History == The festival was founded and established by Brian Greene, professor of mathematics and physics at Columbia University and author of several science books (including The Elegant Universe and The Hidden Reality); and Tracy Day, a four-time National News Emmy Award-winning journalist, who has produced live and documentary programs for the nation's most prominent television news departments. Greene is now chairman of the World Science Foundation and Day is an executive of the World Science Festival. The festival's events are rooted in science, but also meet the production standards of professional television and live theater events. Founding benefactors include the Simons Foundation, the Alfred P. Sloan Foundation, and the John Templeton Foundation. == Board of directors == == Inaugural festival == The inaugural festival took place from May 28 to June 1, 2008, at 22 venues in New York City. The festival, described by The New York Times as a "new cultural institution," included 46 events, a street festival, and, on the first day, the day-long World Science Summit at Columbia University. Among the more than 150 participants, speakers and artists were 11 Nobel Prize laureates. Venues included the American Museum of Natural History, Abyssinian Baptist Church, and New York University's Skirball Center for the Performing Arts at Gould Plaza. The total audience was more than 120,000. == World Science Festival venues == === New York City === Over the past 10 years, the festival has attracted more than two million visitors, with millions more viewing the programs online. Programming includes discussions, debates, plays, interactive explorations, musical performances, intimate salons and large outdoor events in parks, museums, galleries and performing arts venues throughout New York City. A complete list of programs is available on the festival's official website, launched in 2004. === Brisbane === Since 2016, another event has been held each year in Brisbane, Australia. It is organized by Queensland Museum Network, which holds the exclusive license for the festival in the Asia Pacific region from 2016 to 2021. == Past participants == Past participants have included numerous Nobel laureates; the full list of participants can be found on the festival's official website. == Education == The World Science Festival maintains educational programs for students and adults around the world in many scientific disciplines. === World Science Scholars === The prestigious World Science Scholars program allows "high school students with exceptional mathematical talent to be mentored by world-renowned scientists and connect both online and in person with an elite group of peers." Scholars work together on projects, internships, exercises and discussions on topics such as particle physics, computational thinking, astrobiology, and string theory. The free two-year program is funded by the John Templeton Foundation. 
Scientists who have taught and mentored in the World Science Scholars program include Brian Greene, Mandë Holford, Miguel Nicolelis, Stephen Wolfram, Cumrun Vafa, and Suzana Herculano-Houzel. === World Science U === World Science U offers everyone from high school students to adults the opportunity to explore science topics with researchers and educators. == See also == List of festivals in the United States == References == == External links == worldsciencefestival.com, the festival's official website
Wikipedia/World_Science_Festival
Physics of the Impossible: A Scientific Exploration Into the World of Phasers, Force Fields, Teleportation, and Time Travel is a book by theoretical physicist Michio Kaku. Kaku uses discussion of speculative technologies to introduce topics of fundamental physics to the reader. The topic of invisibility becomes a discussion of why the speed of light is slower in water than in a vacuum, of how electromagnetism is similar to ripples in a pond, and of newly developed composite materials. The topic of Star Trek phasers becomes a lesson on how lasers work and how laser-based research is conducted. The cover of his book depicts a TARDIS, a device used in the British science fiction television show Doctor Who to travel in space and time, in its disguise as a police box, continuously passing through a time loop. With each discussion of science fiction technology topics, he also "explains the hurdles to realizing these science fiction concepts as reality". == Concept == According to Kaku, technological advances that we take for granted today were declared impossible 150 years ago. William Thomson, Lord Kelvin (1824–1907), a mathematical physicist and creator of the Kelvin scale, said publicly that "heavier than air" flying machines were impossible: "He thought X-rays were a hoax, and that radio had no future." Likewise, Ernest Rutherford (1871–1937), a physicist who experimentally described the atom, thought the atom bomb was impossible and compared it to moonshine (a crazy or foolish idea). Televisions, computers, and the Internet would seem incredibly fantastic to the people of the turn of the 20th century. Black holes were considered science fiction, and even Albert Einstein argued that black holes could not exist. 19th century science had determined that it was impossible for the earth to be billions of years old. Even in the 1920s and 1930s, Robert Goddard was scoffed at because it was believed that rockets would never be able to go into space. Such advances were considered impossible because the basic laws of physics and science were not understood as well as they would subsequently be. Kaku writes: "As a physicist I learned that the impossible is often a relative term." By this definition of "impossible", he poses the question "Is it not plausible to think that we might someday build space ships that can travel distances of light years, or think that we might teleport ourselves from one place to the other?" == Types of impossibilities == Each chapter is named after a possible, or improbable, technology of the future. After a look at the development of today's technology, there is discussion of how this advanced technology might become a reality. Chapters become somewhat more general toward the end of the book. Some of our present-day technologies are explained, and then extrapolated into futuristic applications. In the future, current technologies are still recognizable, but in a slightly altered form. For example, when discussing force fields of the future, Dr. Kaku writes about cutting-edge laser technology and newly developed plasma windows. These are two of several technologies that he sees as required for creating a force field. To create a force field, these would be combined in slightly altered forms, such as being made more precise or more powerful. 
Furthermore, this discussion on force fields, as well as on the pantheon of highly advanced technologies, remains as true to the original concepts (as in how the public generally imagines advanced technologies) as possible, while remaining practical. Kaku concludes his book with a short epilogue detailing the newest frontiers in physics and how there is still much more to be learned about physics and our universe. Kaku writes that since scientists understand the basic laws of physics today they can imagine a basic outline of future technologies that might work: "Physicists today understand the basic laws [of physics] extending over a staggering forty three orders of magnitude, from the interior of the proton out to the expanding universe." He goes on to say that physicists can discern between future technologies that are merely improbable and those technologies that are truly impossible. He uses a system of Class I, Class II, and Class III to classify these science-fictional future technologies that are believed to be impossible today. === Class I === Class I Impossibilities are "technologies that are impossible today, but that do not violate the known laws of physics". Kaku speculates that these technologies may become available in some limited form in a century or two. Shields up! One of the commands used by Captain Kirk in the TV series Star Trek. Force fields are vital for surviving any battle in the fictional world, but what exactly are force fields? In science fiction force fields are very straight forward, but to make a repulsive force does appear impossible to make in a lab. Gravity appears in the four force list in Kaku's book. Gravity acts as the exact opposite of a force field, but has many similar properties. The whole planet keeps us standing on the ground and we cannot counter the force by jumping. A future technology that may be seen within a lifetime is a new advanced stealth technology. This is a Class I impossibility. In 2006, Duke University and Imperial College bent microwaves around an object so that it would appear invisible in the microwave range. The object is like a boulder in a stream. Downstream the water has converged in such a way that there is no evidence of a boulder upstream. Likewise, the microwaves converge in such a way that, to an observer from any direction, there is no evidence of an object. In 2008, two groups, one at Caltech and the other at Karlsruhe Institute of Technology, bent red light and blue-green of the visible spectrum. This made the object appear invisible in the red and blue green light range at the microscopic level. Teleportation is a class I impossibility, in that it does not violate the laws of physics, and could possibly exist on the time scale of a century. In 1988, researchers first teleported information at the quantum level. As of 2008 information can be teleported from Atom A to Atom B, for example. But this is nothing like beaming Captain Kirk down to a planet and back. In order to do that, a person would have to be dissolved atom by atom then rematerialized at the other end. On the scale of a decade, it will probably be possible to teleport the first molecule, and maybe even a virus. === Class II === Class II Impossibilities are “technologies that sit at the very edge of our understanding of the physical world", possibly taking thousands or millions of years to become available. Such a technology is time travel. Einstein’s equations do show that time travel is possible. 
This would not be developed for a time scale of centuries or even millennia from now. This would make it a Class II impossibility. The two major physical hurdles are energy and stability. Traveling through time would require the entire energy of a star or black hole. Questions of stability are: will the radiation from such a journey kill you and will the “aperture” remain open so you can get back? According to Dr. Kaku in an interview, “the serious study of the impossible has frequently opened up rich and unexpected domains of science”. === Class III === Class III Impossibilities are “technologies that violate the known laws of physics". Kaku covers two of these, perpetual motion machines and precognition. Development of these technologies would represent a fundamental shift in human understanding of physics. == Reception == Bryan Appleyard considers this book a demonstration of renewed confidence in the possibilities of physics. He also sees the book as a depiction of how the public believes in an especially optimistic view of the future: "Kaku, when on home territory, is an effective and gifted dramatiser of highly complex ideas. If you want to know what the implications would be of room-temperature superconductors, or all about tachyons, particles that travel faster than the speed of light and pass through all points of the universe simultaneously, then this is the place to find out." To Appleyard, the book's use of sci-fi technology to open the door to real science was interesting and had the added effect of making discoveries that might otherwise end up being obscure as giving us a feeling of being closer to that optimistic future. When bending microwaves around an object, rather than an obscure physics experiment, it creates a feeling that a Star Trek cloaking device is just around the corner. An equally obscure subatomic experiment means that soon we will be saying, "Beam me up Scotty". In this regard he book helps to “sustain our sense of an increasing acceleration into a future that must be radically different from the present". According to Appleyard this radically different and better future "... is what lies at the core of this type of book. The future, conceived as some realm in which contemporary problems have been resolved, is the primary, though usually unacknowledged, faith" that people have always had." == See also == The Physics of Star Trek, a similar book by the physicist Lawrence M. Krauss Metamaterial List of physicists Quantum teleportation Interstellar travel Superstring theory Supersymmetry List of theoretical physicists == References == M. Kaku (2008). Physics of the Impossible: A Scientific Exploration Into the World of Phasers, Force Fields, Teleportation, and Time Travel. Doubleday. ISBN 978-0-385-52069-0. == External links == Review in The Independent Review in NewScientist.com 2057: Time Travel with Dr. Kaku Sci-Fi Science : Physics of the Impossible Series with Dr. Kaku
Wikipedia/Physics_of_the_Impossible
The Tao of Physics: An Exploration of the Parallels Between Modern Physics and Eastern Mysticism is a 1975 book by physicist Fritjof Capra. A bestseller in the United States, it has been translated into 23 languages. Capra summarized his motivation for writing the book: "Science does not need mysticism and mysticism does not need science. But man needs both." == Origin == According to the preface of the first edition, reprinted in subsequent editions, Capra struggled to reconcile theoretical physics and Eastern mysticism and was at first "helped on my way by 'power plants'" or psychedelics, with the first experience "so overwhelming that I burst into tears, at the same time, not unlike Castaneda, pouring out my impressions to a piece of paper". (p. 12, 4th ed.) Capra later discussed his ideas with Werner Heisenberg in 1972, as he mentioned in the following interview excerpt: I had several discussions with Heisenberg. I lived in England then [circa 1972], and I visited him several times in Munich and showed him the whole manuscript chapter by chapter. He was very interested and very open, and he told me something that I think is not known publicly because he never published it. He said that he was well aware of these parallels. While he was working on quantum theory he went to India to lecture and was a guest of Tagore. He talked a lot with Tagore about Indian philosophy. Heisenberg told me that these talks had helped him a lot with his work in physics, because they showed him that all these new ideas in quantum physics were in fact not all that crazy. He realized there was, in fact, a whole culture that subscribed to very similar ideas. Heisenberg said that this was a great help for him. Niels Bohr had a similar experience when he went to China. Bohr adopted the yin yang symbol as part of his coat of arms when he was knighted in 1947; it is claimed in the book that this was a result of orientalist influences. The Tao of Physics was followed by other books of the same genre, such as The Hidden Connection, The Turning Point and The Web of Life, in which Capra extended the argument about how Eastern mysticism and today's scientific findings relate, and how Eastern mysticism might also provide the linguistic and philosophical tools required to take on some of the biggest remaining scientific challenges. == Afterword to the third edition == In the afterword to the third edition (published in 1982, pp. 360–368 of the 1991 edition), Capra offers six suggestions for a new paradigm in science. Consider the part and the whole as more symmetrically conditioning one another. Replace thinking in terms of structure with thinking in terms of process. Replace 'objective science' with 'epistemic science', where the approach to deciding what counts as knowledge adapts to the subject studied. Replace the idea of knowledge as buildings based on foundations with an idea of knowledge as networks. Replace the quest for truth with a quest for better approximations. Replace the idea of domination of nature with one of cooperation and nonviolence. Capra reconnects this new paradigm to the theories of living and self-organizing systems that have emerged from cybernetics. Here he quotes Ilya Prigogine, Gregory Bateson, Humberto Maturana and Francisco Varela (p. 372 of the 1991 edition). 
== Acclaim and criticism == According to Capra, Werner Heisenberg was in agreement with the main idea of the book:I showed the manuscript to him chapter by chapter, briefly summarizing the content of each chapter and emphasizing especially the topics related to his own work. Heisenberg was most interested in the entire manuscript and very open to hearing my ideas. I told him that I saw two basic themes running through all the theories of modern physics, which were also the two basic themes of all mystical traditions-the fundamental interrelatedness and interdependence of all phenomena and the intrinsically dynamic nature of reality. Heisenberg agreed with me as far as physics was concerned and he also told me that he was well aware of the emphasis on interconnectedness in Eastern thought. However, he had been unaware of the dynamic aspect of the Eastern world view and was intrigued when I showed him with numerous examples from my manuscript that the principal Sanskrit terms used in Hindu and Buddhist philosophy-brahman, rta, lila, karma, samsara, etc.-had dynamic connotations. At the end of my rather long presentation of the manuscript Heisenberg said simply: "Basically, I am in complete agreement with you."The book was a best-seller in the United States. It received a positive review from New York magazine: A brilliant best-seller.... Lucidly analyzes the tenets of Hinduism, Buddhism, and Taoism to show their striking parallels with the latest discoveries in cyclotrons. Victor N. Mansfield, a professor of physics and astronomy at Colgate University who wrote many papers and books of his own connecting physics to Buddhism and also to Jungian psychology, complimented The Tao of Physics in Physics Today: "Fritjof Capra, in The Tao of Physics, seeks ... an integration of the mathematical world view of modern physics and the mystical visions of Buddha and Krishna. Where others have failed miserably in trying to unite these seemingly different world views, Capra, a high-energy theorist, has succeeded admirably. I strongly recommend the book to both layman and scientist." However, it is not without its critics. Jeremy Bernstein, a professor of physics at the Stevens Institute of Technology, chastised The Tao of Physics: At the heart of the matter is Mr. Capra's methodology – his use of what seem to me to be accidental similarities of language as if these were somehow evidence of deeply rooted connections. Thus I agree with Capra when he writes, "Science does not need mysticism and mysticism does not need science but man needs both." What no one needs, in my opinion, is this superficial and profoundly misleading book. Leon M. Lederman, a Nobel Prize-winning physicist and current Director Emeritus of Fermilab, criticized both The Tao of Physics and Gary Zukav's The Dancing Wu Li Masters in his 1993 book The God Particle: If the Universe Is the Answer, What Is the Question? Starting with reasonable descriptions of quantum physics, he constructs elaborate extensions, totally bereft of the understanding of how carefully experiment and theory are woven together and how much blood, sweat, and tears go into each painful advance. Philosopher of science Eric Scerri criticizes both Capra and Zukav and similar books. 
Peter Woit, a mathematical physicist at Columbia University, criticized Capra for continuing to build his case for physics-mysticism parallels on the bootstrap model of strong-force interactions set out at the end of the book, long after the Standard Model had become thoroughly accepted by physicists as a better model: The Tao of Physics was completed in December 1974, and the implications of the November Revolution one month earlier that led to the dramatic confirmations of the standard-model quantum field theory clearly had not sunk in for Capra (like many others at that time). What is harder to understand is that the book has now gone through several editions, and in each of them Capra has left intact the now out-of-date physics, including new forewords and afterwords that with a straight face deny what has happened. The foreword to the second edition of 1983 claims, "It has been very gratifying for me that none of these recent developments has invalidated anything I wrote seven years ago. In fact, most of them were anticipated in the original edition," a statement far from any relation to the reality that in 1983 the standard model was nearly universally accepted in the physics community, and the bootstrap theory was a dead idea ... Even now, Capra's book, with its nutty denials of what has happened in particle theory, can be found selling well at every major bookstore. It has been joined by some other books on the same topic, most notably Gary Zukav's The Dancing Wu-Li Masters. The bootstrap philosophy, despite its complete failure as a physical theory, lives on as part of an embarrassing New Age cult, with its followers refusing to acknowledge what has happened. In a 2019 commemoration in honour of physicist Geoffrey Chew, one of bootstrap's "fathers", Capra replied to criticisms such as Woit's: However, the standard model does not include gravity, and hence fails to integrate all known particles and forces into a single mathematical framework. The currently most popular candidate for such a framework is string theory, which pictures all particles as different vibrations of mathematical "strings" in an abstract 9-dimensional space. The mathematical elegance of string theory is compelling, but the theory has serious deficiencies. If these difficulties persist, and if a theory of "quantum gravity" continues to remain elusive, the bootstrap idea may well be revived someday, in some mathematical formulation or other. == Editions == The Tao of Physics, Fritjof Capra, Shambhala Publications, 1975 Shambhala, 2nd edition 1983: ISBN 0-394-71612-4 Bantam reprint 1985: ISBN 0-553-26379-X Shambhala, 3rd edition 1991: ISBN 0-87773-594-8 Shambhala, 4th edition 2000: ISBN 1-57062-519-0 Shambhala, 5th edition 2010: ISBN 978-1590308356 Audio Renaissance, 1990 audio cassette tape: ISBN 1-55927-089-6 Audio Renaissance, 2004 audio compact disc (abridged) ISBN 1-55927-999-0 == See also == Quantum mysticism Quantum Reality The Dancing Wu Li Masters The Turning Point War of the Worldviews == Notes == == References == The Holographic Paradigm and Other Paradoxes, edited by Ken Wilber, Boulder, Colorado: Shambhala, 1982, ISBN 0-394-71237-4 Woit, Peter (2006). Not Even Wrong – the Failure of String Theory and the Search for Unity in Physical Law. Basic Books. ISBN 0-465-09275-6. Siu, R. G. H., The Tao of Science: an Essay on Western Knowledge and Eastern Wisdom, Cambridge, Massachusetts: MIT Press, 1957, ISBN 978-0-262-69004-1 / LCCN 57--13460
Wikipedia/The_Tao_of_Physics
The Science & Entertainment Exchange is a program run and developed by the United States National Academy of Sciences (NAS) to increase public awareness, knowledge, and understanding of science and advanced science technology through its representation in television, film, and other media. It serves as a pro-science initiative whose main goal is to counter false public perceptions by portraying science and scientists as they truly are. The Exchange provides entertainment industry professionals with access to credible and knowledgeable scientists and engineers who help to encourage and create effective representations of science and scientists in the media, whether on television, in films, in plays, or elsewhere. The Exchange also helps the science community understand the needs and requirements of the entertainment industry, while making sure science is conveyed in a correct and positive manner to the target audience. Officially launched in November 2008, the Exchange can be thought of as a partnership between NAS and Hollywood, as it arranges direct consultations between scientists and entertainment professionals who develop science-themed content. This collaboration allows industry professionals to accurately portray the science that they wish to capture and include in their media productions. It also provides scientists and science organizations with the opportunity to communicate effectively with a large audience that may otherwise be hard to reach, such as through innovative physics outreach. It also provides a variety of other services, including scheduling briefings, brainstorming sessions, screenings, and salons. The Exchange is based in Los Angeles, California. == Examples == === Watchmen === In one of its first acts of business, The Science & Entertainment Exchange connected Alex McDowell, the production designer for the 2009 film adaptation of the graphic novel Watchmen, with University of Minnesota physics professor James Kakalios. Kakalios is the author of the book The Physics of Superheroes, and was selected as a science consultant in part because of his extensive experience incorporating comic book superheroes into his writings and lectures as a way to motivate the public to take an interest in science. In the run-up to the theatrical release of Watchmen, Kakalios and the University of Minnesota produced a short video (with more than 1,500,000 views as of April 29, 2009) explaining the science behind Dr. Manhattan's super powers, to increase public awareness of the science behind the film. === Fringe === The TV series Fringe is using the Exchange to identify scientists able to address technical questions regarding scripts in development. A rapid-response team of specialists in neuroscience, epidemiology, and genetics—themes frequently featured in the series—has been gathered to assist the scriptwriters. === Thor === Thor screenwriters connected through The Exchange with Kevin Hand from Caltech's Jet Propulsion Lab, who helped them turn the comic book's mythological worlds into believable cinematic scenery. In addition, the collaboration resulted in the film featuring Natalie Portman as Jane Foster, a female physicist. This helped counter the stereotype, held by many, that scientists are exclusively men. It also served to portray scientists in a positive, relatable manner, something many other media productions miss completely by making scientists out to be mean, evil, and cruel. 
This outcome illustrates a main goal of the Exchange: to use popular entertainment media to communicate more accurately with the public about scientists and about science in general. == Implications on public conceptions == George Gerbner's cultivation theory identifies television and film as a main source of information and storytelling in today's world, and Gerbner researched the potential effect of portrayals of science in the media on public perceptions of science and scientists. Gerbner's research used cultivation analysis to examine the response patterns of a group of 1,631 respondents, which included both light and heavy television viewers. They were presented with five propositions: science makes our way of life change too fast; makes our lives healthier, easier, and more comfortable; breaks down people's ideas of right and wrong; is more likely to cause problems than to find solutions; and the growth of science means that a few people can control our lives. The research estimated the percentage of positive responses to science for groups divided by sex and by education. The study suggested that "the exposure to science through television shows cultivate less favorable orientation towards science, especially in high status groups whose light-viewer members are its greatest supporters, and lower status groups have a generally low opinion of science." These observations can be understood through the concept of mainstreaming. Furthermore, entertainment media have often portrayed scientists as evil, mean, and cruel, notably including the Mad scientist stock character. Gerbner suggests that, with movies and television portraying a larger ratio of villainous scientists in major roles than of doctors or law enforcers, these media effects have cultivated a false perception of scientists, and of science in general. Although trust in science has always been high, the continued negative portrayal of scientists in movies and television can cause people's perception of scientists to be skewed. In addition, the more people watch television, the more they think that scientists are odd and peculiar, have few interests outside of work, and spend little time with their families. These negative stereotypes can have a detrimental impact on attitudes towards science, as heavy TV viewers are more likely to have little or no appreciation of the benefits of science. == Science as social context == In a study of the audience effects for the 2004 blockbuster The Day After Tomorrow, viewers of the film, after controlling for education, gender, age, and political views, were significantly more concerned about global climate change, more likely to take action to reduce greenhouse gas emissions, and more trusting of government agencies such as NASA and NOAA. As news agenda-setters, film and television can also have an important indirect influence. These films provide dramatic "news pegs" for journalists seeking to either sustain or generate new coverage of an issue. For example, studies comparing the news attention sparked by the 2001 release of the Third IPCC report on climate change with the amount of coverage generated by the 2004 release of The Day After Tomorrow and the 2006 release of Al Gore's An Inconvenient Truth found that both films far surpassed the IPCC report in media publicity. This illustrates the great power that the entertainment media industry has in communicating with a lay audience.
Science is a topic that many laypeople tend to disregard, showing little interest or concern in it. By incorporating it into a more mainstream media environment, such as television series and films rather than scientific journals and newspapers, the Exchange allows scientific information and news to be communicated in an accurate and clear manner to those who would otherwise ignore the subject. In another example, the 1998 releases of the blockbusters Deep Impact and Armageddon galvanized news attention to the potential problem of Near Earth Objects, a science policy issue that otherwise rarely, if ever, receives news attention. Scientific verisimilitude in movies and television has been positively correlated with commercial success, providing "realism" and "legitimacy" to which audiences respond. == Advisory board == == References == == External links == Official website
Wikipedia/Science_&_Entertainment_Exchange
In quantum chromodynamics, heavy quark effective theory (HQET) is an effective field theory describing the physics of heavy (that is, of mass far greater than the QCD scale) quarks. It is used in studying the properties of hadrons containing a single charm or bottom quark. The effective theory was formalised in 1990 by Howard Georgi, Estia Eichten and Christopher Hill, building upon the works of Nathan Isgur and Mark Wise, Voloshin and Shifman, and others. Quantum chromodynamics (QCD) is the theory of the strong force, through which quarks and gluons interact. HQET is the limit of QCD with the quark mass taken to infinity while its four-velocity is held fixed. This approximation enables a non-perturbative (in the strong interaction coupling) treatment of quarks that are much heavier than the QCD mass scale. This scale is of order 200 MeV. Hence the heavy quarks include the charm, bottom and top quarks, whereas the up, down and strange quarks are considered light. Since the top quark is extremely short-lived, only the charm and bottom quarks are of significant interest to HQET, and of these only the bottom quark has a mass sufficiently high that the effective theory can be applied without large corrections. == References == == Further reading == Shifman, M. A. (1999). "Lectures on Heavy Quarks in Quantum Chromodynamics". ITEP Lectures on Particle Physics and Field Theory. World Scientific Lecture Notes in Physics. Vol. 62. pp. 1–109. arXiv:hep-ph/9510377. doi:10.1142/9789812798961_0001. ISBN 978-981-02-3947-3. ISSN 1793-1436. S2CID 18892623. Sommer, Rainer (2015). "Non-perturbative Heavy Quark Effective Theory: Introduction and Status". Nuclear and Particle Physics Proceedings. 261–262: 338–367. arXiv:1501.03060. Bibcode:2015NPPP..261..338S. doi:10.1016/j.nuclphysbps.2015.03.022. ISSN 2405-6014. S2CID 53354994.
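The heavy-quark limit described in this article can be made concrete with a short sketch. The formulas below follow the standard textbook construction of HQET rather than any of the works cited here, and conventions (signs, normalization of the corrections) vary between authors. The heavy quark momentum is split as p = m_Q v + k, with the residual momentum k of order the QCD scale, and the heavy quark field is written in terms of a velocity-dependent field h_v:

\[
  Q(x) \;\approx\; e^{-i m_Q\, v\cdot x}\, h_v(x),
  \qquad
  \frac{1+\gamma_\mu v^\mu}{2}\, h_v = h_v ,
\]
\[
  \mathcal{L}_{\mathrm{HQET}}
  \;=\;
  \bar{h}_v\, (i\, v\cdot D)\, h_v
  \;+\;
  \mathcal{O}\!\left(\frac{1}{m_Q}\right),
\]

where D is the QCD covariant derivative. The heavy-quark mass drops out of the leading term, which is the origin of the heavy-quark spin and flavour symmetries exploited by Isgur and Wise; the 1/m_Q corrections contain the heavy-quark kinetic-energy and chromomagnetic operators.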
Wikipedia/Heavy_quark_effective_field_theory
Text and conversation is a theory in the field of organizational communication illustrating how communication makes up an organization. In the theory's simplest explanation, an organization is created and defined by communication. Communication "is" the organization and the organization exists because communication takes place. The theory is built on the notion that an organization is not a physical unit that merely holds communication. Text and conversation theory puts communication processes at the heart of organizational communication and postulates that an organization does not contain communication as a "causal influence" but is formed by the communication within it. This theory is not intended for direct application, but rather to explain how communication exists. The theory provides a framework for better understanding organizational communication. Since the foundation of organizations is in communication, an organization cannot exist without communication, and the organization is defined as the result of communications happening within its context. Communications begin with individuals within the organization discussing beliefs, goals, structures, plans and relationships. These communicators achieve this through constant development, delivery, and translation of "text and conversation". The theory proposes that the mechanisms of communication are "text and conversation". == Definitions == The foundation of this theory is the concepts of text and conversation. Text is defined as the content of interaction, or what is said in an interaction. Text is the meaning made available to individuals through a face-to-face or electronic mode of communication. Conversation is defined as what is happening behaviorally between two or more participants in the communication process. Conversation is the exchange or interaction itself. The process of the text and conversation exchange is reciprocal: text needs conversation and vice versa for the process of communication to occur. Text, or content, must have context to be effective, and a conversation, or discourse, needs to have a beginning, middle and end. Individuals create the beginning, middle and end by using punctuation, bracketing or framing. When conversation is coupled with text, or meaning, communication occurs. Taylor submits that this process is one of translation: the translation of text into conversation and the translation of conversation into text. "text" = content and meaning "conversation" = discourse and exchange == Theorist == James R. Taylor introduced text and conversation theory in 1996 with François Cooren, Giroux and Robichaud, and then further explored the theory in 1999. Taylor drew on sociologist and educator John Dewey's pragmatic view that society exists not "by" but "in" communication. Taylor followed the same principle, treating communication as the essence of an organization. He was born in 1928 and is Professor Emeritus at the Department of Communication of the Université de Montréal, which he founded in the early 1970s. Drawing from research in the fields of organizational psychology (Karl E. Weick), ethnomethodology (Harold Garfinkel, Deirdre Boden), phenomenology (Alfred Schütz) and collective minding (Edwin Hutchins), Taylor formed the original text and conversation theory. This line of thought has come to be known as "The Montreal School" of organizational communication, sometimes referred to as TMS, and has been acknowledged as an original theory by authors such as Haridimos Tsoukas, Linda Putnam, and Karl E. Weick.
Taylor said,"...organization emerges in communication, which thus furnishes not only the site of its appearance to its members, but also the surface on which members read the meaning of the organization to them." Taylor argues communication is the "site and emergence of organization." == Foundational theories == === Structuration theory === "Structuration theory" identifies h text and conversation theory evolved from this communication construct. Proposed by Anthony Giddens (1984) in ‘’The Constitution on Society,’’ structuration theory, originated in the discipline of sociology. Giddens’ theory has been adapted to the field of communication, particularly organizational communication; specifically, how and why structural changes are possible and the duality of formal and informal communication. This theory is based on concepts of structure and agency. structure is defined as rules and resources of an organization; agency is the free will to choose to do otherwise than prescribed through structure. "structure": is rules and resources, the reason we do things because of the structure of how we were raised (culture, sociological and physiological). Giddens (1984) explains these rules as recipes or procedures for accomplishing tasks within an organization. Resources have two subsets: allocative and authoritative, which can be leveraged to accomplish desired outcomes. Allocative are quantitative resources, while authoritative are qualitative. "agency": is the free will to choose to do otherwise. Agency is the reason people do things, because they have a choice This is the process individuals internalize actions and make choices, rather than making decisions because the structure says they should. Structure is based on the formal organization and accepted policy. Agency is informal communication and individually based. "Dualism": mutually exclusive answer (i.e., either/or) "Duality": mutually constitutive answer (i.e., both/and) "Structuration": society itself is located in a duality of structure in which the enactments of agency become structures that, across time, produce possibilities for agency enactment. Another way explain it is structure is the context. Structuration theory identifies structure and agency as coexisting. Formal rules and resources impact informal communication and discourse. This duality and coexistence ensures a cyclical nature between structure and agency, which has a cause and effect: new structure and agency is created from the causal relationships of previous structure and agency decisions. The concept to understanding structuration is to understand to duality of structure The similarity of Giddens’ theory and conversation and text theory is a mutual-existing and causal relationship of communication. The main difference, between the two, is structuration theory explains how communication impacts the organization, text and conversation, by means of structure and agency. Giddens' construct of structuration explains how mutually causal relationships constitute the essence of an organization. This concept illustrates how communication within an organization depends on the translation of meaning. === Conversation theory === "Conversation theory", proposed by Gordon Pask in the 1970s, identifies a framework to explain how scientific theory and interactions formulate the "construction of knowledge" Conversation Theory is based on the idea social systems are symbolic and language-oriented. 
Additionally, these systems are based on responses and interpretations, and on the meaning interpreted by individuals via communication. The theory is based on interaction between two or more individuals with unlike perspectives. The significance of having unlike perspectives is that it enables a distinctive standpoint: it permits the ability to study how people identify differences and understand meaning. Additionally, these differences create shared and consensual pockets of interactions and communications, as discussed in Structure-Organization-Process. Another idea of conversation theory is that learning happens through exchanges about issues, which assists in making knowledge explicit. In order for this to happen, Pask organized three levels of conversation: "Natural language": general discussion "Object languages": for discussing the subject matter "Metalanguages": for talking about learning/language Additionally, to facilitate learning, Pask proposed two types of learning strategies. "Serialists": progress through a structure in a sequential fashion "Holists": look for higher order relations Ultimately, Pask found that versatile learners favor neither approach over the other. Rather, they understand how both approaches are integrated into the structure of learning. The similarity between conversation theory and text and conversation theory is that both focus on the foundational aspects of meaning, specifically how and why meaning is established and interpreted amongst individuals. However, the difference between the two theories is that conversation theory focuses specifically on the dynamics between two people, whereas text and conversation theory is typically applied to two or more people. Conversation theory emphasizes the construction of knowledge and meaning and the cause-and-effect relationship that occurs as a result of self-learning from communication, based on meaning. == Factors == === Meaning === "Meaning management" is the control of "context" and "message" to accomplish a desired communication effect. According to Fairhurst, leaders are change agents. Leaders define the values of the organization and shape communication by implementing unique organizational communication approaches. Within an organization, leaders and managers establish the framework for communication, which helps to manage meaning. "Leaders" provide information to followers, such as the organization's mission, vision and values, as well as its collective identity. In contrast to leaders, "managers" are responsible for day-to-day problem solving. Their core framing tasks are solving problems and stimulating others to find solutions. Individuals, regardless of positional authority, can manage meaning. Meaning management means communicating with a specific goal by controlling the context and the message. Individuals utilizing meaning management communicate and shape meaning by using the power of framing. === Culture === "Culture" is a unique set of behaviors, including language, beliefs and customs, learned from being raised in social groups or by joining a particular group over time. Culture defines context and is the social totality that defines behavior, knowledge, beliefs and social learning. It is a set of shared values characterizing a specific organization. Fairhurst identifies culture as defining events, people, objects, and concepts. Communication and culture are intertwined. The shared language of a group links individuals together and joins common cultures. Culture influences mental models.
"Mental models" are the images in your mind about other people, yourself, substance and events. Additionally, culture defines social interactions and how individuals and groups interpret and apply context. Organizations with good communication foundation are able to interpret and differentiate individuals’ cultural discourses, as well as creatively combine and constrain these discourses. It defines the ideological basis for people and lays the foundation for how they frame and can be observed and described, but not controlled. It is defined by the group or individual accepting the specific patterns of behavior, knowledge, or beliefs Individuals can shape culture and make changes over time, as long as they are clear about specific attitudes and behaviors that are desired As Weick and Sutcliffe (2007) discussed, culture can be changed through symbols, values, and content — organizations shape culture. An organizational culture emerges from a set of expectations that matter to people, from things like inclusion, exclusion, praise, positive feelings, social support, isolation, care, indifference, excitement and anger Individuals are shaped by an organization's culture. However, an organization has its own culture. According to Martin (1985), within that organizational culture, three forms of culture can result: integration, differentiation and fragmentation. "Integration" (bring people together) "Differentiation" (act or process by which people undergo change toward more specialized function) "Fragmentation" (process of state of breaking or being broken into smaller parts) With Integration, all organizational members consistently share values and assumptions about work. As a result, the members of the organization share uniquely organizational experiences and thus, a unique culture If differentiation occurs, cultures are not unitary. Sub-groups consistently share values and assumptions about work. Members tend to operate in different areas, different projects and at different levels of the hierarchy. Cultures are often ambiguous if fragmentation happens. Individuals are interconnected with some members and disconnected with others. This creates inconsistently shared values and assumptions about the organization As a result, friendship/romantic as well as enemy/competitor type relationships are cut across an organization's sub-groups. === Structure === Individuals who understand the structure and inner working of their organizations can leverage knowledge toward achieving communication goals. Likewise, organizations can also leverage their hierarchical structures to achieve targeted outcomes. Two types of structures exist within an organization. "Hierarchical" (formal hierarchical structure, typical flow/pyramid chart) "Network" (informal structure, based on relationships, go to people, subject-matter experts) Goldsmith and Katzenback (2008) explained organizations must understand the informal organization. For example, of being a part of an informal or formal structure, it is important for managers to learn to recognize signs of trouble in order to shape context as they attempt to coordinate meaning and solve day-to-day problems. 
Specific implications for organizational learning include enhanced performance, coordinated activity and structure, division of labor and collective goal setting. A formal organization is visually represented by a typical hierarchical structure, which shows how formal responsibilities are spread, as well as the division of jobs and the flow of information. In contrast, the informal organization embodies how people network to accomplish the job, via social relationships and connections or subject-matter experts that are not represented on the organizational chart. By leveraging this informal organization, people within the organization are able to use their social networks to access and shape decision-making processes more quickly, as well as to establish cross-structural collaboration amongst themselves. Additionally, by understanding and using both structures, leaders and managers are able to learn more about their people. Interpreting all forms of communication, verbal and visual, is invaluable, whether one is a supervisor or a subordinate. The hierarchical and network structures can allow an organization to recognize signs of trouble in people, accomplish core framing tasks, and communicate with mindfulness and meaning. By unlocking the value of an organization's structure, leaders and managers can use this knowledge to boost performance or achieve specific goals. Signs of trouble can be emotional, hidden, physical, or in plain sight. === Knowledge === Knowing individuals' personalities and conflict tendencies, as well as their unique circumstances, helps an organization to understand its mental models and cultural discourse. Additionally, by noticing abnormalities and not being blind to details, an organization should be able to recognize signs of trouble within day-to-day operations and management, whether it is fraud, lack of maintenance standards, sexual harassment, or even a poor framework for communication. Understanding and recognizing signs of trouble empowers managers to employ the rules of reality construction: control the context, define the situation, apply ethics, interpret uncertainty, and design the response, which leads to communicating through a structured way of thinking. Ultimately, by understanding how an organization works, communication is collectively enhanced. Additionally, by knowing how employees and relationships are shaped and the context that defines how each person interacts with others, one can shape contagious emotions. The basic building block of Taylor's theory is the relationship of text and conversation, and how that relationship requires a "two-step translational process": Translation One: from text to conversation. Translation Two: from conversation to text. Following this translational process, text and conversation are transferred to organizational communication. If context, or text, defines the organization, then ongoing introductions and meaning are crucial to defining what is meant by the term organization. To examine this further, Taylor defined "six degrees of separation" to understand organizational communication: First Degree of Separation: The intent of the speaker is translated into action and embedded in conversation. Second Degree of Separation: The events of the conversation are translated into a narrative representation, making it possible to understand the meaning of the exchange.
Third Degree of Separation: The text is transcribed (objectified) on some permanent or semi-permanent medium (e.g., the minutes of a meeting are taken down in writing). Fourth Degree of Separation: A specialized language is developed to encourage and channel subsequent texts and conversations (e.g., lawyers develop specific ways of talking in court, with each other, and in documents). Fifth Degree of Separation: The texts and conversations are transformed into material and physical frames (e.g., laboratories, conference rooms, organizational charts, procedural manuals). Sixth Degree of Separation: The standardized form is disseminated and diffused to a broader public (e.g., media reports and representations of organizational forms and practices). == Impact == This theory uses interactions of text and conversation to construct networks of relationships. By doing so, the theory enables a deep understanding of personal communication within an organization. Additionally, it explains how that communication ends up actually defining the organization, rather than the individuals within the organization. Taylor's theory places more importance on personal communication than on individuals. The practical implication, as a result, is that communication behaviors can constitute how and what we think of an organization. Additionally, by manipulating communication processes, not only could structure be altered, but the entire organization could be changed as well. Whether the change is beneficial or negative depends on the desired meaning, or the context and message, that people within the organization want to exchange and translate. Taylor stresses the importance and impact of dialogue, specifically relating to how people interact with one another and interpret context. Taylor explains in Heath et al. (2006) that virtuous reasoning embodies entire discussions. Additionally, he points out that dialogue should not prevent issues that arise from debate. Since 1993, Taylor's theory has been the focus of more than six organizational communication books. Additionally, Taylor's ideas are referred to as "The Montreal School" of organizational communication. Within the field of communication, TMS has been recognized for its contributions to organizational communication as well as related disciplines. Books focusing on text and conversation theory have sold internationally. One of the largest and simplest contributions this theory has provided to the academic field of communication is the ability to describe and characterize an organization. From this, people can better understand and fully construct an organization's identity. == Weakness == According to Nonaka and Takeuchi (1995), organizational learning is the study of how collectives adapt to, or fail to adapt to, their environments. It utilizes tacit knowledge and explicit knowledge. "Tacit Knowledge": personal, contextual, subjective, implicit, and unarticulated "Explicit Knowledge": codified, systematic, formal, explicit, and articulated Ultimately, organizational learning achieves enhanced performance, coordinated activity and structure, and the achievement of collective goals through externalization and internalization. "Externalization": getting key workers to make their tacit knowledge the organization's explicit knowledge that can be shared "Internalization": getting the organization's explicit knowledge to become workers' tacit knowledge Text and conversation theory places significant challenges and burdens on the organization to articulate knowledge.
Whether knowledge is passed directly by individuals, up and down or horizontally through the formal or informal organizational structure, there is no guarantee that text has the proper context to be effective as conversation. Additionally, conversation codes are influenced by how the organization ensures knowledge carriers pass information and communicate with purpose, message, and meaning. How information is passed can be unclear, and consistently has to adapt to new challenges. Some of these challenges, or factors, include how individuals and an organization adapt to meaning, culture, structure, and knowledge in order to communicate. Ultimately, within the organization itself, people are impacted by biases at the group and individual levels. "Problems with Group Learning" Responsibility bias: the belief of group members that someone else in the group will do the work Social desirability bias: group members are reluctant to provide critical assessments for fear of losing face or relational status Hierarchical mum effect: subordinates' reluctance to provide negative feedback for fear of harming the identities of superiors Groupthink: failure to consider decision alternatives Identification/ego defense: highly identified group members begin to associate their identity with their group membership and will in turn refuse to see the group as wrong, and themselves by extension "Problems with Individual Learning" Confirmation bias: individuals seek to confirm their own ideas, guesses and beliefs rather than seek disconfirming information Hindsight bias: individuals tend to forget when their predictions are wrong Fundamental attribution error: individuals tend to attribute others' shortcomings to their character, while attributing their own shortcomings to external forces == See also == Tacit knowledge Explicit knowledge Conversation theory Mental model Organizational structure Organizational culture Organizational communication Sensemaking Structuration theory == References == == Bibliography == Giddens, A. (1986). Constitution of society: Outline of the theory of structuration. University of California Press; Reprint edition (January 1, 1986). ISBN 0-520-05728-7 Hoffman, M. F., & Cowan, R. L. (2010). Be careful what you ask for: Structuration theory and work/life accommodation. Communication Studies, 61(2), 205–223. doi:10.1080/10510971003604026 Pask, G. (1975). Conversation, cognition and learning. New York: Elsevier. Pask, G. (1975). The cybernetics of human learning and performance. Hutchinson. Pask, G. (1976). Conversation theory: Applications in education and epistemology. Elsevier. Scott, B. (2001). Gordon Pask's conversation theory: A domain independent constructivist model of human knowing. Foundations of Science, 6(4), 343–360. Maturana, H., & Varela, F. J. (1980). Autopoiesis and cognition. Dordrecht, Holland: Reidel. Conversation theory – Gordon Pask overview from web.cortland.edu: http://web.cortland.edu/andersmd/learning/Pask.htm Fairhurst, G. T. (2011). The power of framing: Creating the language of leadership. San Francisco: Jossey-Bass. Fairhurst, G. T., Jordan, J., & Neuwirth, K. (1997). Why are we here? Managing the meaning of an organizational mission statement. Journal of Applied Communication Research, 25(4), 243–263. Weick, K. E., & Sutcliffe, K. M. (2007). Managing the unexpected: Resilient performance in an age of uncertainty (2nd ed.). San Francisco: Jossey-Bass. Martin, J., & Meyerson, D. (1985). Organizational cultures and the denial, masking and amplification of ambiguity.
Research Report No. 807, Graduate School of Business, Stanford University, Stanford. Goldsmith, M., & Katzenbach, J. (2007, February 14). Navigating the "informal" organization [Electronic version]. Business Week. Bryan, L. L., Matson, E., & Weiss, L. M. (2007). Harnessing the power of informal employee networks. McKinsey Quarterly, (4), 44–55. Miller, K. (2005). Communication theories: Perspectives, processes, and contexts (2nd ed.). Columbus, OH: McGraw Hill. Taylor, J. R., Cooren, F., Giroux, N., & Robichaud, D. (1996). The communicational basis of organization: Between the conversation and the text. Communication Theory, 6, 1–39. Heath, R. L., Pearce, W., Shotter, J., Taylor, J. R., Kersten, A., Zorn, T., ... Deetz, S. (2006). The processes of dialogue: Participation and legitimation. Management Communication Quarterly, 19(3), 341–375. doi:10.1177/0893318905282208 Welcome to Jim Taylor and Elizabeth Van Every's website: http://www.taylorvanevery.com/ Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company. New York: Oxford University Press. Giddens, A. (1991). Modernity and self-identity: Self and society in the late modern age. Stanford: Stanford University Press.
Wikipedia/Text_and_conversation_theory
A communications system is a collection of individual telecommunications networks, transmission systems, relay stations, tributary stations, and terminal equipment usually capable of interconnection and interoperation to form an integrated whole. Communication systems allow the transfer of information from one place to another or from one device to another through a specified channel or medium. The components of a communications system serve a common purpose, are technically compatible, use common procedures, respond to controls, and operate in unison. In the structure of a communication system, the transmitter first converts the data received from the source into a signal suited to the transmission medium and transmits it through that medium to the receiver. The receiver at the receiving end converts the signal back into digital data, following certain protocols (e.g. FTP or ISP-assigned protocols). Telecommunications is a method of communication (e.g., for sports broadcasting, mass media, journalism, etc.). Communication is the act of conveying intended meanings from one entity or group to another through the use of mutually understood signs and semiotic rules. == Types == === By media === An optical communication system is any form of communications system that uses light as the transmission medium. Equipment consists of a transmitter, which encodes a message into an optical signal, a communication channel, which carries the signal to its destination, and a receiver, which reproduces the message from the received optical signal. Fiber-optic communication systems transmit information from one place to another by sending light through an optical fiber. The light forms a carrier signal that is modulated to carry information. A radio communication system is composed of several communications subsystems that give exterior communications capabilities. A radio communication system comprises a transmitting conductor in which electrical oscillations or currents are produced and which is arranged to cause such currents or oscillations to be propagated through the free space medium from one point to another remote therefrom, and a receiving conductor at such distant point adapted to be excited by the oscillations or currents propagated from the transmitter. Power-line communication systems operate by impressing a modulated carrier signal on power wires. Different types of power-line communications use different frequency bands, depending on the signal transmission characteristics of the power wiring used. Since the power wiring system was originally intended for transmission of AC power, the power wire circuits have only a limited ability to carry higher frequencies. The propagation problem is a limiting factor for each type of power-line communications. === By technology === A duplex communication system is a system composed of two connected parties or devices which can communicate with one another in both directions. The term duplex is used when describing communication between two parties or devices. Duplex systems are employed in nearly all communications networks, either to allow for a communication "two-way street" between two connected parties or to provide a "reverse path" for the monitoring and remote adjustment of equipment in the field. An antenna is basically a length of conductor that is used to radiate or receive electromagnetic waves. It acts as a conversion device. At the transmitting end it converts high-frequency current into electromagnetic waves.
At the receiving end it transforms electromagnetic waves into electrical signals that are fed into the input of the receiver. Several types of antennas are used in communication. Examples of communications subsystems include the Defense Communications System (DCS). === Examples: by technology === Telephone Mobile phone Tablet computer Television Telegraph Edison Telegraph TV cable Computer === By application area === The term transmission system is used in the telecommunications industry to emphasize the intermediate media, protocols, and equipment in the circuit, rather than particular end-user applications. A tactical communications system is a communications system that (a) is used within, or in direct support of, tactical forces, (b) is designed to meet the requirements of changing tactical situations and varying environmental conditions, (c) provides securable communications, such as voice, data, and video, among mobile users to facilitate command and control within, and in support of, tactical forces, and (d) usually requires extremely short installation times, usually on the order of hours, in order to meet the requirements of frequent relocation. An emergency communication system is any system (typically computer based) that is organized for the primary purpose of supporting the two-way communication of emergency messages between both individuals and groups of individuals. These systems are commonly designed to integrate the cross-communication of messages across a variety of communication technologies. An automatic call distributor (ACD) is a communication system that automatically queues, assigns and connects callers to handlers. This is often used in customer service (such as for product or service complaints), ordering by telephone (such as in a ticket office), or coordination services (such as in air traffic control). A Voice Communication Control System (VCCS) is essentially an ACD with characteristics that make it better adapted to use in critical situations (no waiting for a dial tone or lengthy recorded announcements, radio and telephone lines connected with equal ease, individual lines immediately accessible, etc.). == Key components == === Sources === Sources can be classified as electric or non-electric; they are the origins of a message or input signal. Examples of sources include but are not limited to the following: Audio files (MP3, WAV, etc.) Graphic image files (GIFs) Email messages Human voice Television picture Electromagnetic radiation === Input transducers (sensors) === Sensors, like microphones and cameras, capture non-electric sources, like sound and light (respectively), and convert them into electrical signals. These types of sensors are called input transducers in modern analog and digital communication systems. Without input transducers there would not be an effective way to transport non-electric sources or signals over great distances; humans would have to rely solely on their eyes and ears to see and hear things over such distances. Other examples of input transducers include: Microphones Cameras Keyboards Mice Force sensors Accelerometers === Transmitter === Once the source signal has been converted into an electric signal, the transmitter will modify this signal for efficient transmission. In order to do this, the signal must pass through an electronic circuit containing the following components: Noise filter Analog-to-digital converter Encoder Modulator Signal amplifier After the signal has been amplified, it is ready for transmission.
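The transmitter chain just described, together with the channel and receiver covered below, can be illustrated with a toy end-to-end simulation in Python. The sketch is illustrative only: the function names, the 8-bit character encoding, and the simple threshold detector are assumptions made for this example, not part of any particular standard or of the components listed in this article.

import random

def encode(text):
    """Source encoding: represent each character as 8 bits (a toy scheme, ASCII only)."""
    bits = []
    for ch in text:
        bits.extend(int(b) for b in format(ord(ch), "08b"))
    return bits

def modulate(bits):
    """Transmitter: map bits to a baseband signal, 0 -> -1.0 and 1 -> +1.0."""
    return [1.0 if b else -1.0 for b in bits]

def channel(signal, noise_level=0.4):
    """Communication channel: add Gaussian noise to every sample."""
    return [s + random.gauss(0.0, noise_level) for s in signal]

def demodulate(samples):
    """Receiver front end: recover bits by threshold detection."""
    return [1 if s > 0.0 else 0 for s in samples]

def decode(bits):
    """Reverse the source encoding: every 8 bits become one character."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = "".join(str(b) for b in bits[i:i + 8])
        chars.append(chr(int(byte, 2)))
    return "".join(chars)

message = "HELLO"
received = decode(demodulate(channel(modulate(encode(message)))))
print(received)  # usually "HELLO"; raising noise_level corrupts bits

Raising noise_level in the sketch shows why practical systems add redundancy (error-detecting or error-correcting codes) on top of this bare chain.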
At the end of the circuit is an antenna, the point at which the signal is released as electromagnetic waves (or electromagnetic radiation). === Communication channel === A communication channel simply refers to the medium by which a signal travels. There are two types of media by which electrical signals travel, i.e. guided and unguided. Guided media refers to any medium in which the signal can be directed from transmitter to receiver by means of connecting cables. In optical fiber communication, the medium is an optical (glass-like) fiber. Other guided media include coaxial cables, telephone wire, twisted pairs, etc. The other type of media, unguided media, refers to any communication channel in which the signal propagates through open space between the transmitter and receiver, with no connecting cable. For radio or RF communication, the medium is air; air is the only thing between the transmitter and receiver for RF communication. In other cases, like sonar, the medium is usually water, because sound waves travel efficiently through certain liquid media. Both of these media are considered unguided because there are no connecting cables between the transmitter and receiver. Communication channels include almost everything from the vacuum of space to solid pieces of metal; however, some media are preferred over others, because different kinds of signals travel through different media with different efficiencies. === Receiver === Once the signal has passed through the communication channel, it must be effectively captured by a receiver. The goal of the receiver is to capture the signal and reconstruct the message as it was before it passed through the transmitter, i.e. to reverse the analog-to-digital conversion, encoding and modulation. This is done by passing the "received" signal through another circuit containing the following components: Noise filter Digital-to-analog converter Decoder Demodulator Signal amplifier Most likely the signal will have lost some of its energy after having passed through the communication channel or medium. The signal can be boosted by passing it through a signal amplifier. === Output transducer === The output transducer simply converts the electric signal (created by the input transducer) back into its original form. Examples of output transducers include but are not limited to the following: Speakers (audio) Monitors (see computer peripherals) Motors (movement) Lighting (visual) === Other === Some common pairs of input and output transducers include: Microphones and speakers (audio signals) Keyboards and computer monitors Cameras and liquid crystal displays (LCDs) Force sensors (buttons) and lights or motors Again, input transducers convert non-electric signals like voice into electric signals that can be transmitted over great distances very quickly. Output transducers convert the electric signal back into sound, pictures, etc. There are many different types of transducers and the combinations are limitless. == See also == Automatic call distributor == References == == Further reading ==
Wikipedia/Communication_systems
Models of communication simplify or represent the process of communication. Most communication models try to describe both verbal and non-verbal communication and often understand it as an exchange of messages. Their function is to give a compact overview of the complex process of communication. This helps researchers formulate hypotheses, apply communication-related concepts to real-world cases, and test predictions. Despite their usefulness, many models are criticized based on the claim that they are too simple because they leave out essential aspects. The components and their interactions are usually presented in the form of a diagram. Some basic components and interactions reappear in many of the models. They include the idea that a sender encodes information in the form of a message and sends it to a receiver through a channel. The receiver needs to decode the message to understand the initial idea and provides some form of feedback. In both cases, noise may interfere and distort the message. Models of communication are classified depending on their intended applications and on how they conceptualize the process. General models apply to all forms of communication while specialized models restrict themselves to specific forms, like mass communication. Linear transmission models understand communication as a one-way process in which a sender transmits an idea to a receiver. Interaction models include a feedback loop through which the receiver responds after getting the message. Transaction models see sending and responding as simultaneous activities. They hold that meaning is created in this process and does not exist prior to it. Constitutive and constructionist models stress that communication is a basic phenomenon responsible for how people understand and experience reality. Interpersonal models describe communicative exchanges with other people. They contrast with intrapersonal models, which discuss communication with oneself. Models of non-human communication describe communication among other species. Further types include encoding-decoding models, hypodermic models, and relational models. The problem of communication was already discussed in Ancient Greece but the field of communication studies only developed into a separate research discipline in the middle of the 20th century. All early models were linear transmission models, like Lasswell's model, the Shannon–Weaver model, Gerbner's model, and Berlo's model. For many purposes, they were later replaced by interaction models, like Schramm's model. Beginning in the 1970s, transactional models of communication, like Barnlund's model, were proposed to overcome the limitations of interaction models. They constitute the origin of further developments in the form of constitutive models. == Definition and function == Models of communication are representations of the process of communication. They try to provide a simple explanation of the process by highlighting its most basic characteristics and components. As simplified pictures, they only present the aspects that, according to the model's designer, are most central to communication. Communication can be defined as the transmission of ideas. General models of communication try to describe all of its forms, including verbal and non-verbal communication as well as visual, auditory, and olfactory forms. In the widest sense, communication is not restricted to humans but happens also among animals and between species. 
However, models of communication normally focus on human communication as the paradigmatic form. They usually involve some type of interaction between two or more parties in which messages are exchanged. The process as a whole is very complex, which is why models of communication only present the most salient features by showing how the main components operate and interact. They usually do so in the form of a simplified visualization and ignore some aspects for the sake of simplicity. Some theorists, like Paul Cobley and Peter J. Schulz, distinguish models of communication from theories of communication. This is based on the idea that theories of communication try to provide a more abstract conceptual framework that is strong enough to accurately represent the underlying reality despite its complexity. Communication theorist Robert Craig sees the difference in the fact that models primarily represent communication while theories additionally explain it. According to Frank Dance, there is no one fully comprehensive model of communication since each one highlights only certain aspects and distorts others. For this reason, he suggests that a family of different models should be adopted. Models of communication serve various functions. Their simplified presentation helps students and researchers identify the main steps of communication and apply communication-related concepts to real-world cases. The unified picture they provide makes it easier to describe and explain the observed phenomena. Models of communication can guide the formulation of hypotheses and predictions about how communicative processes will unfold and show how these processes can be measured. One of their goals is to show how to improve communication, for example, by avoiding distortions through noise or by discovering how societal and economic factors affect the quality of communication. == Basic concepts == Many basic concepts reappear in the different models, like "sender", "receiver", "message", "channel", "signal", "encoding", "decoding", "noise", "feedback", and "context". Their exact meanings vary slightly from model to model and sometimes different terms are used for the same ideas. Simple models only rely on a few of these concepts while more complex models include many of them. The sender is responsible for creating the message and sending it to the receiver. Some theorists use the terms source and destination instead. The message itself can be verbal or non-verbal and contains some form of information. The process of encoding translates the message into a signal that can be conveyed using a channel. The channel is the sensory route on which the signal travels. For example, expressing one's thoughts in a speech encodes them as sounds, which are transmitted using air as a channel. Decoding is the reverse process of encoding: it happens when the signal is translated back into a message. Noise is any influence that interferes with the message reaching its destination. Some theorists distinguish environmental noise and semantic noise: environmental noise distorts the signal on its way to the receiver, whereas semantic noise occurs during encoding or decoding, for example, when an ambiguous word in the message is not interpreted by the receiver as it was meant by the sender. Feedback means that the receiver responds to the message by conveying some information back to the original sender. Context consists in the circumstances of the communication. 
It is a very wide term that can apply to the physical environment and the mental state of the communicators as well as the general social situation. == Classifications == Models of communication are classified in many ways and the proposed classifications often overlap. Some models are general in the sense that they aim to describe all forms of communication. Others are specialized: they only apply to specific fields or areas. For example, models of mass communication are specialized models that do not aim to give a universal account of communication. Another contrast is between linear and non-linear models. Most early models of communication are linear models. They present communication as a unidirectional process in which messages flow from the communicator to the audience. Non-linear models, on the other hand, are multi-directional: messages are sent back and forth between participants. According to Uma Narula, linear models describe single acts of communication while non-linear models describe the whole process. === Linear transmission === Linear transmission models describe communication as a one-way process. In it, a sender intentionally conveys a message to a receiver. The reception of the message is the endpoint of this process. Since there is no feedback loop, the sender may not know whether the message reached its intended destination. Most early models were transmission models. Due to their linear nature, they are often too simple to capture the dynamic aspects of various forms of communication, such as regular face-to-face conversation. By focusing only on the sender, they leave out the audience's perspective. For example, listening usually does not just happen, but is an active process involving listening skills and interpretation. However, some forms of communication can be accurately described by them, such as many types of computer-mediated communication. This applies, for example, to text messaging, sending an email, posting a blog, or sharing something on social media. Some theorists, like Uma Narula, talk of "action models" instead of linear transmission models to stress how they only focus on the actions of the sender. Linear transmission models include Aristotle's, Lasswell's, Shannon-Weaver's and Berlo's model. === Interaction === For interaction models, the participants in communication alternate the positions of sender and receiver. So upon receiving a message, a new message is generated and returned to the original sender as a form of feedback. In this regard, communication is a two-way process. This adds more complexity to the model since the participants are both senders and receivers and they alternate between these two positions. For interaction models, these steps happen one after the other: first, one message is sent and received, later another message is returned as feedback, etc. Such feedback loops make it possible for the sender to assess whether their message was received and had the intended effect or whether it was distorted by noise. For example, interaction models can be used to describe a conversation through instant messaging: the sender sends a message and then has to wait for the receiver to react. Another example is a question/answer session where one person asks a question and then waits for another person to answer. Interaction models usually put more emphasis on the interactive process and less on the technical problem of how the message is conveyed at each step. 
For this reason, more prominence is given to the context that shapes the exchange of messages. This includes the physical context, like the distance between the speakers, and the psychological context, which includes mental and emotional factors like stress and anxiety. Schramm's model is one of the earliest interaction models. === Transaction === Transaction models depart from interaction models in two ways. On the one hand, they understand sending and responding as simultaneous processes. This can be used to describe how listeners use non-verbal communication, like body posture and facial expressions, to give some form of feedback. This way, they can signal whether they agree with the message while the speaker is talking. This feedback may in turn influence the speaker's message while it is being produced. On the other hand, transactional models stress that meaning is created in the process of communication and does not exist prior to it. This is often combined with the claim that communication creates social realities like relationships, personal identities, and communities. This also affects the communicators themselves on various levels, such as their thoughts and feelings as well as their social identities. Transaction models usually put more emphasis on contexts and how they shape the exchange of information. They are sometimes divided into social, relational, and cultural contexts. Social contexts include explicit and implicit rules about what form of message and feedback is acceptable. An example is that one should not interrupt people or that greetings should be returned. Relational contexts are more specific in that they concern the previous relationship and shared history of the communicators. This includes factors like whether the participants are friends, neighbors, co-workers, or rivals. The cultural context encompasses the social identities of the communicators, such as race, gender, nationality, sexual orientation, and social class. Barnlund's model is an influential early transaction model. === Constitutive and constructionist === Constitutive models hold that meaning is "reflexively constructed, maintained, or negotiated in the act of communicating". This means that communication is not just the exchange of pre-established bundles of information but a creative process, unlike the outlook found in many transmission models. According to Robert Craig, this implies that communication is a basic social phenomenon that cannot be explained through psychological, cultural, economic, or other factors. Instead, communication is to be seen as the cause of other social processes and not as their result. Constitutive models are closely related to constructionist models, which see communication as the basic process responsible for how people understand, represent, and experience reality. According to social constructionists, like George Herbert Mead, reality is not something wholly external but depends on how it is conceptualized, which happens through communication. === Interpersonal and intrapersonal === Interpersonal communication is communication between two distinct persons, like when greeting someone on the street or making a phone call. Intrapersonal communication, in contrast, is communication with oneself. An example is a person thinking to themself that they should bring in the laundry from outside because it is about to rain. Most models of communication focus on interpersonal communication by assuming that sender and receiver are distinct persons. 
They often explore how the sender encodes a message, how this message is transmitted and possibly distorted, and how the receiver decodes and interprets the message. However, some models are specifically formulated for intrapersonal communication. Many of them focus on the idea that intrapersonal communication starts with the perception of internal and external stimuli carrying information. These stimuli are processed and interpreted in various ways, for example, by classifying them and by ascribing symbolic meaning to them. Later steps include thinking about them, organizing information, and then encoding the ideas conceived this way in a behavioral response. This response can itself produce new stimuli and act as a form of feedback loop for continued intrapersonal communication. Some models of communication try to provide a perspective that includes both interpersonal and intrapersonal communication in order to show how these two phenomena influence each other. === Non-human === The discipline of communication studies and the models of communication proposed in it are not restricted to human communication. They include discussions of communication among other species, like non-human animals and plants. Models of non-human communication usually stress the practical aspects of communication, i.e., what effects it has on behavior. An example is that communication provides an evolutionary advantage to the communicators. Some models of animal communication are similar to models of human communication in that they understand the process as an exchange of information. This exchange helps the communicators to reduce uncertainty and to act in a way that is beneficial to them. A further approach is discussed in the manipulative model of animal communication. It argues that the central aspect of communication does not consist in the exchange of information but in causing changes to the behavior of other organisms. This influence provides primarily a benefit to the sender and does not need to involve the transmission of messages. In this way, the sender "exploits another animal's ... muscle power". A slightly different approach focuses more on the cooperative aspect of communication and holds that both sender and receiver benefit from the exchange. Models of plant communication usually understand communication in terms of biochemical changes and responses. According to Richard Karban, this process starts with a cue that is emitted by a sender and then perceived by a receiver. The receiver processes this information to translate it into some kind of response. === Others === Additional classifications of communication models have been suggested. The term encoding-decoding model is used for any model that includes the phases of encoding and decoding in its description of communication. Such models stress that to send information, a code is necessary. A code is a sign system used to express ideas and interpret messages. Encoding-decoding models are sometimes contrasted with inferential models. For the latter, the receiver is not only interested in the information sent but tries to infer the sender's intention behind formulating the message. Hypodermic models, also referred to as magic bullet theories, hold that communication can be reduced to the transfer of ideas, information, or feelings from a sender to a receiver. In them, the message is like a magic bullet that is shot by active senders at passive and defenseless receivers.
They are closely related to linear transmission models and contrast with reception models, which ascribe an active role to the receiver in the process of communication and meaning-making. Relational models stress the importance of the relationship between communicators. For example, Wilbur Schramm holds that this relationship informs the expectations the participants bring to the exchange and the roles they play in it. These roles influence how the communicators try to contribute to the communicative goal. In the context of instruction, for example, the teacher's role includes sharing and explaining information while the student's role involves learning and asking clarifying questions. Relational models also describe how communication affects the relationship between the communicators. For example, the communication between patient and hospital staff affects whether the patient feels cared for or dehumanized. Relational models are closely related to convergence models. For convergence models, the goal of communication is convergence: to reach a mutual understanding. Feedback plays a central role in this regard: effective feedback helps achieve this goal while ineffective feedback leads to divergence. Difference models emphasize the role of gender and racial differences in the process of communication. Some posit, for example, that men and women have different communication styles and aim to achieve different goals through communication. == History == Communication was studied as early as Ancient Greece and one of the first models of communication is due to Aristotle. However, the field of communication studies only developed in the 20th century into a separate research discipline. In its early stages, it often borrowed models and concepts from other disciplines, such as psychology, sociology, anthropology, and political science. But as it developed as a science, it started to rely more and more on its own models and concepts. Beginning in the 1940s and the following decades, many new models of communication were developed. Most of the early models were linear transmission models. For many purposes, they were replaced by non-linear models such as interaction, transaction, and convergence models. === Aristotle === One of the earliest models of communication was given by Aristotle. He speaks of communication in his treatise Rhetoric and characterizes it as a techne or an art. His model is primarily concerned with public speaking and is made up of five elements: the speaker, the message, the audience, the occasion, and the effect. According to Aristotle's communication model, the speaker wishes to have an effect on the audience, such as persuading them of an opinion or a course of action. The same message may have very different effects depending on the audience and the occasion. For this reason, the speaker should take these factors into account and compose their message accordingly. Many of the basic elements of the Aristotelian model of communication are still found in contemporary models. === Lasswell === Lasswell's model is an early and influential model of communication. It was proposed by Harold Lasswell in 1948 and uses five questions to identify and describe the main aspects of communication: "Who?", "Says What?", "In What Channel?", "To Whom?", and "With What Effect?". They correspond to five basic components involved in the communicative process: the sender, the message, the channel, the receiver, and the effect. 
For a newspaper headline, those five components are the reporter, the content of the headline, the newspaper itself, the reader, and the reader's response to the headline. Lasswell assigns a field of inquiry to each component, corresponding to control analysis, content analysis, media analysis, audience analysis, and effect analysis. The model is usually seen as a linear transmission model and was initially formulated specifically for mass communication, like radio, television, and newspapers. Nonetheless, it has been used in other fields, like new media. Many theorists treat it as a universal model applying to any form of communication. It is widely cited as a model of communication but some theorists, like Zachary S. Sapienza et al., have raised doubts about this characterization and see it instead as a questioning device, a formula, or a construct. Lasswell's model is often criticized due to its simplicity. An example is that it does not include an explicit discussion of vital factors such as noise and feedback loops. It also does not talk about the influence of physical, emotional, social, and cultural contexts. These shortcomings have prompted some theorists to expand Lasswell's model. For example, Richard Braddock published an extension in 1958 including two additional questions: "Under What Circumstances?" and "For What Purpose?". === Shannon and Weaver === The Shannon–Weaver model is another early and influential model of communication. It is a linear transmission model that was published in 1948 and describes communication as the interaction of five basic components: a source, a transmitter, a channel, a receiver, and a destination. The source is responsible for generating the message. This message is translated by the transmitter into a signal, which is then sent using a channel. The receiver has the opposite function of the transmitter: it translates the signal back into a message, which is made available to the destination. The Shannon–Weaver model was initially formulated in analogy to how telephone calls work but is intended as a general model of all forms of communication. In the case of a landline phone call, the person calling is the source and their telephone is the transmitter translating the message into an electric signal. The wire acts as the channel. The person taking the call is the destination, and their telephone is the receiver. Claude Shannon and Warren Weaver categorize and address problems relevant to models of communication at three basic levels: technical, semantic, and effectiveness problems. They correspond to the issues of how to transmit the symbols in the message to the receiver, how these symbols carry meaning, and how to ensure that the message has the intended effect on the receiver. Shannon and Weaver focus their attention on the technical level by discussing how noise can interfere with the signal. This makes it difficult for the receiver to reconstruct the source's intention found in the original message. They try to solve this problem by making the message redundant so that it is easier to detect distortions. The Shannon–Weaver model has been influential in the fields of communication theory and information theory. However, it has been criticized because it simplifies some parts of the communicative process. For example, it presents communication as a one-way process and not as a dynamic interaction of messages going back and forth between both participants. === Newcomb === Newcomb's model was first published by Theodore M. 
Newcomb in his 1953 paper "An approach to the study of communicative acts". It is called the ABX model of communication since it understands communication in terms of three components: two parties (A and B) interacting with each other about a topic or object (X). A and B can be persons or groups, such as trade unions or nations. X can be any part of their shared environment like a specific thing or another person. The ABX model differs from earlier models by focusing on the social relation between the communicators in the form of the orientations or attitudes they have toward each other and toward the topic. The orientations can be favorable or unfavorable and include beliefs. They have a big impact on how communication unfolds. It is relevant, for example, whether A and B like each other and whether they have the same attitude towards X. Newcomb understands communication as a "learned response to strain" caused by discrepancies between orientations. The social function of communication is to maintain equilibrium in the social system by keeping the different orientations in balance. In Newcomb's words, communication enables "two or more individuals to maintain simultaneous orientation to each other and towards objects of the external environment". The orientations of A and B are subject to change and influence each other. Significant discrepancies between them, such as divergent opinions on X, cause a strain in the relation. In such cases, communication aims to reduce the strain and restore balance through the exchange of information about the object. For example, if A and B are friends and X is someone both know, then equilibrium means that they have the same attitude towards X. However, there is a disequilibrium or strain if A likes X but B does not. This creates a tendency for A and B to exchange information about X until they arrive at a shared attitude. The more important X is to A and B, the more urgent this tendency is. An influential expansion of Newcomb's model is due to Westley and MacLean. They introduce the idea of asymmetry of information: the sender (A) is aware of several topics (X1 to X3) and has to compose the message (X') to communicate to the receiver (B). B's direct perception is limited to only a few of these topics (X1B). Another addition is the inclusion of feedback (fBA) from the receiver to the sender. Westley and MacLean also propose a further expansion to account for mass communication. For this purpose, they include an additional component, C, that has the role of a gatekeeper filtering the original message for the mass audience. === Schramm === Schramm's model of communication is one of the earliest interaction models of communication. It was published by Wilbur Schramm in 1954 as a response to and an improvement over linear transmission models of communication, such as Lasswell's model and the Shannon–Weaver model. The main difference in this regard is that Schramm does not see the audience as passive recipients. Instead, he understands them as active participants that respond by sending their own message as a form of feedback. Feedback forms part of many types of communication and makes it easier for the participants to identify and resolve possible misunderstandings. For Schramm, communication is based on the relation between a source and a destination and consists in sharing ideas or information. For this to happen, the source has to encode their idea in symbolic form as a message. 
This message is sent to the destination using a channel, such as sound waves or ink on paper. The destination has to decode and interpret the message in order to reconstruct the original idea. The processes of encoding and decoding correspond to the roles of transmitter and receiver in the Shannon–Weaver model. According to Schramm, these processes are influenced by the fields of experience of each participant. A field of experience includes past life experiences and affects what the participant understands and is familiar with. Communication fails if the message is outside the receiver's field of experience. In this case, the receiver is unable to decode it and connect it to the sender's idea. Other sources of error are external noise or mistakes in the phases of decoding and encoding. Schramm holds that successful communication is about realizing an intended effect. He discusses the conditions for this to be possible. They include making sure that one has the receiver's attention, that the message is understandable, and that the audience is able and motivated to react to the message in the intended way. In the 1970s, Schramm proposed modifications to his original model to take into account the discoveries made in communication studies in the preceding decades. His new approach gives special emphasis to the relation between the participants. The relation determines the goal of communication and the roles played by the participants. === Gerbner === George Gerbner first published his model in his 1956 paper Toward a General Model of Communication. It is a linear transmission model. It is based on the Shannon–Weaver model and Lasswell's model but expands them in various ways. It aims to provide a general account of all forms of communication. One of its innovations is that it starts not with a message or an idea but with an event. The communicating agent perceives it and composes a message about it. For Gerbner, messages are not packages that exist prior to communication. Instead, the message is created in the process of encoding and is affected by the code and the channel. Gerbner assumes that the goal of communication is to inform another person about something they are unaware of. He includes a total of ten essential components: (1) someone (2) perceives an event (3) and reacts (4) in a situation (5) through some means. This is done with the goal of (6) making available materials (7) in some form (8) and context (9) conveying content (10) of some consequence. Each of these components corresponds to a different area of study. For example, communicator and audience research studies the first component while perception research is concerned with the second component. In Gerbner's example, "a man notices a house burning across the street and shouts 'Fire!'". In this case, "someone" corresponds to the man and the perceived event is the burning house. Other components include his voice (means) and the fire (conveyed content). The relation between message and reality is of central importance to Gerbner. For this reason, his model includes two dimensions. The horizontal dimension corresponds to the relation between communicator and event. The vertical dimension corresponds to the relation between communicator and message. Communication starts in the horizontal dimension with an event perceived by the sender. The next step happens in the vertical dimension, where the percept is translated into a signal containing the message. The message has two key aspects: content and form. 
The content is the information about the event. The last step belongs again to the horizontal dimension: the audience perceives and interprets the message about the event. All these steps are creative processes that select some features to be included. For example, the event is never perceived in its entirety. Instead, the communicator has to select and interpret its most salient features. The same happens when encoding the message: the percept is usually too complex to be fully communicated and only its most significant aspects are expressed. Selection also concerns the choice of the code and channel to be used. The availability of a channel differs from person to person and from situation to situation. For example, many people do not have access to mass media, like television, to send their message to a wide audience. Gerbner's emphasis on the relation between message and reality has been influential for subsequent models of communication. However, Gerbner's model still suffers from many of the limitations of the earlier models it is based on. An example is the focus on the linear transmission of information without an in-depth discussion of the role of feedback loops. Another issue concerns the question of how meaning is created. === Berlo === Berlo's model is a linear transmission model of communication. It was published by David Berlo in 1960 and was influenced by earlier models, such as the Shannon–Weaver model and Schramm's model. It is usually referred to as the Source-Message-Channel-Receiver (SMCR) model because of its four main components (source, message, channel, and receiver). Each of these components is characterized by various aspects and the main focus of the model is a detailed discussion of each of them. For Berlo, all forms of communication are attempts to influence the behavior of the receiver. To do so, the source has to express their purpose by encoding it into a message. This message is sent through a channel to the receiver, who has to decode it in order to understand it and react to it. Communication is successful if the reaction of the receiver matches the purpose of the source. Berlo's main interest in discussing the components and their aspects is to analyze their impact on successful communication. Source and receiver are usually persons but can also be groups or institutions. On this level, Berlo identifies four features: communication skills, attitudes, knowledge, and social-cultural system. Communication skills are primarily the ability of the source to encode messages and the ability of the receiver to decode them. The attitude is the positive or negative stance that source and receiver have toward themselves, each other, and the discussed topic. Knowledge stands for the understanding of the topic and the social-cultural system includes background beliefs and social norms common in the culture and social context of the communicators. Generally speaking, the more source and receiver are alike in regard to these factors, the more likely successful communication is. Communication may fail, for example, if the receiver lacks the decoding skills necessary to understand the message or if the source has a demeaning attitude toward the receiver. For the message, the main factors are code, content, and treatment, each of which can be analyzed in terms of its structure and its elements. The code is the sign system used to express the message, like a language. The content is the idea or information expressed in the message. 
Choosing appropriate content and the right code to express it matters for successful communication. Berlo uses the term treatment to refer to this selection. It reflects the style of the source as a communicator. The channel is the medium and process of how the message is transmitted. Berlo analyzes it mainly based on the five senses used to decode messages: seeing, hearing, touching, smelling, and tasting. The SMCR model has inspired subsequent theorists. However, it is often criticized based on its simplicity because it does not discuss feedback loops and because it does not place enough emphasis on noise and other barriers to communication. === Dance === Frank Dance's helical model of communication was initially published in his 1967 book Human Communication Theory. It is intended as a response to and an improvement over linear and circular models by stressing the dynamic nature of communication and how it changes the participants. Dance sees the fault of linear models as their attempt to understand communication as a linear flow of messages from a sender to a receiver. According to him, this fault is avoided by circular models, which include a feedback loop through which messages are exchanged back and forth. Dance criticizes the circular approach by holding that it "suggests that communication comes back, full circle, to exactly the same point from which it started". Dance holds that a helix is a more adequate representation of the process of communication since it implies that there is always a forward movement. It shows how the content and structure of earlier communicative acts influence the content and structure of later communicative acts. In this regard, communication has a lasting effect on the communicators and evolves continuously as a process. The upward widening movement of the helix represents a form of optimism by seeing communication as a means of growth, learning, and improvement. The basic idea behind Dance's helical model of communication is also found in education theory in the spiral approach proposed by Jerome Bruner. Dance's model has been criticized based on the claim that it focuses only on some aspects of communication but does not provide a tool for detailed analysis. === Barnlund === Barnlund's model is an influential transactional model of communication first published in 1970. Its goal is to avoid the inaccuracies of earlier models and account for communication in all its complexity. This includes dismissing the idea that communication is defined as the transmission of ideas from a sender to a receiver. For Barnlund, communication "is the production of meaning, rather than the production of messages". He holds that the world and its objects lack meaning on their own. They are only meaningful to the extent that people interpret them and assign meaning to them by engaging in the processes of decoding and encoding. In doing so, people try to decrease uncertainty and arrive at a shared understanding. Barnlund's model rests on a set of basic assumptions. For Barnlund, any activity that creates meaning is a form of communication. He sees communication as dynamic because meaning is not fixed but depends on the human practice of interpretation, which is itself subject to change. Communication is continuous in the sense that it does not have a beginning or an end: people decode cues and encode responses all the time, even when no one else is present. 
For Barnlund, communication is also circular because there is no clear division between sender and receiver as found in linear transmission models. It is irreversible due to the diverse effects it has on the communicators that cannot be undone. It is also complex since many components are involved and many factors influence how it unfolds. Because of its complexity, communication is unrepeatable: it is not possible to control all these factors to exactly repeat a previous exchange. This is not even the case when the same communicators exchange the same messages. Barnlund's model is based on the idea that communication consists of decoding cues by ascribing meaning to them and encoding appropriate responses to them. Barnlund distinguishes between public, private, and behavioral cues. Public cues are accessible to anyone in the situation, such as a tree in a park or a table in a room. Private cues are only available to one person, like a coin in one's pocket or an itch on one's wrist. Behavioral cues are under the control of the communicators and constitute the main vehicles of communication. They include verbal behavior, like discussing a business proposal, and non-verbal behavior, like raising one's eyebrows or sitting down in a chair. Barnlund's model has been influential, both for its innovations and for its criticisms of earlier models. Some objections to it include that it is not equally useful for all forms of communication and that it does not explain how exactly meaning is produced. == References == === Citations === === Sources ===
Wikipedia/Models_of_communication
In the study of the biological sciences, biocommunication is any specific type of communication within (intraspecific) or between (interspecific) species of plants, animals, fungi, protozoa and microorganisms. Communication means sign-mediated interactions following three levels of rules (syntactic, pragmatic and semantic). Signs are in most cases chemical molecules (semiochemicals), but they may also be tactile or, as in animals, visual and auditory. Biocommunication of animals may include vocalizations (as between competing bird species), or pheromone production (as between various species of insects), chemical signals between plants and animals (as in tannin production used by vascular plants to warn away insects), and chemically mediated communication between plants and within plants. Biocommunication of fungi demonstrates that mycelia communication integrates interspecific sign-mediated interactions between fungal organisms, soil bacteria and plant root cells without which plant nutrition could not be organized. Biocommunication of Ciliates identifies the various levels and motifs of communication in these unicellular eukaryotes. Biocommunication of Archaea represents key levels of sign-mediated interactions in the evolutionarily oldest akaryotes. Biocommunication of phages demonstrates that the most abundant living agents on this planet coordinate and organize by sign-mediated interactions. Biocommunication is also the essential tool for coordinating the behavior of the various cell types of immune systems. == Biocommunication, biosemiotics and linguistics == Biocommunication theory may be considered to be a branch of biosemiotics. Whereas biosemiotics studies the production and interpretation of signs and codes, biocommunication theory investigates concrete interactions in and between cells, tissues, organs and organisms mediated by signs. Accordingly, syntactic, semantic, and pragmatic aspects of biocommunication processes are distinguished. Biocommunication specific to animals (animal communication) is considered a branch of zoosemiotics. The semiotic study of molecular genetics can be considered a study of biocommunication at its most basic level. == Interpretation of abiotic indices == Interpreting stimuli from the environment is an essential part of life for any organism. Abiotic things that an organism must interpret include climate (weather, temperature, rainfall), geology (rocks, soil type), and geography (location of vegetation communities, exposure to elements, location of food and water sources relative to shelter sites). Birds, for example, migrate using cues such as approaching weather or seasonal day length. Birds also migrate from areas of low or decreasing resources to areas of high or increasing resources, most commonly food or nesting locations. Birds that nest in the Northern Hemisphere tend to migrate north in the spring due to the increase in insect population, budding plants and the abundance of nesting locations. During the winter, birds migrate south not only to escape the cold but also to find a sustainable food source. Some plants will bloom and attempt to reproduce when they sense days getting shorter. If they cannot be fertilized before the seasons change and they die, they do not pass on their genes. Their ability to recognize a change in abiotic factors allows them to ensure reproduction. == Trans-organismic communication == Trans-organismic communication is communication between organisms of different species. 
In biology, the relationships formed between different species are known as symbiosis. These relationships come in two main forms: mutualistic and parasitic. Mutualistic relationships are those in which both species benefit from their interactions. For example, pilot fish gather around sharks, rays, and sea turtles to eat various parasites from the surface of the larger organism. The fish obtain food from following the sharks, and the sharks receive a cleaning in return. Parasitic relationships are those in which one organism benefits at the expense of the other. For example, in order for mistletoe to grow it must draw water and nutrients from a tree or shrub. Communication between species is not limited to securing sustenance. Many flowers rely on bees to spread their pollen and facilitate floral reproduction. To allow this, many flowers evolved bright, attractive petals and sweet nectar to attract bees. In a 2010 study, researchers at the University of Buenos Aires examined a possible relationship between fluorescence and attraction. The study concluded that reflected light was much more important in pollinator attraction than fluorescence. Communicating with other species allows organisms to form relationships that are advantageous for survival, and all of these relationships are based on some form of trans-organismic communication. == Inter-organismic communication == Inter-organismic communication is communication between organisms of the same species (conspecifics). Inter-organismic communication includes human speech, which is key to maintaining social structures. Dolphins communicate with one another in a number of ways: by creating sounds, by making physical contact with one another, and through the use of body language. Dolphins communicate vocally through clicking sounds and pitches of whistling specific to only one individual. The whistling helps communicate the individual's location to other dolphins. For example, if a mother loses sight of her offspring or two familiar individuals cannot find each other, their individual pitches help them navigate back to the group. Body language can be used to indicate numerous things, such as the presence of a nearby predator, that food has been found, or the animal's attractiveness to a potential mating partner. However, mammals such as dolphins and humans are not alone in communicating within their own species. Peacocks can fan their feathers in order to communicate a territorial warning. Bees can tell other bees when they have found nectar by performing a dance when they return to the hive. Deer may flick their tails to warn others on their trail that danger is approaching. == Sexual communication == Sexual communication is the use of biocommunication signals to facilitate sexual interaction. Sexual communication appears to have three different aspects. First, signals are employed to facilitate sexual interaction between individuals. Second, signals are used to facilitate outbreeding and reduce inbreeding. Third, signals are used to facilitate sexual selection among potential mates. It was proposed that these three aspects of sexual communication respectively promote the repair of DNA damage in the genomes passed on to progeny, the masking of mutations in the genomes of progeny, and selection for genetic fitness in a mating partner. Examples of sexual communication have been described in bacteria, fungi, protozoa, insects, plants and vertebrates. 
== Intra-organismic communication == Intra-organismic communication is not solely the passage of information within an organism, but also concrete interaction between and within cells of an organism, mediated by signs. This can occur at the cellular and molecular level. An organism's ability to interpret its own biotic information is extremely important. If the organism is injured, falls ill, or must respond to danger, it needs to be able to process that physiological information and adjust its behavior. For example, when the human body starts to overheat, specialized glands release sweat, which absorbs heat as it evaporates. This communication is imperative to survival in many species, including plant life. Plants lack a central nervous system, so they rely on a decentralized system of chemical messengers. This allows them to grow in response to factors such as wind, light and plant architecture. Using these chemical messengers, they can react to the environment and assess the best growth pattern. Essentially, plants grow to optimize their metabolic efficiency. Humans also rely on chemical messengers for survival. Epinephrine, also known as adrenaline, is a hormone that is secreted during times of great stress. It binds to receptors on the surface of cells and activates a pathway that releases glucose from the body's glycogen stores. This causes a rapid increase in blood sugar. Adrenaline also activates the central nervous system, increasing heart rate and breathing rate. This prepares the muscles for the body's natural fight-or-flight response. Organisms rely on many different means of intra-organismic communication. Whether it is through neural connections or chemical messengers (including hormones), intra-organismic biocommunication evolved to respond to threats, maintain homeostasis and ensure self-preservation. == Language hierarchy == Subhash Kak's hierarchy of language as biocommunication positions communication on a gradient of three levels of complexity: associative, re-organizational, and quantum. The most primitive level is associative language: simple response-signal communication, such as insect pheromone trails or bird alarm calls, which requires no cognitive flexibility. Re-organizational language is a more advanced development that allows the communication of situation-dependent information, such as the honeybee dance that conveys food locations or primate calls that vary with circumstances, demonstrating higher adaptability and the potential for syntactic structure. Quantum language is the most advanced and most speculative level and is associated with abstract, potentially quantum-based communication; human language, with its capacity to communicate abstract concepts and emotions, is the best example, but it is not described how this level would apply to animals other than humans. In contrast to biocommunication theory, quantum language concepts are not applied to sign-mediated interactions in plants, fungi, protozoa or bacteria. The hierarchy suggests that the complexity of communication is evolving, although its quantum features and its relationship to formal theories of language, such as the Chomsky hierarchy, are controversial among scientists. == See also == == Notes ==
Wikipedia/Biocommunication_(science)
In fluid dynamics, the Boussinesq approximation (pronounced [businɛsk], named for Joseph Valentin Boussinesq) is used in the field of buoyancy-driven flow (also known as natural convection). It ignores density differences except where they appear in terms multiplied by g, the acceleration due to gravity. The essence of the Boussinesq approximation is that the difference in inertia is negligible but gravity is sufficiently strong to make the specific weight appreciably different between the two fluids. The existence of sound waves in a Boussinesq fluid is not possible as sound is the result of density fluctuations within a fluid. Boussinesq flows are common in nature (such as atmospheric fronts, oceanic circulation, katabatic winds), industry (dense gas dispersion, fume cupboard ventilation), and the built environment (natural ventilation, central heating). The approximation can be used to simplify the equations describing such flows, whilst still describing the flow behaviour to a high degree of accuracy. == Formulation == The Boussinesq approximation is applied to problems where the fluid varies in temperature (or composition) from one place to another, driving a flow of fluid and heat transfer (or mass transfer). The fluid satisfies conservation of mass, conservation of momentum and conservation of energy. In the Boussinesq approximation, variations in fluid properties other than density ρ are ignored, and density only appears when it is multiplied by g, the gravitational acceleration.: 127–128  If u is the local velocity of a parcel of fluid, the continuity equation for conservation of mass is: 52  ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0. {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \left(\rho \mathbf {u} \right)=0.} If density variations are ignored, this reduces to: 128  ∇ ⋅ u = 0 {\displaystyle \nabla \cdot \mathbf {u} =0} (1) The general expression for conservation of momentum of an incompressible, Newtonian fluid (the Navier–Stokes equations) is ∂ u ∂ t + ( u ⋅ ∇ ) u = − 1 ρ ∇ p + ν ∇ 2 u + 1 ρ F , {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} =-{\frac {1}{\rho }}\nabla p+\nu \nabla ^{2}\mathbf {u} +{\frac {1}{\rho }}\mathbf {F} ,} where ν (nu) is the kinematic viscosity and F is the sum of any body forces such as gravity.: 59  In this equation, density variations are assumed to have a fixed part and another part that has a linear dependence on temperature: ρ = ρ 0 − α ρ 0 ( T − T 0 ) , {\displaystyle \rho =\rho _{0}-\alpha \rho _{0}(T-T_{0}),} where α is the coefficient of thermal expansion.: 128–129  The Boussinesq approximation states that the density variation is only important in the buoyancy term. If F = ρ g {\displaystyle F=\rho \mathbf {g} } is the gravitational body force, the resulting conservation equation is: 129  ∂ u ∂ t + ( u ⋅ ∇ ) u = − 1 ρ 0 ∇ p + ν ∇ 2 u + g [ 1 − α ( T − T 0 ) ] {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} =-{\frac {1}{\rho _{0}}}\nabla p+\nu \nabla ^{2}\mathbf {u} +\mathbf {g} \left[1-\alpha \left(T-T_{0}\right)\right]} (2) In the equation for heat flow in a temperature gradient, the heat capacity per unit volume, ρ C p {\displaystyle \rho C_{p}} , is assumed constant and the dissipation term is ignored. The resulting equation is ρ C p ( ∂ T ∂ t + u ⋅ ∇ T ) = ∇ ⋅ ( k ∇ T ) + J {\displaystyle \rho C_{p}\left({\frac {\partial T}{\partial t}}+\mathbf {u} \cdot \nabla T\right)=\nabla \cdot \left(k\nabla T\right)+J} (3) where J is the rate per unit volume of internal heat production and k {\displaystyle k} is the thermal conductivity.: 129  The three numbered equations are the basic convection equations in the Boussinesq approximation. == Advantages == The advantage of the approximation arises because when considering a flow of, say, warm and cold water of densities ρ1 and ρ2, one needs only to consider a single density ρ: the difference Δρ = ρ1 − ρ2 is negligible. 
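A minimal numerical sketch can make the size of the neglected terms concrete. The following Python fragment is illustrative only: the reference density, thermal expansion coefficient and temperature contrast are assumed round numbers for liquid water, not values taken from the sources cited above. It evaluates the linearized equation of state ρ = ρ0 − αρ0(T − T0), the relative density difference Δρ/ρ0, and the buoyancy acceleration gΔρ/ρ0 that the approximation retains (the same quantity reappears below as the reduced gravity g′).

    # Illustrative check of the Boussinesq assumptions for warm and cold water.
    # All numerical values are assumed, order-of-magnitude figures, not data from the article.
    g = 9.81          # gravitational acceleration, m/s^2
    rho_0 = 998.0     # reference density of water at T_0, kg/m^3 (assumed)
    alpha = 2.1e-4    # thermal expansion coefficient of water, 1/K (assumed)
    T_0 = 20.0        # reference temperature, deg C
    T_warm = 25.0     # temperature of the warmer parcel, deg C (assumed 5 K contrast)

    # Linearized equation of state used in the approximation
    rho_warm = rho_0 - alpha * rho_0 * (T_warm - T_0)

    delta_rho = rho_0 - rho_warm             # density difference between cold and warm water
    relative_difference = delta_rho / rho_0  # must be much smaller than 1 for the approximation to hold
    buoyancy_acceleration = g * relative_difference  # the only place g and delta_rho enter the dynamics

    print(f"rho_warm              = {rho_warm:.3f} kg/m^3")
    print(f"delta_rho / rho_0     = {relative_difference:.2e}")
    print(f"buoyancy acceleration = {buoyancy_acceleration:.4f} m/s^2")

For this assumed 5 K contrast the relative density difference is about one part in a thousand, so treating the density as a single constant everywhere except in the buoyancy term is well justified; the same check fails for air bubbles rising in water, the non-Boussinesq example discussed below, where Δρ/ρ is close to 1.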
Dimensional analysis shows that, under these circumstances, the only sensible way that acceleration due to gravity g should enter into the equations of motion is in the reduced gravity g′ where g ′ = g ρ 1 − ρ 2 ρ . {\displaystyle g'=g{\frac {\rho _{1}-\rho _{2}}{\rho }}.} (Note that the denominator may be either density without affecting the result because the change would be of order g ( Δ ρ ρ ) 2 {\displaystyle g\left({\tfrac {\Delta \rho }{\rho }}\right)^{2}} .) The most commonly used dimensionless numbers in this context are the Richardson number and the Rayleigh number. The mathematics of the flow is therefore simpler because the density ratio ρ1/ρ2, a dimensionless number, does not affect the flow; the Boussinesq approximation states that it may be assumed to be exactly one. == Inversions == One feature of Boussinesq flows is that they look the same when viewed upside-down, provided that the identities of the fluids are reversed. The Boussinesq approximation is inaccurate when the dimensionless density difference Δρ/ρ is approximately 1, i.e. Δρ ≈ ρ. For example, consider an open window in a warm room. The warm air inside is less dense than the cold air outside, which flows into the room and down towards the floor. Now imagine the opposite: a cold room exposed to warm outside air. Here the air flowing in moves up toward the ceiling. If the flow is Boussinesq (and the room is otherwise symmetrical), then viewing the cold room upside down is exactly the same as viewing the warm room right-way-round. This is because the only way density enters the problem is via the reduced gravity g′ which undergoes only a sign change when changing from the warm room flow to the cold room flow. An example of a non-Boussinesq flow is bubbles rising in water. The behaviour of air bubbles rising in water is very different from the behaviour of water falling in air: in the former case rising bubbles tend to form hemispherical shells, while water falling in air splits into raindrops (at small length scales surface tension enters the problem and confuses the issue). == References == == Further reading == Boussinesq, Joseph (1897). Théorie de l'écoulement tourbillonnant et tumultueux des liquides dans les lits rectilignes a grande section. Vol. 1. Gauthier-Villars. Retrieved 10 October 2015. Kleinstreuer, Clement (1997). Engineering Fluid Dynamics: An Interdisciplinary Systems Approach. Cambridge University Press. ISBN 978-0-52-101917-0. Tritton, D.J. (1988). Physical Fluid Dynamics (Second ed.). Oxford University Press. ISBN 978-0-19-854493-7.
Wikipedia/Boussinesq_approximation_(buoyancy)
In physics, the first law of thermodynamics is an expression of the conservation of total energy of a system. The increase of the energy of a system is equal to the sum of work done on the system and the heat added to that system: d E t = d Q + d W {\displaystyle dE_{t}=dQ+dW} where E t {\displaystyle E_{t}} is the total energy of a system. W {\displaystyle W} is the work done on it. Q {\displaystyle Q} is the heat added to that system. In fluid mechanics, the first law of thermodynamics takes the following form: D E t D t = D W D t + D Q D t → D E t D t = ∇ ⋅ ( σ ⋅ v ) − ∇ ⋅ q {\displaystyle {\frac {DE_{t}}{Dt}}={\frac {DW}{Dt}}+{\frac {DQ}{Dt}}\to {\frac {DE_{t}}{Dt}}=\nabla \cdot ({\mathbf {\sigma } \cdot v})-\nabla \cdot {\mathbf {q} }} where σ {\displaystyle \mathbf {\sigma } } is the Cauchy stress tensor. v {\displaystyle \mathbf {v} } is the flow velocity. and q {\displaystyle \mathbf {q} } is the heat flux vector. Because it expresses conservation of total energy, this is sometimes referred to as the energy balance equation of continuous media. The first law is used to derive the non-conservation form of the Navier–Stokes equations. == Note == σ = − p I + T {\displaystyle {\mathbf {\sigma } }=-p{\mathbf {I} }+{\mathbf {T} }} Where p {\displaystyle p} is the pressure I {\displaystyle \mathbf {I} } is the identity matrix T {\displaystyle \mathbf {T} } is the deviatoric stress tensor That is, pulling is positive stress and pushing is negative stress. == Compressible fluid == For a compressible fluid the left hand side of equation becomes: D E t D t = ∂ E ∂ t + ∇ ⋅ ( E v ) {\displaystyle {\frac {DE_{t}}{Dt}}={\frac {\partial E}{\partial t}}+\nabla \cdot (E\mathbf {v} )} because in general ∇ ⋅ v ≠ 0. {\displaystyle \nabla \cdot \mathbf {v} \neq 0.} == Integral form == ∫ V ∂ E ∂ t d V = − ∮ ∂ V E v ⋅ d A + ∮ ∂ V ( σ ⋅ v ) ⋅ d A − ∮ ∂ V q ⋅ d A {\displaystyle \int _{V}{\frac {\partial E}{\partial t}}\,dV=-\oint _{\partial V}E{\mathbf {v} }\cdot d{\mathbf {A} }+\oint _{\partial V}({\mathbf {\sigma } \cdot v})\cdot d{\mathbf {A} }-\oint _{\partial V}{\mathbf {q} }\cdot d{\mathbf {A} }} That is, the change in the internal energy of the substance within a volume is the negative of the amount carried out of the volume by the flow of material across the boundary plus the work done compressing the material on the boundary minus the flow of heat out through the boundary. More generally, it is possible to incorporate source terms. == Alternative representation == ρ D h D t = D p D t + ∇ ⋅ ( k ∇ T ) + Φ {\displaystyle \rho {\frac {Dh}{Dt}}={\frac {Dp}{Dt}}+\nabla \cdot (k\,\nabla T)+\Phi } where h {\displaystyle h} is specific enthalpy, Φ = τ : ∇ v {\displaystyle \Phi ={\mathbf {\tau } }:\nabla {\mathbf {v} }} is dissipation function and T {\displaystyle T} is temperature. And where E t = ρ ( e + 1 2 v 2 − g ⋅ r ) {\displaystyle E_{t}=\rho (e+{\frac {1}{2}}v^{2}-\mathbf {g\cdot r} )} i.e. internal energy per unit volume equals mass density times the sum of: proper energy per unit mass, kinetic energy per unit mass, and gravitational potential energy per unit mass. − ∇ ⋅ q = + ∇ ⋅ ( k ∇ T ) {\displaystyle -\nabla \cdot {\mathbf {q} }=+\nabla \cdot (k\,\nabla T)} i.e. change in heat per unit volume (negative divergence of heat flow) equals the divergence of heat conductivity times the gradient of the temperature. ∇ ⋅ ( σ ⋅ v ) = v ⋅ ∇ ⋅ σ + σ : ∇ v {\displaystyle \nabla \cdot ({\mathbf {\sigma } \cdot v})={\mathbf {v} \cdot \nabla \cdot \sigma }+\sigma :\nabla \mathbf {v} } i.e. 
the divergence of the work done against stress equals the material flow dotted with the divergence of the stress plus the stress contracted with the gradient of the material flow. σ : ∇ v = Φ − p ∇ ⋅ v {\displaystyle \sigma :\nabla {\mathbf {v} }=\Phi -p\,\nabla \cdot {\mathbf {v} }} i.e. the stress contracted with the gradient of the material flow equals the dissipation function minus the pressure times the divergence of the material flow. h = e + p ρ {\displaystyle h=e+{\frac {p}{\rho }}} i.e. enthalpy per unit mass equals proper energy per unit mass plus pressure times volume per unit mass (reciprocal of mass density). ∇ ⋅ σ = D J D t − f {\displaystyle \nabla \cdot \sigma ={\frac {DJ}{Dt}}-\mathbf {f} } − p ∇ ⋅ v = D p D t − ρ D D t ( p ρ ) {\displaystyle -p\,\nabla \cdot {\mathbf {v} }={\frac {Dp}{Dt}}-\rho {\frac {D}{Dt}}\left({\frac {p}{\rho }}\right)} == Alternative form data == ∇ ⋅ σ = D J D t − f {\displaystyle \nabla \cdot {\mathbf {\sigma } }={\frac {D{\mathbf {J} }}{Dt}}-\mathbf {f} } the left-hand side of the Navier–Stokes equations minus the body force (per unit volume) acting on the fluid. − p ∇ ⋅ v = D p D t − ρ D D t ( p ρ ) {\displaystyle -p\nabla \cdot {\mathbf {v} }={\frac {Dp}{Dt}}-\rho {\frac {D}{Dt}}\left({\frac {p}{\rho }}\right)} This relation is derived using the relationship ρ ∇ ⋅ v = − D ρ D t {\displaystyle \rho \nabla \cdot {\mathbf {v} }=-{\frac {D\rho }{Dt}}} which is an alternative form of the continuity equation ∂ ρ ∂ t + ∇ ⋅ J = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot {\mathbf {J} }=0} == See also == Clausius–Duhem inequality Continuum mechanics First law of thermodynamics Material derivative Incompressible flow == References ==
Wikipedia/First_law_of_thermodynamics_(fluid_mechanics)
The Reynolds-averaged Navier–Stokes equations (RANS equations) are time-averaged equations of motion for fluid flow. The idea behind the equations is Reynolds decomposition, whereby an instantaneous quantity is decomposed into its time-averaged and fluctuating quantities, an idea first proposed by Osborne Reynolds. The RANS equations are primarily used to describe turbulent flows. These equations can be used with approximations based on knowledge of the properties of flow turbulence to give approximate time-averaged solutions to the Navier–Stokes equations. For a stationary flow of an incompressible Newtonian fluid, these equations can be written in Einstein notation in Cartesian coordinates as: ρ u ¯ j ∂ u ¯ i ∂ x j = ρ f ¯ i + ∂ ∂ x j [ − p ¯ δ i j + μ ( ∂ u ¯ i ∂ x j + ∂ u ¯ j ∂ x i ) − ρ u i ′ u j ′ ¯ ] . {\displaystyle \rho {\bar {u}}_{j}{\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}=\rho {\bar {f}}_{i}+{\frac {\partial }{\partial x_{j}}}\left[-{\bar {p}}\delta _{ij}+\mu \left({\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}+{\frac {\partial {\bar {u}}_{j}}{\partial x_{i}}}\right)-\rho {\overline {u_{i}^{\prime }u_{j}^{\prime }}}\right].} The left hand side of this equation represents the change in mean momentum of a fluid element owing to the unsteadiness in the mean flow and the convection by the mean flow. This change is balanced by the mean body force, the isotropic stress owing to the mean pressure field, the viscous stresses, and apparent stress ( − ρ u i ′ u j ′ ¯ ) {\displaystyle \left(-\rho {\overline {u_{i}^{\prime }u_{j}^{\prime }}}\right)} owing to the fluctuating velocity field, generally referred to as the Reynolds stress. This nonlinear Reynolds stress term requires additional modeling to close the RANS equation for solving, and has led to the creation of many different turbulence models. The time-average operator . ¯ {\displaystyle {\overline {.}}} is a Reynolds operator. == Derivation of RANS equations == The basic tool required for the derivation of the RANS equations from the instantaneous Navier–Stokes equations is the Reynolds decomposition. Reynolds decomposition refers to separation of the flow variable (like velocity u {\displaystyle u} ) into the mean (time-averaged) component ( u ¯ {\displaystyle {\overline {u}}} ) and the fluctuating component ( u ′ {\displaystyle u^{\prime }} ). Because the mean operator is a Reynolds operator, it has a set of properties. One of these properties is that the mean of the fluctuating quantity is equal to zero ( u ′ ¯ ) {\displaystyle ({\bar {u'}})} Some authors prefer using U {\displaystyle U} instead of u ¯ {\displaystyle {\bar {u}}} for the mean term (since an overbar is sometimes used to represent a vector). In this case, the fluctuating term u ′ {\displaystyle u^{\prime }} is represented instead by u {\displaystyle u} . This is possible because the two terms do not appear simultaneously in the same equation. To avoid confusion, the notation u {\displaystyle u} , u ¯ {\displaystyle {\bar {u}}} , and u ′ {\displaystyle u'} will be used to represent the instantaneous, mean, and fluctuating terms, respectively. The properties of Reynolds operators are useful in the derivation of the RANS equations. 
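The decomposition can be illustrated with a short numerical sketch. The fragment below is a toy example, not part of the original derivation: it generates a synthetic velocity record with assumed statistics, splits it into a time-averaged part and a fluctuating part, checks that the mean of the fluctuation vanishes, and estimates the Reynolds stress component −ρ u′v′¯ from the samples.

    import numpy as np

    # Toy Reynolds decomposition; the signal statistics and the density are assumed for illustration.
    rng = np.random.default_rng(0)
    n = 100_000
    rho = 1.2                                        # fluid density, kg/m^3 (roughly air, assumed)

    u = 10.0 + rng.normal(0.0, 1.0, n)               # instantaneous streamwise velocity samples, m/s
    v = 0.5 * (u - 10.0) + rng.normal(0.0, 0.5, n)   # correlated transverse component, m/s

    # Reynolds decomposition: u = u_mean + u_prime, with the overbar taken as a time average
    u_mean, v_mean = u.mean(), v.mean()
    u_prime, v_prime = u - u_mean, v - v_mean

    print(f"mean of u'                 = {u_prime.mean():.2e}  (exactly zero for a Reynolds operator)")
    print(f"Reynolds stress -rho<u'v'> = {-rho * np.mean(u_prime * v_prime):.3f} Pa")

In an actual RANS computation this apparent stress cannot be evaluated from samples in this way; it must be supplied by a turbulence model, which is the closure problem mentioned above.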
Using these properties, the Navier–Stokes equations of motion, expressed in tensor notation, are (for an incompressible Newtonian fluid): ∂ u i ∂ x i = 0 {\displaystyle {\frac {\partial u_{i}}{\partial x_{i}}}=0} ∂ u i ∂ t + u j ∂ u i ∂ x j = f i − 1 ρ ∂ p ∂ x i + ν ∂ 2 u i ∂ x j ∂ x j {\displaystyle {\frac {\partial u_{i}}{\partial t}}+u_{j}{\frac {\partial u_{i}}{\partial x_{j}}}=f_{i}-{\frac {1}{\rho }}{\frac {\partial p}{\partial x_{i}}}+\nu {\frac {\partial ^{2}u_{i}}{\partial x_{j}\partial x_{j}}}} where f i {\displaystyle f_{i}} is a vector representing external forces. Next, each instantaneous quantity can be split into time-averaged and fluctuating components, and the resulting equation time-averaged, to yield: ∂ u ¯ i ∂ x i = 0 {\displaystyle {\frac {\partial {\bar {u}}_{i}}{\partial x_{i}}}=0} ∂ u ¯ i ∂ t + u ¯ j ∂ u ¯ i ∂ x j + u j ′ ∂ u i ′ ∂ x j ¯ = f ¯ i − 1 ρ ∂ p ¯ ∂ x i + ν ∂ 2 u ¯ i ∂ x j ∂ x j . {\displaystyle {\frac {\partial {\bar {u}}_{i}}{\partial t}}+{\bar {u}}_{j}{\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}+{\overline {u_{j}^{\prime }{\frac {\partial u_{i}^{\prime }}{\partial x_{j}}}}}={\bar {f}}_{i}-{\frac {1}{\rho }}{\frac {\partial {\bar {p}}}{\partial x_{i}}}+\nu {\frac {\partial ^{2}{\bar {u}}_{i}}{\partial x_{j}\partial x_{j}}}.} The momentum equation can also be written as, ∂ u ¯ i ∂ t + u ¯ j ∂ u ¯ i ∂ x j = f ¯ i − 1 ρ ∂ p ¯ ∂ x i + ν ∂ 2 u ¯ i ∂ x j ∂ x j − ∂ u i ′ u j ′ ¯ ∂ x j . {\displaystyle {\frac {\partial {\bar {u}}_{i}}{\partial t}}+{\bar {u}}_{j}{\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}={\bar {f}}_{i}-{\frac {1}{\rho }}{\frac {\partial {\bar {p}}}{\partial x_{i}}}+\nu {\frac {\partial ^{2}{\bar {u}}_{i}}{\partial x_{j}\partial x_{j}}}-{\frac {\partial {\overline {u_{i}^{\prime }u_{j}^{\prime }}}}{\partial x_{j}}}.} On further manipulations this yields, ρ ∂ u ¯ i ∂ t + ρ u ¯ j ∂ u ¯ i ∂ x j = ρ f ¯ i + ∂ ∂ x j [ − p ¯ δ i j + 2 μ S ¯ i j − ρ u i ′ u j ′ ¯ ] {\displaystyle \rho {\frac {\partial {\bar {u}}_{i}}{\partial t}}+\rho {\bar {u}}_{j}{\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}=\rho {\bar {f}}_{i}+{\frac {\partial }{\partial x_{j}}}\left[-{\bar {p}}\delta _{ij}+2\mu {\bar {S}}_{ij}-\rho {\overline {u_{i}^{\prime }u_{j}^{\prime }}}\right]} where, S ¯ i j = 1 2 ( ∂ u ¯ i ∂ x j + ∂ u ¯ j ∂ x i ) {\displaystyle {\bar {S}}_{ij}={\frac {1}{2}}\left({\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}+{\frac {\partial {\bar {u}}_{j}}{\partial x_{i}}}\right)} is the mean rate of strain tensor. Finally, since integration in time removes the time dependence of the resultant terms, the time derivative must be eliminated, leaving: ρ u ¯ j ∂ u ¯ i ∂ x j = ρ f i ¯ + ∂ ∂ x j [ − p ¯ δ i j + 2 μ S ¯ i j − ρ u i ′ u j ′ ¯ ] . 
{\displaystyle \rho {\bar {u}}_{j}{\frac {\partial {\bar {u}}_{i}}{\partial x_{j}}}=\rho {\bar {f_{i}}}+{\frac {\partial }{\partial x_{j}}}\left[-{\bar {p}}\delta _{ij}+2\mu {\bar {S}}_{ij}-\rho {\overline {u_{i}^{\prime }u_{j}^{\prime }}}\right].} == Equations of Reynolds stress == The time evolution equation of Reynolds stress is given by: ∂ u i ′ u j ′ ¯ ∂ t + u ¯ k ∂ u i ′ u j ′ ¯ ∂ x k = − u i ′ u k ′ ¯ ∂ u ¯ j ∂ x k − u j ′ u k ′ ¯ ∂ u ¯ i ∂ x k + p ′ ρ ( ∂ u i ′ ∂ x j + ∂ u j ′ ∂ x i ) ¯ − ∂ ∂ x k ( u i ′ u j ′ u k ′ ¯ + p ′ u i ′ ¯ ρ δ j k + p ′ u j ′ ¯ ρ δ i k − ν ∂ u i ′ u j ′ ¯ ∂ x k ) − 2 ν ∂ u i ′ ∂ x k ∂ u j ′ ∂ x k ¯ {\displaystyle {\frac {\partial {\overline {u_{i}^{\prime }u_{j}^{\prime }}}}{\partial t}}+{\bar {u}}_{k}{\frac {\partial {\overline {u_{i}^{\prime }u_{j}^{\prime }}}}{\partial x_{k}}}=-{\overline {u_{i}^{\prime }u_{k}^{\prime }}}{\frac {\partial {\bar {u}}_{j}}{\partial x_{k}}}-{\overline {u_{j}^{\prime }u_{k}^{\prime }}}{\frac {\partial {\bar {u}}_{i}}{\partial x_{k}}}+{\overline {{\frac {p^{\prime }}{\rho }}\left({\frac {\partial u_{i}^{\prime }}{\partial x_{j}}}+{\frac {\partial u_{j}^{\prime }}{\partial x_{i}}}\right)}}-{\frac {\partial }{\partial x_{k}}}\left({\overline {u_{i}^{\prime }u_{j}^{\prime }u_{k}^{\prime }}}+{\frac {\overline {p^{\prime }u_{i}^{\prime }}}{\rho }}\delta _{jk}+{\frac {\overline {p^{\prime }u_{j}^{\prime }}}{\rho }}\delta _{ik}-\nu {\frac {\partial {\overline {u_{i}^{\prime }u_{j}^{\prime }}}}{\partial x_{k}}}\right)-2\nu {\overline {{\frac {\partial u_{i}^{\prime }}{\partial x_{k}}}{\frac {\partial u_{j}^{\prime }}{\partial x_{k}}}}}} This equation is very complicated. Half the trace of u i ′ u j ′ ¯ {\displaystyle {\overline {u_{i}^{\prime }u_{j}^{\prime }}}} gives the turbulence kinetic energy. The last term, ν ∂ u i ′ ∂ x k ∂ u j ′ ∂ x k ¯ {\displaystyle \nu {\overline {{\frac {\partial u_{i}^{\prime }}{\partial x_{k}}}{\frac {\partial u_{j}^{\prime }}{\partial x_{k}}}}}} , is the turbulent dissipation rate. All RANS models are based on the above equation. == Applications (RANS modelling) == When combined with the vortex lattice method (VLM) or the boundary element method (BEM), RANS modelling has been found useful for simulating the flow of water between two contra-rotating propellers: VLM or BEM is applied to the propellers themselves, while RANS is used for the dynamically changing flow between them. The RANS equations have been widely utilized as a model for determining flow characteristics and assessing wind comfort in urban environments. This computational approach can be executed through direct calculations involving the solution of the RANS equations, or through an indirect method involving the training of machine learning algorithms using the RANS equations as a basis. The direct approach is more accurate than the indirect approach but it requires expertise in numerical methods and computational fluid dynamics (CFD), as well as substantial computational resources to handle the complexity of the equations. == Notes == == See also == Favre averaging == References ==
Wikipedia/Reynolds-averaged_Navier–Stokes_equations
The cryosphere is an umbrella term for those portions of Earth's surface where water is in solid form. This includes sea ice, ice on lakes or rivers, snow, glaciers, ice caps, ice sheets, and frozen ground (which includes permafrost). Thus, there is an overlap with the hydrosphere. The cryosphere is an integral part of the global climate system. It also has important feedbacks on the climate system. These feedbacks come from the cryosphere's influence on surface energy and moisture fluxes, clouds, the water cycle, and atmospheric and oceanic circulation. Through these feedback processes, the cryosphere plays a significant role in the global climate and in climate model response to global changes. Approximately 10% of the Earth's surface is covered by ice, but this is rapidly decreasing. Current reductions in the cryosphere (caused by climate change) are measurable in ice sheet melt, glacier decline, sea ice decline, permafrost thaw and snow cover decrease. == Definition and terminology == The cryosphere describes those portions of Earth's surface where water is in solid form. Frozen water is found on the Earth's surface primarily as snow cover, freshwater ice in lakes and rivers, sea ice, glaciers, ice sheets, and frozen ground and permafrost (permanently frozen ground). The cryosphere is one of five components of the climate system. The others are the atmosphere, the hydrosphere, the lithosphere and the biosphere.: 1451  The term cryosphere comes from the Greek word kryos, meaning cold, frost or ice and the Greek word sphaira, meaning globe or ball. Cryospheric sciences is an umbrella term for the study of the cryosphere. As an interdisciplinary Earth science, many disciplines contribute to it, most notably geology, hydrology, and meteorology and climatology; in this sense, it is comparable to glaciology. The term deglaciation describes the retreat of cryospheric features. == Properties and interactions == There are several fundamental physical properties of snow and ice that modulate energy exchanges between the surface and the atmosphere. The most important properties are the surface reflectance (albedo), the ability to transfer heat (thermal diffusivity), and the ability to change state (latent heat). These physical properties, together with surface roughness, emissivity, and dielectric characteristics, have important implications for observing snow and ice from space. For example, surface roughness is often the dominant factor determining the strength of radar backscatter. Physical properties such as crystal structure, density, length, and liquid water content are important factors affecting the transfers of heat and water and the scattering of microwave energy. === Residence time and extent === The residence time of water in each of the cryospheric sub-systems varies widely. Snow cover and freshwater ice are essentially seasonal, and most sea ice, except for ice in the central Arctic, lasts only a few years if it is not seasonal. A given water particle in glaciers, ice sheets, or ground ice, however, may remain frozen for 10–100,000 years or longer, and deep ice in parts of East Antarctica may have an age approaching 1 million years. Most of the world's ice volume is in Antarctica, principally in the East Antarctic Ice Sheet. In terms of areal extent, however, Northern Hemisphere winter snow and ice extent comprise the largest area, amounting to an average of 23% of hemispheric surface area in January. 
The large areal extent and the important climatic roles of snow and ice are related to their unique physical properties. This also indicates that the ability to observe and model snow and ice-cover extent, thickness, and physical properties (radiative and thermal properties) is of particular significance for climate research. === Surface reflectance === The surface reflectance of incoming solar radiation is important for the surface energy balance (SEB). It is the ratio of reflected to incident solar radiation, commonly referred to as albedo. Climatologists are primarily interested in albedo integrated over the shortwave portion of the electromagnetic spectrum (~300 to 3500 nm), which coincides with the main solar energy input. Typically, albedo values for non-melting snow-covered surfaces are high (~80–90%) except in the case of forests. The higher albedos for snow and ice cause rapid shifts in surface reflectivity in autumn and spring in high latitudes, but the overall climatic significance of this increase is spatially and temporally modulated by cloud cover. (Planetary albedo is determined principally by cloud cover, and by the small amount of total solar radiation received in high latitudes during winter months.) Summer and autumn are times of high-average cloudiness over the Arctic Ocean so the albedo feedback associated with the large seasonal changes in sea-ice extent is greatly reduced. It was found that snow cover exhibited the greatest influence on Earth's radiative balance in the spring (April to May) period when incoming solar radiation was greatest over snow-covered areas. === Thermal properties of cryospheric elements === The thermal properties of cryospheric elements also have important climatic consequences. Snow and ice have much lower thermal diffusivities than air. Thermal diffusivity is a measure of the speed at which temperature waves can penetrate a substance. Snow and ice are many orders of magnitude less efficient at diffusing heat than air. Snow cover insulates the ground surface, and sea ice insulates the underlying ocean, decoupling the surface-atmosphere interface with respect to both heat and moisture fluxes. The flux of moisture from a water surface is eliminated by even a thin skin of ice, whereas the flux of heat through thin ice continues to be substantial until it attains a thickness in excess of 30 to 40 cm. However, even a small amount of snow on top of the ice will dramatically reduce the heat flux and slow down the rate of ice growth. The insulating effect of snow also has major implications for the hydrological cycle. In non-permafrost regions, the insulating effect of snow is such that only near-surface ground freezes and deep-water drainage is uninterrupted. While snow and ice act to insulate the surface from large energy losses in winter, they also act to retard warming in the spring and summer because of the large amount of energy required to melt ice (the latent heat of fusion, 3.34 × 10⁵ J/kg at 0 °C). However, the strong static stability of the atmosphere over areas of extensive snow or ice tends to confine the immediate cooling effect to a relatively shallow layer, so that associated atmospheric anomalies are usually short-lived and local to regional in scale. In some areas of the world such as Eurasia, however, the cooling associated with a heavy snowpack and moist spring soils is known to play a role in modulating the summer monsoon circulation. 
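The climatic leverage of a high albedo and a large latent heat of fusion can be made concrete with back-of-the-envelope arithmetic. The short Python sketch below uses assumed, illustrative values (a snow albedo of 0.85 within the range quoted above, an assumed bare-ground albedo of 0.20, and an assumed daily mean insolation of 200 W/m2) together with the latent heat of fusion quoted in the text to compare the absorbed shortwave energy over the two surfaces and the mass of ice that energy could melt in a day.

    # Back-of-the-envelope comparison of absorbed solar energy over snow and bare ground,
    # and the ice mass that energy could melt. Insolation and albedo values are assumed.
    insolation = 200.0           # daily mean shortwave flux, W/m^2 (assumed)
    albedo_snow = 0.85           # non-melting snow albedo, within the ~80-90% range above
    albedo_ground = 0.20         # snow-free ground albedo (assumed)
    latent_heat_fusion = 3.34e5  # J/kg at 0 deg C, as quoted in the text
    seconds_per_day = 86_400

    for name, albedo in (("snow", albedo_snow), ("bare ground", albedo_ground)):
        absorbed_flux = insolation * (1.0 - albedo)         # W/m^2 absorbed at the surface
        absorbed_energy = absorbed_flux * seconds_per_day   # J/m^2 accumulated over one day
        melted_mass = absorbed_energy / latent_heat_fusion  # kg/m^2 of ice, if all energy went to melting
        print(f"{name:12s}: absorbs {absorbed_flux:6.1f} W/m^2, "
              f"enough to melt about {melted_mass:5.1f} kg/m^2 of ice per day")

In this toy budget, replacing snow with bare ground increases the absorbed shortwave energy roughly fivefold, which is the essence of the snow- and ice-albedo feedback discussed in the next section.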
=== Climate change feedback mechanisms === There are numerous cryosphere-climate feedbacks in the global climate system. These operate over a wide range of spatial and temporal scales from local seasonal cooling of air temperatures to hemispheric-scale variations in ice sheets over time scales of thousands of years. The feedback mechanisms involved are often complex and incompletely understood. For example, Curry et al. (1995) showed that the so-called "simple" sea ice-albedo feedback involved complex interactions with lead fraction, melt ponds, ice thickness, snow cover, and sea-ice extent. The role of snow cover in modulating the monsoon is just one example of a short-term cryosphere-climate feedback involving the land surface and the atmosphere. == Components == === Glaciers and ice sheets === Ice sheets and glaciers are flowing ice masses that rest on solid land. They are controlled by snow accumulation, surface and basal melt, calving into surrounding oceans or lakes and internal dynamics. The latter results from gravity-driven creep flow ("glacial flow") within the ice body and sliding on the underlying land, which leads to thinning and horizontal spreading. Any imbalance of this dynamic equilibrium between mass gain, loss and transport due to flow results in either growing or shrinking ice bodies. Relationships between global climate and changes in ice extent are complex. The mass balance of land-based glaciers and ice sheets is determined by the accumulation of snow, mostly in winter, and warm-season ablation due primarily to net radiation and turbulent heat fluxes to melting ice and snow from warm-air advection. Where ice masses terminate in the ocean, iceberg calving is the major contributor to mass loss. In this situation, the ice margin may extend out into deep water as a floating ice shelf, such as that in the Ross Sea. === Sea ice === Sea ice covers much of the polar oceans and forms by freezing of sea water. Satellite data since the early 1970s reveal considerable seasonal, regional, and interannual variability in the sea ice covers of both hemispheres. Seasonally, sea-ice extent in the Southern Hemisphere varies by a factor of 5, from a minimum of 3–4 million km² in February to a maximum of 17–20 million km² in September. The seasonal variation is much less in the Northern Hemisphere, where the confined nature and high latitudes of the Arctic Ocean result in a much larger perennial ice cover, and the surrounding land limits the equatorward extent of wintertime ice. Thus, the seasonal variability in Northern Hemisphere ice extent varies by only a factor of 2, from a minimum of 7–9 million km² in September to a maximum of 14–16 million km² in March. The ice cover exhibits much greater interannual variability at the regional scale than it does at the hemispheric scale. For instance, in the region of the Sea of Okhotsk and Japan, maximum ice extent decreased from 1.3 million km² in 1983 to 0.85 million km² in 1984, a decrease of 35%, before rebounding the following year to 1.2 million km². The regional fluctuations in both hemispheres are such that for any several-year period of the satellite record some regions exhibit decreasing ice coverage while others exhibit increasing ice cover. === Frozen ground and permafrost === === Snow cover === Most of the Earth's snow-covered area is located in the Northern Hemisphere, and varies seasonally from 46.5 million km² in January to 3.8 million km² in August.
Snow cover is an extremely important storage component in the water balance, especially seasonal snowpacks in mountainous areas of the world. Though limited in extent, seasonal snowpacks in the Earth's mountain ranges account for the major source of the runoff for stream flow and groundwater recharge over wide areas of the midlatitudes. For example, over 85% of the annual runoff from the Colorado River basin originates as snowmelt. Snowmelt runoff from the Earth's mountains fills the rivers and recharges the aquifers that over a billion people depend on for their water resources. Furthermore, over 40% of the world's protected areas are in mountains, attesting to their value both as unique ecosystems needing protection and as recreation areas for humans. === Ice on lakes and rivers === Ice forms on rivers and lakes in response to seasonal cooling. The sizes of the ice bodies involved are too small to exert anything other than localized climatic effects. However, the freeze-up/break-up processes respond to large-scale and local weather factors, such that considerable interannual variability exists in the dates of appearance and disappearance of the ice. Long series of lake-ice observations can serve as a proxy climate record, and the monitoring of freeze-up and break-up trends may provide a convenient integrated and seasonally-specific index of climatic perturbations. Information on river-ice conditions is less useful as a climatic proxy because ice formation is strongly dependent on river-flow regime, which is affected by precipitation, snow melt, and watershed runoff as well as being subject to human interference that directly modifies channel flow, or that indirectly affects the runoff via land-use practices. Lake freeze-up depends on the heat storage in the lake and therefore on its depth, the rate and temperature of any inflow, and water-air energy fluxes. Information on lake depth is often unavailable, although some indication of the depth of shallow lakes in the Arctic can be obtained from airborne radar imagery during late winter (Sellman et al. 1975) and spaceborne optical imagery during summer (Duguay and Lafleur 1997). The timing of breakup is modified by snow depth on the ice as well as by ice thickness and freshwater inflow. == Changes caused by climate change == === Ice sheet melt === === Decline of glaciers === === Sea ice decline === === Permafrost thaw === === Snow cover decrease === Studies in 2021 found that Northern Hemisphere snow cover has been decreasing since 1978, along with snow depth. Paleoclimate observations show that such changes are unprecedented over recent millennia in Western North America. North American winter snow cover increased during the 20th century, largely in response to an increase in precipitation. Because of its close relationship with hemispheric air temperature, snow cover is an important indicator of climate change. Global warming is expected to result in major changes to the partitioning of snow and rainfall, and to the timing of snowmelt, which will have important implications for water use and management. These changes also involve potentially important decadal and longer time-scale feedbacks to the climate system through temporal and spatial changes in soil moisture and runoff to the oceans (Walsh 1995). Freshwater fluxes from the snow cover into the marine environment may be important, as the total flux is probably of the same magnitude as desalinated ridging and rubble areas of sea ice.
In addition, there is an associated pulse of precipitated pollutants which accumulate over the Arctic winter in snowfall and are released into the ocean upon ablation of the sea ice. == See also == Cryobiology International Association of Cryospheric Sciences (IACS) Polar regions of Earth Special Report on the Ocean and Cryosphere in a Changing Climate Water cycle == References == == External links == Canadian Cryospheric Information Network Near-real-time overview of global ice concentration and snow extent National Snow and Ice Data Center
Wikipedia/Cryosphere_science
While geostrophic motion refers to the wind that would result from an exact balance between the Coriolis force and horizontal pressure-gradient forces, quasi-geostrophic (QG) motion refers to flows where the Coriolis force and pressure gradient forces are almost in balance, but with inertia also having an effect.

== Origin ==
Atmospheric and oceanographic flows take place over horizontal length scales which are very large compared to their vertical length scale, and so they can be described using the shallow water equations. The Rossby number is a dimensionless number which characterises the strength of inertia compared to the strength of the Coriolis force. The quasi-geostrophic equations are approximations to the shallow water equations in the limit of small Rossby number, so that inertial forces are an order of magnitude smaller than the Coriolis and pressure forces. If the Rossby number is equal to zero then we recover geostrophic flow. The quasi-geostrophic equations were first formulated by Jule Charney.

== Derivation of the single-layer QG equations ==
In Cartesian coordinates, the components of the geostrophic wind are

$f_0 v_g = \frac{\partial \Phi}{\partial x}$ (1a)

$f_0 u_g = -\frac{\partial \Phi}{\partial y}$ (1b)

where $\Phi$ is the geopotential. The geostrophic vorticity $\zeta_g = \hat{\mathbf{k}} \cdot \nabla \times \mathbf{V}_g$ can therefore be expressed in terms of the geopotential as

$\zeta_g = \frac{\partial v_g}{\partial x} - \frac{\partial u_g}{\partial y} = \frac{1}{f_0}\left(\frac{\partial^2 \Phi}{\partial x^2} + \frac{\partial^2 \Phi}{\partial y^2}\right) = \frac{1}{f_0}\nabla^2 \Phi$ (2)

Equation (2) can be used to find $\zeta_g(x,y)$ from a known field $\Phi(x,y)$. Alternatively, it can also be used to determine $\Phi$ from a known distribution of $\zeta_g$ by inverting the Laplacian operator.

The quasi-geostrophic vorticity equation can be obtained from the $x$ and $y$ components of the quasi-geostrophic momentum equation, which can in turn be derived from the horizontal momentum equation

$\frac{D\mathbf{V}}{Dt} + f\,\hat{\mathbf{k}} \times \mathbf{V} = -\nabla \Phi$ (3)

The material derivative in (3) is defined by

$\frac{D}{Dt} = \left(\frac{\partial}{\partial t}\right)_p + (\mathbf{V} \cdot \nabla)_p + \omega \frac{\partial}{\partial p}$ (4)

where $\omega = \frac{Dp}{Dt}$ is the pressure change following the motion. The horizontal velocity $\mathbf{V}$ can be separated into a geostrophic part $\mathbf{V}_g$ and an ageostrophic part $\mathbf{V}_a$:

$\mathbf{V} = \mathbf{V}_g + \mathbf{V}_a$ (5)

Two important assumptions of the quasi-geostrophic approximation are:

1. $\mathbf{V}_g \gg \mathbf{V}_a$, or, more precisely, $\frac{|\mathbf{V}_a|}{|\mathbf{V}_g|} \sim O(\text{Rossby number})$;
2. the beta-plane approximation $f = f_0 + \beta y$ with $\frac{\beta y}{f_0} \sim O(\text{Rossby number})$.

The second assumption justifies letting the Coriolis parameter have a constant value $f_0$ in the geostrophic approximation and approximating its variation in the Coriolis force term by $f_0 + \beta y$. However, because the acceleration following the motion, which is given in (3) as the difference between the Coriolis force and the pressure gradient force, depends on the departure of the actual wind from the geostrophic wind, it is not permissible to simply replace the velocity by its geostrophic velocity in the Coriolis term. The acceleration in (3) can then be rewritten as

$f\,\hat{\mathbf{k}} \times \mathbf{V} + \nabla \Phi = (f_0 + \beta y)\,\hat{\mathbf{k}} \times (\mathbf{V}_g + \mathbf{V}_a) - f_0\,\hat{\mathbf{k}} \times \mathbf{V}_g = f_0\,\hat{\mathbf{k}} \times \mathbf{V}_a + \beta y\,\hat{\mathbf{k}} \times \mathbf{V}_g$ (6)

The approximate horizontal momentum equation thus has the form

$\frac{D_g \mathbf{V}_g}{Dt} = -f_0\,\hat{\mathbf{k}} \times \mathbf{V}_a - \beta y\,\hat{\mathbf{k}} \times \mathbf{V}_g$ (7)

Expressing equation (7) in terms of its components,

$\frac{D_g u_g}{Dt} - f_0 v_a - \beta y v_g = 0$ (8a)

$\frac{D_g v_g}{Dt} + f_0 u_a + \beta y u_g = 0$ (8b)

Taking $\frac{\partial (8b)}{\partial x} - \frac{\partial (8a)}{\partial y}$, and noting that the geostrophic wind is nondivergent (i.e., $\nabla \cdot \mathbf{V}_g = 0$), the vorticity equation is

$\frac{D_g \zeta_g}{Dt} = -f_0 \left(\frac{\partial u_a}{\partial x} + \frac{\partial v_a}{\partial y}\right) - \beta v_g$ (9)

Because $f$ depends only on $y$ (i.e., $\frac{D_g f}{Dt} = \mathbf{V}_g \cdot \nabla f = \beta v_g$), and because the divergence of the ageostrophic wind can be written in terms of $\omega$ based on the continuity equation

$\frac{\partial u_a}{\partial x} + \frac{\partial v_a}{\partial y} + \frac{\partial \omega}{\partial p} = 0,$

equation (9) can therefore be written as

$\frac{\partial \zeta_g}{\partial t} = -\mathbf{V}_g \cdot \nabla (\zeta_g + f) + f_0 \frac{\partial \omega}{\partial p}$ (10)

=== The same identity using the geopotential ===
Defining the geopotential tendency $\chi = \frac{\partial \Phi}{\partial t}$ and noting that partial differentiation may be reversed, equation (10) can be rewritten in terms of $\chi$ as

$\frac{1}{f_0}\nabla^2 \chi = -\mathbf{V}_g \cdot \nabla \left(\frac{1}{f_0}\nabla^2 \Phi + f\right) + f_0 \frac{\partial \omega}{\partial p}$ (11)

The right-hand side of equation (11) depends on the variables $\Phi$ and $\omega$. An analogous equation dependent on these two variables can be derived from the thermodynamic energy equation

$\left(\frac{\partial}{\partial t} + \mathbf{V}_g \cdot \nabla\right)\left(-\frac{\partial \Phi}{\partial p}\right) - \sigma \omega = \frac{kJ}{p}$ (12)

where $\sigma = -\frac{R T_0}{p}\frac{d \log \Theta_0}{dp}$ and $\Theta_0$ is the potential temperature corresponding to the basic state temperature. In the midtroposphere, $\sigma \approx 2.5 \times 10^{-6}\ \mathrm{m^2\,Pa^{-2}\,s^{-2}}$.

Multiplying (12) by $\frac{f_0}{\sigma}$, differentiating with respect to $p$ and using the definition of $\chi$ yields

$\frac{\partial}{\partial p}\left(\frac{f_0}{\sigma}\frac{\partial \chi}{\partial p}\right) = -\frac{\partial}{\partial p}\left(\frac{f_0}{\sigma}\,\mathbf{V}_g \cdot \nabla \frac{\partial \Phi}{\partial p}\right) - f_0 \frac{\partial \omega}{\partial p} - f_0 \frac{\partial}{\partial p}\left(\frac{kJ}{\sigma p}\right)$ (13)

If, for simplicity, $J$ is set to 0, eliminating $\omega$ between equations (11) and (13) yields

$\left(\nabla^2 + \frac{\partial}{\partial p}\left(\frac{f_0^2}{\sigma}\frac{\partial}{\partial p}\right)\right)\chi = -f_0\,\mathbf{V}_g \cdot \nabla \left(\frac{1}{f_0}\nabla^2 \Phi + f\right) - \frac{\partial}{\partial p}\left(\frac{f_0^2}{\sigma}\,\mathbf{V}_g \cdot \nabla \frac{\partial \Phi}{\partial p}\right)$ (14)

Equation (14) is often referred to as the geopotential tendency equation. It relates the local geopotential tendency (term A) to the vorticity advection distribution (term B) and the thickness advection (term C).

=== The same identity using the quasi-geostrophic potential vorticity ===
Using the chain rule of differentiation, term C can be written as

$-\mathbf{V}_g \cdot \nabla \frac{\partial}{\partial p}\left(\frac{f_0^2}{\sigma}\frac{\partial \Phi}{\partial p}\right) - \frac{f_0^2}{\sigma}\frac{\partial \mathbf{V}_g}{\partial p} \cdot \nabla \frac{\partial \Phi}{\partial p}$ (15)

But based on the thermal wind relation,

$f_0 \frac{\partial \mathbf{V}_g}{\partial p} = \hat{\mathbf{k}} \times \nabla \left(\frac{\partial \Phi}{\partial p}\right).$

In other words, $\frac{\partial \mathbf{V}_g}{\partial p}$ is perpendicular to $\nabla\left(\frac{\partial \Phi}{\partial p}\right)$, and the second term in equation (15) disappears. The first term can be combined with term B in equation (14), which, upon division by $f_0$, can be expressed in the form of a conservation equation

$\left(\frac{\partial}{\partial t} + \mathbf{V}_g \cdot \nabla\right) q = \frac{D_g q}{Dt} = 0$ (16)

where $q$ is the quasi-geostrophic potential vorticity defined by

$q = \frac{1}{f_0}\nabla^2 \Phi + f + \frac{\partial}{\partial p}\left(\frac{f_0}{\sigma}\frac{\partial \Phi}{\partial p}\right)$ (17)

The three terms of equation (17) are, from left to right, the geostrophic relative vorticity, the planetary vorticity and the stretching vorticity.

== Implications ==
As an air parcel moves about in the atmosphere, its relative, planetary and stretching vorticities may change, but equation (17) shows that the sum of the three must be conserved following the geostrophic motion. Equation (17) can be used to find $q$ from a known field $\Phi$. Alternatively, it can also be used to predict the evolution of the geopotential field, given an initial distribution of $\Phi$ and suitable boundary conditions, by using an inversion process. More importantly, the quasi-geostrophic system reduces the five-variable primitive equations to a one-equation system where all variables such as $u_g$, $v_g$ and $T$ can be obtained from $q$ or the height $\Phi$. Also, because $\zeta_g$ and $\mathbf{V}_g$ are both defined in terms of $\Phi(x,y,p,t)$, the vorticity equation can be used to diagnose vertical motion provided that the fields of both $\Phi$ and $\frac{\partial \Phi}{\partial t}$ are known.

== References ==
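As a concrete illustration of the diagnostic relation (2), the following minimal sketch (Python with NumPy; the Coriolis parameter, grid spacing and synthetic geopotential field are assumed values, not taken from the derivation above) computes the geostrophic vorticity from a gridded geopotential using a five-point finite-difference Laplacian. The same inversion idea, applied to equation (17), underlies the prediction procedure mentioned under Implications.

```python
# Minimal sketch (assumed values): diagnosing geostrophic vorticity from a
# gridded geopotential field via equation (2), zeta_g = (1/f0) * Laplacian(Phi).
# A synthetic Gaussian "low" stands in for the geopotential field.
import numpy as np

f0 = 1.0e-4                      # Coriolis parameter at the reference latitude, s^-1
dx = dy = 100e3                  # grid spacing, m

x = np.arange(-2e6, 2e6, dx)     # 4000 km square domain
y = np.arange(-2e6, 2e6, dy)
X, Y = np.meshgrid(x, y, indexing="xy")

# Synthetic geopotential: a 500 hPa-like surface with a 150 m Gaussian depression
phi = 9.81 * (5500.0 - 150.0 * np.exp(-(X**2 + Y**2) / (2 * (500e3) ** 2)))

# Five-point finite-difference Laplacian (interior points only)
lap = np.zeros_like(phi)
lap[1:-1, 1:-1] = (
    (phi[1:-1, 2:] - 2 * phi[1:-1, 1:-1] + phi[1:-1, :-2]) / dx**2
    + (phi[2:, 1:-1] - 2 * phi[1:-1, 1:-1] + phi[:-2, 1:-1]) / dy**2
)

zeta_g = lap / f0                # equation (2)
print(f"Maximum geostrophic vorticity: {zeta_g.max():.2e} s^-1")
```

For the assumed depression this gives a cyclonic vorticity of order 10⁻⁴ s⁻¹, a typical synoptic-scale value.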
Wikipedia/Quasi-geostrophic_equations
Meteorology is the scientific study of the Earth's atmosphere and short-term atmospheric phenomena (i.e. weather), with a focus on weather forecasting. It has applications in the military, aviation, energy production, transport, agriculture, construction, weather warnings and disaster management. Along with climatology, atmospheric physics and atmospheric chemistry, meteorology forms the broader field of the atmospheric sciences. The interactions between Earth's atmosphere and its oceans (notably El Niño and La Niña) are studied in the interdisciplinary field of hydrometeorology. Other interdisciplinary areas include biometeorology, space weather and planetary meteorology. Marine weather forecasting relates meteorology to maritime and coastal safety, based on atmospheric interactions with large bodies of water. Meteorologists study meteorological phenomena driven by solar radiation, Earth's rotation, ocean currents and other factors. These include everyday weather like clouds, precipitation, wind patterns as well as severe weather events such as tropical cyclones and severe winter storms. Such phenomena are quantified using variables like temperature, pressure and humidity, which are then used to forecast weather at local (microscale), regional (mesoscale and synoptic scale), and global scales. Meteorologists collect data using basic instruments like thermometers, barometers and weather vanes (for surface-level measurements), alongside advanced tools like weather satellites, balloons, reconnaissance aircraft, buoys and radars. The World Meteorological Organization (WMO) ensures international standardization of meteorological research. The study of meteorology dates back millennia. Ancient civilizations tried to predict weather through folklore, astrology and religious rituals. Aristotle's treatise Meteorology sums up early observations of the field, which advanced little during early medieval times, but experienced a resurgence during the Renaissance, when Alhazen and Descartes challenged Aristotelian theories, emphasizing scientific methods. In the 18th century, accurate measurement tools (e.g. barometer and thermometer) were developed and the first meteorological society was founded. In the 19th century, telegraph-based weather observation networks were formed across broad regions. In the 20th century, numerical weather prediction (NWP), coupled with advanced satellite and radar technology, introduced sophisticated forecasting models. Later, computers revolutionized forecasting by processing vast datasets in real time and automatically solving modelling equations. 21st-century meteorology is highly accurate and driven by big data and supercomputing. It is adopting innovations like machine learning, ensemble forecasting and high-resolution global climate modeling. Climate change-induced extreme weather poses new challenges for forecasting and research, while inherent uncertainty remains because of the atmosphere's chaotic nature (see butterfly effect). == Etymology == The word meteorology is from the Ancient Greek μετέωρος metéōros (meteor) and -λογία -logia (-(o)logy), meaning "the study of things high in the air". == History == === Ancient meteorology up to the time of Aristotle === Early attempts at predicting weather were often related to prophecy and divining, and were sometimes based on astrological ideas. Ancient religions believed meteorological phenomena to be under the control of the gods. 
The ability to predict rains and floods based on annual cycles was evidently used by humans at least from the time of agricultural settlement if not earlier. Early approaches to predicting weather were based on astrology and were practiced by priests. The Egyptians had rain-making rituals as early as 3500 BC. Ancient Indian Upanishads contain mentions of clouds and seasons. The Samaveda mentions sacrifices to be performed when certain phenomena were noticed. Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides evidence of weather observation. Cuneiform inscriptions on Babylonian tablets included associations between thunder and rain. The Chaldeans differentiated the 22° and 46° halos. The ancient Greeks were the first to make theories about the weather. Many natural philosophers studied the weather. However, as meteorological instruments did not exist, the inquiry was largely qualitative, and could only be judged by more general theoretical speculations. Herodotus states that Thales predicted the solar eclipse of 585 BC. He studied Babylonian equinox tables. According to Seneca, he explained that the cause of the Nile's annual floods was due to northerly winds hindering its descent by the sea. Anaximander and Anaximenes thought that thunder and lightning was caused by air smashing against the cloud, thus kindling the flame. Early meteorological theories generally considered that there was a fire-like substance in the atmosphere. Anaximander defined wind as a flowing of air, but this was not generally accepted for centuries. A theory to explain summer hail was first proposed by Anaxagoras. He observed that air temperature decreased with increasing height and that clouds contain moisture. He also noted that heat caused objects to rise, and therefore the heat on a summer day would drive clouds to an altitude where the moisture would freeze. Empedocles theorized on the change of the seasons. He believed that fire and water opposed each other in the atmosphere, and when fire gained the upper hand, the result was summer, and when water did, it was winter. Democritus also wrote about the flooding of the Nile. He said that snow in northern parts of the world melted during the summer solstice. This would cause vapors to form clouds, which would cause storms when driven to the Nile by northerly winds, thus filling the lakes and the Nile. Hippocrates inquired into the effect of weather on health. Eudoxus claimed that bad weather followed four-year periods, according to Pliny. === Aristotelian meteorology === These early observations would form the basis for Aristotle's Meteorology, written in 350 BC. Aristotle is considered the founder of meteorology. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle. His work would remain an authority on meteorology for nearly 2,000 years. The book De Mundo (composed before 250 BC or between 350 and 200 BC) noted: If the flashing body is set on fire and rushes violently to the earth it is called a thunderbolt ; if it be only half of fire, but violent also and massive, it is called a meteor ; if it is entirely free from fire, it is called a smoking bolt. They are all called 'swooping bolts', because they swoop down upon the earth. 
Lightning is sometimes smoky, and is then called 'smouldering lightning' ; sometimes it darts quickly along, and is then said to be 'vivid' ; at other times it travels in crooked lines, and is called 'forked lightning' ; when it swoops down upon some object it is called 'swooping lightning'. After Aristotle, progress in meteorology stalled for a long time. Theophrastus compiled a book on weather forecasting, called the Book of Signs, as well as On Winds. He gave hundreds of signs for weather phenomena for a period up to a year. His system was based on dividing the year by the setting and the rising of the Pleiad, halves into solstices and equinoxes, and the continuity of the weather for those periods. He also divided months into the new moon, fourth day, eighth day and full moon, in likelihood of a change in the weather occurring. The day was divided into sunrise, mid-morning, noon, mid-afternoon and sunset, with corresponding divisions of the night, with change being likely at one of these divisions. Applying the divisions and a principle of balance in the yearly weather, he came up with forecasts like that if a lot of rain falls in the winter, the spring is usually dry. Rules based on actions of animals are also present in his work, like that if a dog rolls on the ground, it is a sign of a storm. Shooting stars and the Moon were also considered significant. However, he made no attempt to explain these phenomena, referring only to the Aristotelian method. The work of Theophrastus remained a dominant influence in weather forecasting for nearly 2,000 years. === Meteorology after Aristotle === Meteorology continued to be studied and developed over the centuries, but it was not until the Renaissance in the 14th to 17th centuries that significant advancements were made in the field. Scientists such as Galileo and Descartes introduced new methods and ideas, leading to the scientific revolution in meteorology. Speculation on the cause of the flooding of the Nile ended when Eratosthenes, according to Proclus, stated that it was known that man had gone to the sources of the Nile and observed the rains, although interest in its implications continued. During the era of Roman Greece and Europe, scientific interest in meteorology waned. In the 1st century BC, most natural philosophers claimed that the clouds and winds extended up to 111 miles, but Posidonius thought that they reached up to five miles, after which the air is clear, liquid and luminous. He closely followed Aristotle's theories. By the end of the second century BC, the center of science shifted from Athens to Alexandria, home to the ancient Library of Alexandria. In the 2nd century AD, Ptolemy's Almagest dealt with meteorology, because it was considered a subset of astronomy. He gave several astrological weather predictions. He constructed a map of the world divided into climatic zones by their illumination, in which the length of the Summer solstice increased by half an hour per zone between the equator and the Arctic. Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations. In 25 AD, Pomponius Mela, a Roman geographer, formalized the climatic zone system. In 63–64 AD, Seneca wrote Naturales quaestiones. It was a compilation and synthesis of ancient Greek theories. However, theology was of foremost importance to Seneca, and he believed that phenomena such as lightning were tied to fate. The second book(chapter) of Pliny's Natural History covers meteorology. 
He states that more than twenty ancient Greek authors studied meteorology. He did not make any personal contributions, and the value of his work is in preserving earlier speculation, much like Seneca's work. From 400 to 1100, scientific learning in Europe was preserved by the clergy. Isidore of Seville devoted a considerable attention to meteorology in Etymologiae, De ordine creaturum and De natura rerum. Bede the Venerable was the first Englishman to write about the weather in De Natura Rerum in 703. The work was a summary of then extant classical sources. However, Aristotle's works were largely lost until the twelfth century, including Meteorologica. Isidore and Bede were scientifically minded, but they adhered to the letter of Scripture. Islamic civilization translated many ancient works into Arabic which were transmitted and translated in western Europe to Latin. In the 9th century, Al-Dinawari wrote the Kitab al-Nabat (Book of Plants), in which he deals with the application of meteorology to agriculture during the Arab Agricultural Revolution. He describes the meteorological character of the sky, the planets and constellations, the sun and moon, the lunar phases indicating seasons and rain, the anwa (heavenly bodies of rain), and atmospheric phenomena such as winds, thunder, lightning, snow, floods, valleys, rivers, lakes. In 1021, Alhazen showed that atmospheric refraction is also responsible for twilight in Opticae thesaurus; he estimated that twilight begins when the sun is 19 degrees below the horizon, and also used a geometric determination based on this to estimate the maximum possible height of the Earth's atmosphere as 52,000 passim (about 49 miles, or 79 km). Adelard of Bath was one of the early translators of the classics. He also discussed meteorological topics in his Quaestiones naturales. He thought dense air produced propulsion in the form of wind. He explained thunder by saying that it was due to ice colliding in clouds, and in Summer it melted. In the thirteenth century, Aristotelian theories reestablished dominance in meteorology. For the next four centuries, meteorological work by and large was mostly commentary. It has been estimated over 156 commentaries on the Meteorologica were written before 1650. Experimental evidence was less important than appeal to the classics and authority in medieval thought. In the thirteenth century, Roger Bacon advocated experimentation and the mathematical approach. In his Opus majus, he followed Aristotle's theory on the atmosphere being composed of water, air, and fire, supplemented by optics and geometric proofs. He noted that Ptolemy's climatic zones had to be adjusted for topography. St. Albert the Great was the first to propose that each drop of falling rain had the form of a small sphere, and that this form meant that the rainbow was produced by light interacting with each raindrop. Roger Bacon was the first to calculate the angular size of the rainbow. He stated that a rainbow summit cannot appear higher than 42 degrees above the horizon. In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon. Theoderic went further and also explained the secondary rainbow. By the middle of the sixteenth century, meteorology had developed along two lines: theoretical science based on Meteorologica, and astrological weather forecasting. 
The pseudoscientific prediction by natural signs became popular and enjoyed the protection of the church and princes. This was supported by scientists like Johannes Müller, Leonard Digges, and Johannes Kepler. However, there were skeptics. In the 14th century, Nicole Oresme believed that weather forecasting was possible, but that the rules for it were unknown at the time. Astrological influence in meteorology persisted until the eighteenth century. Gerolamo Cardano's De subtilitate (1550) was the first work to challenge fundamental aspects of Aristotelian theory. Cardano maintained that there were only three basic elements: earth, air, and water. He discounted fire because it needed material to spread and produced nothing. Cardano thought there were two kinds of air: free air and enclosed air. The former destroyed inanimate things and preserved animate things, while the latter had the opposite effect. René Descartes's Discourse on the Method (1637) typifies the beginning of the scientific revolution in meteorology. His scientific method had four principles: to never accept anything unless one clearly knew it to be true; to divide every difficult problem into small problems to tackle; to proceed from the simple to the complex, always seeking relationships; to be as complete and thorough as possible with no prejudice. In the appendix Les Météores, he applied these principles to meteorology. He discussed terrestrial bodies and vapors which arise from them, proceeding to explain the formation of clouds from drops of water, and winds, clouds then dissolving into rain, hail and snow. He also discussed the effects of light on the rainbow. Descartes hypothesized that all bodies were composed of small particles of different shapes and interwovenness. All of his theories were based on this hypothesis. He explained the rain as caused by clouds becoming too large for the air to hold, and that clouds became snow if the air was not warm enough to melt them, or hail if they met colder wind. Like that of his predecessors, Descartes's method was deductive, as meteorological instruments were not developed and extensively used yet. He introduced the Cartesian coordinate system to meteorology and stressed the importance of mathematics in natural science. His work established meteorology as a legitimate branch of physics. In the 18th century, the invention of the thermometer and barometer allowed for more accurate measurements of temperature and pressure, leading to a better understanding of atmospheric processes. This century also saw the birth of the first meteorological society, the Societas Meteorologica Palatina in 1780. In the 19th century, advances in technology such as the telegraph and photography led to the creation of weather observing networks and the ability to track storms. Additionally, scientists began to use mathematical models to make predictions about the weather. The 20th century saw the development of radar and satellite technology, which greatly improved the ability to observe and track weather systems. In addition, meteorologists and atmospheric scientists started to create the first weather forecasts and temperature predictions. In the 20th and 21st centuries, with the advent of computer models and big data, meteorology has become increasingly dependent on numerical methods and computer simulations. This has greatly improved weather forecasting and climate predictions. Additionally, meteorology has expanded to include other areas such as air quality, atmospheric chemistry, and climatology.
The advancement in observational, theoretical and computational technologies has enabled ever more accurate weather predictions and understanding of weather patterns and air pollution. Today, with the advancement in weather forecasting and satellite technology, meteorology has become an integral part of everyday life, and is used for many purposes such as aviation, agriculture, and disaster management. === Instruments and classification scales === In 1441, King Sejong's son, Prince Munjong of Korea, invented the first standardized rain gauge. These were sent throughout the Joseon dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, which is known as the first anemometer. In 1607, Galileo Galilei constructed a thermoscope. In 1611, Johannes Kepler wrote the first scientific treatise on snow crystals: "Strena Seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow)." In 1643, Evangelista Torricelli invented the mercury barometer. In 1662, Sir Christopher Wren invented the mechanical, self-emptying, tipping bucket rain gauge. In 1714, Gabriel Fahrenheit created a reliable scale for measuring temperature with a mercury-type thermometer. In 1742, Anders Celsius, a Swedish astronomer, proposed the "centigrade" temperature scale, the predecessor of the current Celsius scale. In 1783, the first hair hygrometer was demonstrated by Horace-Bénédict de Saussure. In 1802–1803, Luke Howard wrote On the Modification of Clouds, in which he assigned Latin names to cloud types. In 1806, Francis Beaufort introduced his system for classifying wind speeds. Near the end of the 19th century, the first cloud atlases were published, including the International Cloud Atlas, which has remained in print ever since. The April 1960 launch of the first successful weather satellite, TIROS-1, marked the beginning of the age in which weather information became available globally. === Atmospheric composition research === In 1648, Blaise Pascal rediscovered that atmospheric pressure decreases with height, and deduced that there is a vacuum above the atmosphere. In 1738, Daniel Bernoulli published Hydrodynamics, initiating the kinetic theory of gases and establishing the basic laws for the theory of gases. In 1761, Joseph Black discovered that ice absorbs heat without changing its temperature when melting. In 1772, Black's student Daniel Rutherford discovered nitrogen, which he called phlogisticated air, and together they developed the phlogiston theory. In 1777, Antoine Lavoisier discovered oxygen and developed an explanation for combustion. In 1783, in Lavoisier's essay "Reflexions sur le phlogistique," he deprecated the phlogiston theory and proposed a caloric theory. In 1804, John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. In 1808, John Dalton defended caloric theory in A New System of Chemistry and described how it combines with matter, especially gases; he proposed that the heat capacity of gases varies inversely with atomic weight. In 1824, Sadi Carnot analyzed the efficiency of steam engines using caloric theory; he developed the notion of a reversible process and, in postulating that no such thing exists in nature, laid the foundation for the second law of thermodynamics.
In 1716, Edmund Halley suggested that aurorae are caused by "magnetic effluvia" moving along the Earth's magnetic field lines. === Research into cyclones and air flow === In 1494, Christopher Columbus experienced a tropical cyclone, which led to the first written European account of a hurricane. In 1686, Edmund Halley presented a systematic study of the trade winds and monsoons and identified solar heating as the cause of atmospheric motions. In 1735, an ideal explanation of global circulation through study of the trade winds was written by George Hadley. In 1743, when Benjamin Franklin was prevented from seeing a lunar eclipse by a hurricane, he decided that cyclones move in a contrary manner to the winds at their periphery. Understanding the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes, and the air within deflected by the Coriolis force resulting in the prevailing westerly winds. Late in the 19th century, the motion of air masses along isobars was understood to be the result of the large-scale interaction of the pressure gradient force and the deflecting force. By 1912, this deflecting force was named the Coriolis effect. Just after World War I, a group of meteorologists in Norway led by Vilhelm Bjerknes developed the Norwegian cyclone model that explains the generation, intensification and ultimate decay (the life cycle) of mid-latitude cyclones, and introduced the idea of fronts, that is, sharply defined boundaries between air masses. The group included Carl-Gustaf Rossby (who was the first to explain the large scale atmospheric flow in terms of fluid dynamics), Tor Bergeron (who first determined how rain forms) and Jacob Bjerknes. === Observation networks and weather forecasting === In the late 16th century and first half of the 17th century a range of meteorological instruments were invented – the thermometer, barometer, hydrometer, as well as wind and rain gauges. In the 1650s natural philosophers started using these instruments to systematically record weather observations. Scientific academies established weather diaries and organised observational networks. In 1654, Ferdinando II de Medici established the first weather observing network, that consisted of meteorological stations in Florence, Cutigliano, Vallombrosa, Bologna, Parma, Milan, Innsbruck, Osnabrück, Paris and Warsaw. The collected data were sent to Florence at regular time intervals. In the 1660s Robert Hooke of the Royal Society of London sponsored networks of weather observers. Hippocrates' treatise Airs, Waters, and Places had linked weather to disease. Thus early meteorologists attempted to correlate weather patterns with epidemic outbreaks, and the climate with public health. During the Age of Enlightenment meteorology tried to rationalise traditional weather lore, including astrological meteorology. But there were also attempts to establish a theoretical understanding of weather phenomena. Edmond Halley and George Hadley tried to explain trade winds. They reasoned that the rising mass of heated equator air is replaced by an inflow of cooler air from high latitudes. A flow of warm air at high altitude from equator to poles in turn established an early picture of circulation. 
Frustration with the lack of discipline among weather observers, and the poor quality of the instruments, led the early modern nation states to organise large observation networks. Thus, by the end of the 18th century, meteorologists had access to large quantities of reliable weather data. In 1832, an electromagnetic telegraph was created by Baron Schilling. The arrival of the electrical telegraph in 1837 afforded, for the first time, a practical method for quickly gathering surface weather observations from a wide area. This data could be used to produce maps of the state of the atmosphere for a region near the Earth's surface and to study how these states evolved through time. To make frequent weather forecasts based on these data required a reliable network of observations, but it was not until 1849 that the Smithsonian Institution began to establish an observation network across the United States under the leadership of Joseph Henry. Similar observation networks were established in Europe at this time. The Reverend William Clement Ley was key in understanding of cirrus clouds and early understandings of Jet Streams. Charles Kenneth Mackinnon Douglas, known as 'CKM' Douglas read Ley's papers after his death and carried on the early study of weather systems. Nineteenth century researchers in meteorology were drawn from military or medical backgrounds, rather than trained as dedicated scientists. In 1854, the United Kingdom government appointed Robert FitzRoy to the new office of Meteorological Statist to the Board of Trade with the task of gathering weather observations at sea. FitzRoy's office became the United Kingdom Meteorological Office in 1854, the second oldest national meteorological service in the world (the Central Institution for Meteorology and Geodynamics (ZAMG) in Austria was founded in 1851 and is the oldest weather service in the world). The first daily weather forecasts made by FitzRoy's Office were published in The Times newspaper in 1860. The following year a system was introduced of hoisting storm warning cones at principal ports when a gale was expected. FitzRoy coined the term "weather forecast" and tried to separate scientific approaches from prophetic ones. Over the next 50 years, many countries established national meteorological services. The India Meteorological Department (1875) was established to follow tropical cyclone and monsoon. The Finnish Meteorological Central Office (1881) was formed from part of Magnetic Observatory of Helsinki University. Japan's Tokyo Meteorological Observatory, the forerunner of the Japan Meteorological Agency, began constructing surface weather maps in 1883. The United States Weather Bureau (1890) was established under the United States Department of Agriculture. The Australian Bureau of Meteorology (1906) was established by a Meteorology Act to unify existing state meteorological services. === Numerical weather prediction === In 1904, Norwegian scientist Vilhelm Bjerknes first argued in his paper Weather Forecasting as a Problem in Mechanics and Physics that it should be possible to forecast weather from calculations based upon natural laws. It was not until later in the 20th century that advances in the understanding of atmospheric physics led to the foundation of modern numerical weather prediction. In 1922, Lewis Fry Richardson published "Weather Prediction By Numerical Process," after finding notes and derivations he worked on as an ambulance driver in World War I. 
He described how small terms in the prognostic fluid dynamics equations that govern atmospheric flow could be neglected, and how a numerical calculation scheme could be devised to allow predictions. Richardson envisioned a large auditorium of thousands of people performing the calculations. However, the sheer number of calculations required was too large to complete without electronic computers, and the size of the grid and time steps used in the calculations led to unrealistic results, though numerical analysis later found that this was due to numerical instability. Starting in the 1950s, numerical forecasts with computers became feasible. The first weather forecasts derived this way used barotropic (single-vertical-level) models, and could successfully predict the large-scale movement of midlatitude Rossby waves, that is, the pattern of atmospheric lows and highs. In 1959, the UK Meteorological Office received its first computer, a Ferranti Mercury. In the 1960s, the chaotic nature of the atmosphere was first observed and mathematically described by Edward Lorenz, founding the field of chaos theory. These advances have led to the current use of ensemble forecasting in most major forecasting centers, to take into account uncertainty arising from the chaotic nature of the atmosphere. Mathematical models used to predict the long-term weather of the Earth (climate models) have been developed; their resolution today is as coarse as that of the older weather prediction models. These climate models are used to investigate long-term climate shifts, such as what effects might be caused by human emission of greenhouse gases. == Meteorologists == Meteorologists are scientists who study and work in the field of meteorology. The American Meteorological Society publishes and continually updates an authoritative electronic Meteorology Glossary. Meteorologists work in government agencies, private consulting and research services, industrial enterprises, utilities, radio and television stations, and in education. In the United States, meteorologists held about 10,000 jobs in 2018. Although weather forecasts and warnings are the best known products of meteorologists for the public, weather presenters on radio and television are not necessarily professional meteorologists. They are most often reporters with little formal meteorological training, using unregulated titles such as weather specialist or weatherman. The American Meteorological Society and National Weather Association issue "Seals of Approval" to weather broadcasters who meet certain requirements, but this is not mandatory to be hired by the media. == Equipment == Each science has its own unique sets of laboratory equipment. In the atmosphere, there are many things or qualities of the atmosphere that can be measured. Rain, which can be observed or seen anywhere and at any time, was one of the first atmospheric qualities measured historically. Also, two other accurately measured qualities are wind and humidity. Neither of these can be seen but can be felt. The devices to measure these three sprang up in the mid-15th century and were respectively the rain gauge, the anemometer, and the hygrometer. Many attempts had been made prior to the 15th century to construct adequate equipment to measure the many atmospheric variables. Many were faulty in some way or were simply not reliable. Even Aristotle noted in some of his work the difficulty of measuring the air. Sets of surface measurements are important data to meteorologists.
They give a snapshot of a variety of weather conditions at one single location and are usually at a weather station, a ship or a weather buoy. The measurements taken at a weather station can include any number of atmospheric observables. Usually, temperature, pressure, wind measurements, and humidity are the variables that are measured by a thermometer, barometer, anemometer, and hygrometer, respectively. Professional stations may also include air quality sensors (carbon monoxide, carbon dioxide, methane, ozone, dust, and smoke), ceilometer (cloud ceiling), falling precipitation sensor, flood sensor, lightning sensor, microphone (explosions, sonic booms, thunder), pyranometer/pyrheliometer/spectroradiometer (IR/Vis/UV photodiodes), rain gauge/snow gauge, scintillation counter (background radiation, fallout, radon), seismometer (earthquakes and tremors), transmissometer (visibility), and a GPS clock for data logging. Upper air data are of crucial importance for weather forecasting. The most widely used technique is launches of radiosondes. Supplementing the radiosondes a network of aircraft collection is organized by the World Meteorological Organization. Remote sensing, as used in meteorology, is the concept of collecting data from remote weather events and subsequently producing weather information. The common types of remote sensing are Radar, Lidar, and satellites (or photogrammetry). Each collects data about the atmosphere from a remote location and, usually, stores the data where the instrument is located. Radar and Lidar are not passive because both use EM radiation to illuminate a specific portion of the atmosphere. Weather satellites along with more general-purpose Earth-observing satellites circling the earth at various altitudes have become an indispensable tool for studying a wide range of phenomena from forest fires to El Niño. == Spatial scales == The study of the atmosphere can be divided into distinct areas that depend on both time and spatial scales. At one extreme of this scale is climatology. In the timescales of hours to days, meteorology separates into micro-, meso-, and synoptic scale meteorology. Respectively, the geospatial size of each of these three scales relates directly with the appropriate timescale. Other subclassifications are used to describe the unique, local, or broad effects within those subclasses. === Microscale === Microscale meteorology is the study of atmospheric phenomena on a scale of about 1 kilometre (0.62 mi) or less. Individual thunderstorms, clouds, and local turbulence caused by buildings and other obstacles (such as individual hills) are modeled on this scale. Misoscale meteorology is an informal subdivision. === Mesoscale === Mesoscale meteorology is the study of atmospheric phenomena that has horizontal scales ranging from 1 km to 1000 km and a vertical scale that starts at the Earth's surface and includes the atmospheric boundary layer, troposphere, tropopause, and the lower section of the stratosphere. The terms meso-alpha, meso-beta, and meso-gamma to classify the horizontal scales of atmospheric processes were introduced to the field of mesoscale meteorology by Isidoro Orlanski. Mesoscale timescales last from less than a day to multiple weeks. The events typically of interest are thunderstorms, squall lines, fronts, precipitation bands in tropical and extratropical cyclones, and topographically generated weather systems such as mountain waves and sea and land breezes. 
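The transition from the mesoscale to the synoptic scale described below can be made quantitative with the Rossby number, a dimensionless ratio of inertial to Coriolis forces commonly estimated as Ro = U/(fL). A minimal sketch (Python; the wind speed and length scales are assumed typical values, not figures from this article) compares a mesoscale and a synoptic-scale system; a small Rossby number indicates that the Coriolis force dominates the flow.

```python
# Minimal sketch (assumed typical values): comparing the Rossby number
# Ro = U / (f * L) for a mesoscale and a synoptic-scale weather system.
# Small Ro means rotation (Coriolis) dominates; large Ro means it is negligible.
import math

OMEGA = 7.2921e-5                      # Earth's rotation rate, rad/s
lat = math.radians(45.0)               # mid-latitude reference
f = 2 * OMEGA * math.sin(lat)          # Coriolis parameter, s^-1

systems = {
    "thunderstorm outflow (mesoscale)": (10.0, 10e3),    # U in m/s, L in m
    "extratropical cyclone (synoptic)": (10.0, 1000e3),
}

for name, (U, L) in systems.items():
    ro = U / (f * L)
    print(f"{name:35s} Ro = {ro:6.2f}")
```

With these assumed scales the mesoscale system has Ro of order 10, while the synoptic-scale system has Ro of order 0.1, which is why the Coriolis acceleration becomes dominant only at synoptic scales.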
=== Synoptic scale === Synoptic scale meteorology predicts atmospheric changes at scales of up to 1000 km in space and about 10⁵ seconds (roughly 28 hours) in time. At the synoptic scale, the Coriolis acceleration acting on moving air masses (outside of the tropics) plays a dominant role in predictions. The phenomena typically described by synoptic meteorology include events such as extratropical cyclones, baroclinic troughs and ridges, frontal zones, and to some extent jet streams. All of these are typically given on weather maps for a specific time. The minimum horizontal scale of synoptic phenomena is limited to the spacing between surface observation stations. === Global scale === Global scale meteorology is the study of weather patterns related to the transport of heat from the tropics to the poles. Very large scale oscillations are of importance at this scale. These oscillations have time periods typically on the order of months, such as the Madden–Julian oscillation, or years, such as the El Niño–Southern Oscillation and the Pacific decadal oscillation. Global scale meteorology pushes into the range of climatology. The traditional definition of climate is pushed into larger timescales, and with an understanding of the longer-time-scale global oscillations, their effect on climate and weather disturbances can be included in synoptic- and mesoscale-timescale predictions. Numerical Weather Prediction is a main focus in understanding air–sea interaction, tropical meteorology, atmospheric predictability, and tropospheric/stratospheric processes. The Naval Research Laboratory in Monterey, California, developed a global atmospheric model called Navy Operational Global Atmospheric Prediction System (NOGAPS). NOGAPS is run operationally at Fleet Numerical Meteorology and Oceanography Center for the United States Military. Many other global atmospheric models are run by national meteorological agencies. == Some meteorological principles == === Boundary layer meteorology === Boundary layer meteorology is the study of processes in the air layer directly above Earth's surface, known as the atmospheric boundary layer (ABL). The effects of the surface – heating, cooling, and friction – cause turbulent mixing within the air layer. Significant movement of heat, matter, or momentum on time scales of less than a day is caused by turbulent motions. Boundary layer meteorology includes the study of all types of surface–atmosphere boundary, including ocean, lake, urban land and non-urban land. === Dynamic meteorology === Dynamic meteorology generally focuses on the fluid dynamics of the atmosphere. The idea of an air parcel is used to define the smallest element of the atmosphere, while ignoring the discrete molecular and chemical nature of the atmosphere. An air parcel is defined as an infinitesimal region in the fluid continuum of the atmosphere. The fundamental laws of fluid dynamics, thermodynamics, and motion are used to study the atmosphere. The physical quantities that characterize the state of the atmosphere are temperature, density, pressure, etc. These variables have unique values in the continuum. == Applications == === Weather forecasting === Weather forecasting is the application of science and technology to predict the state of the atmosphere at a future time and a given location. Humans have attempted to predict the weather informally for millennia and formally since at least the 19th century.
Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve. Once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition, forecasting now relies on computer-based models to determine future conditions. Human input is still required to pick the best possible forecast model to base the forecast upon, which involves pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps narrow the error and pick the most likely outcome. There are a variety of end uses to weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to commodity traders within stock markets. Temperature forecasts are used by utility companies to estimate demand over coming days. On an everyday basis, people use weather forecasts to determine what to wear. Since outdoor activities are severely curtailed by heavy rain, snow, and wind chill, forecasts can be used to plan activities around these events, and to plan ahead and survive them. === Aviation meteorology === Aviation meteorology deals with the impact of weather on air traffic management. It is important for air crews to understand the implications of weather on their flight plan as well as their aircraft, as noted by the Aeronautical Information Manual: The effects of ice on aircraft are cumulative—thrust is reduced, drag increases, lift lessens, and weight increases. The results are an increase in stall speed and a deterioration of aircraft performance. In extreme cases, 2 to 3 inches of ice can form on the leading edge of the airfoil in less than 5 minutes. It takes but 1/2 inch of ice to reduce the lifting power of some aircraft by 50 percent and increases the frictional drag by an equal percentage. === Agricultural meteorology === Meteorologists, soil scientists, agricultural hydrologists, and agronomists are people concerned with studying the effects of weather and climate on plant distribution, crop yield, water-use efficiency, phenology of plant and animal development, and the energy balance of managed and natural ecosystems. Conversely, they are interested in the role of vegetation in climate and weather. === Hydrometeorology === Hydrometeorology is the branch of meteorology that deals with the hydrologic cycle, the water budget, and the rainfall statistics of storms. A hydrometeorologist prepares and issues forecasts of accumulating (quantitative) precipitation, heavy rain, heavy snow, and highlights areas with the potential for flash flooding. Typically the range of knowledge that is required overlaps with climatology, mesoscale and synoptic meteorology, and other geosciences.
The multidisciplinary nature of the branch can result in technical challenges, since tools and solutions from each of the individual disciplines involved may behave slightly differently, be optimized for different hard- and software platforms and use different data formats. There are some initiatives – such as the DRIHM project – that are trying to address this issue. === Nuclear meteorology === Nuclear meteorology investigates the distribution of radioactive aerosols and gases in the atmosphere. === Maritime meteorology === Maritime meteorology deals with air and wave forecasts for ships operating at sea. Organizations such as the Ocean Prediction Center, Honolulu National Weather Service forecast office, United Kingdom Met Office, KNMI and JMA prepare high seas forecasts for the world's oceans. === Military meteorology === Military meteorology is the research and application of meteorology for military purposes. In the United States, the United States Navy's Commander, Naval Meteorology and Oceanography Command oversees meteorological efforts for the Navy and Marine Corps while the United States Air Force's Air Force Weather Agency is responsible for the Air Force and Army. === Environmental meteorology === Environmental meteorology mainly analyzes industrial pollution dispersion physically and chemically based on meteorological parameters such as temperature, humidity, wind, and various weather conditions. === Renewable energy === Meteorology applications in renewable energy include basic research, "exploration," and potential mapping of wind power and solar radiation for wind and solar energy. == See also == == References == == Further reading == Byers, Horace. General Meteorology. New York: McGraw-Hill, 1994. Garratt, J.R. (1992) [1992]. The atmospheric boundary layer. Cambridge University Press. ISBN 978-0-521-38052-2. Glossary of Meteorology. American Meteorological Society (2nd ed.). Allen Press. 2000. Archived from the original on 21 May 2006. Retrieved 13 October 2004. Bluestein, H (1992) [1992]. Synoptic-Dynamic Meteorology in Midlatitudes: Principles of Kinematics and Dynamics, Vol. 1. Oxford University Press. ISBN 978-0-19-506267-0. Bluestein, H (1993) [1993]. Synoptic-Dynamic Meteorology in Midlatitudes: Volume II: Observations and Theory of Weather Systems. Oxford University Press. ISBN 978-0-19-506268-7. Reynolds, R (2005) [2005]. Guide to Weather. Buffalo, New York: Firefly Books Inc. p. 208. ISBN 978-1-55407-110-4. Holton, J.R. (2004) [2004]. An Introduction to Dynamic Meteorology (4th ed.). Burlington, Md: Elsevier Inc. ISBN 978-0-12-354015-7. Archived from the original on 19 July 2013. Retrieved 21 May 2017. Roulstone, Ian & Norbury, John (2013). Invisible in the Storm: the role of mathematics in understanding weather. Princeton University Press. ISBN 978-0691152721. === Dictionaries and encyclopedias === Glickman, Todd S. (June 2000). Meteorology Glossary (electronic) (2nd ed.). Cambridge, Massachusetts: American Meteorological Society. Archived from the original on 10 March 2014. Retrieved 10 March 2014. Gustavo Herrera, Roberto; García-Herrera, Ricardo; Prieto, Luis; Gallego, David; Hernández, Emiliano; Gimeno, Luis; Können, Gunther; Koek, Frits; Wheeler, Dennis; Wilkinson, Clive; Del Rosario Prieto, Maria; Báez, Carlos; Woodruff, Scott.
A Dictionary of Nautical Meteorological Terms: CLIWOC Multilingual Dictionary of Meteorological Terms; An English/Spanish/French/Dutch Dictionary of Windforce Terms Used by Mariners from 1750 to 1850 (PDF). CLIWOC. Archived from the original (PDF) on 21 April 2021. Retrieved 13 April 2014. "Meteorology Encyclopedia". Central Weather Bureau. 6 December 2018. Archived from the original on 21 September 2014. Retrieved 14 September 2014. === History === Lawrence-Mathers, Anne (2020). Medieval Meteorology: Forecasting the Weather from Aristotle to the Almanac. Cambridge: Cambridge University Press. Bibcode:2020mmfw.book.....L. doi:10.1017/9781108289948. ISBN 978-1-108-40600-0. S2CID 211658964. == External links == Please see weather forecasting for weather forecast sites. Air Quality Meteorology Archived 25 July 2009 at the Wayback Machine – Online course that introduces the basic concepts of meteorology and air quality necessary to understand meteorological computer models. Written at a bachelor's degree level. The GLOBE Program Archived 11 May 2023 at the Wayback Machine – (Global Learning and Observations to Benefit the Environment) An international environmental science and education program that links students, teachers, and the scientific research community in an effort to learn more about the environment through student data collection and observation. Glossary of Meteorology Archived 13 May 2023 at the Wayback Machine – From the American Meteorological Society, an excellent reference of nomenclature, equations, and concepts for the more advanced reader. JetStream – An Online School for Weather Archived 11 May 2023 at the Wayback Machine – National Weather Service Learn About Meteorology Archived 11 July 2006 at the Wayback Machine – Australian Bureau of Meteorology The Weather Guide Archived 24 February 2017 at the Wayback Machine – Weather Tutorials and News at About.com Meteorology Education and Training (MetEd) Archived 11 May 2023 at the Wayback Machine – The COMET Program NOAA Central Library – National Oceanic & Atmospheric Administration The World Weather 2010 Project Archived 19 August 2008 at the Wayback Machine The University of Illinois at Urbana–Champaign Ogimet – online data from meteorological stations of the world, obtained through NOAA free services Archived 24 July 2009 at the Wayback Machine National Center for Atmospheric Research Archives, documents the history of meteorology Weather forecasting and Climate science Archived 14 August 2011 at the Wayback Machine – United Kingdom Meteorological Office Meteorology Archived 29 January 2018 at the Wayback Machine, BBC Radio 4 discussion with Vladimir Janković, Richard Hamblyn and Liba Taub (In Our Time, 6 March 2003) Virtual exhibition about meteorology Archived 24 November 2020 at the Wayback Machine on the digital library of Paris Observatory
Wikipedia/Atmospheric_dynamics
In glaciology, an ice sheet, also known as a continental glacier, is a mass of glacial ice that covers surrounding terrain and is greater than 50,000 km2 (19,000 sq mi). The only current ice sheets are the Antarctic ice sheet and the Greenland ice sheet. Ice sheets are bigger than ice shelves or alpine glaciers. Masses of ice covering less than 50,000 km2 are termed an ice cap. An ice cap will typically feed a series of glaciers around its periphery. Although the surface is cold, the base of an ice sheet is generally warmer due to geothermal heat. In places, melting occurs and the melt-water lubricates the ice sheet so that it flows more rapidly. This process produces fast-flowing channels in the ice sheet — these are ice streams. Even stable ice sheets are continually in motion as the ice gradually flows outward from the central plateau, which is the tallest point of the ice sheet, and towards the margins. The ice sheet slope is low around the plateau but increases steeply at the margins. Increasing global air temperatures due to climate change take around 10,000 years to directly propagate through the ice before they influence bed temperatures, but may have an effect through increased surface melting, producing more supraglacial lakes. These lakes may feed warm water to glacial bases and facilitate glacial motion. In previous geologic time spans (glacial periods) there were other ice sheets. During the Last Glacial Period at Last Glacial Maximum, the Laurentide Ice Sheet covered much of North America. In the same period, the Weichselian ice sheet covered Northern Europe and the Patagonian Ice Sheet covered southern South America. == Overview == An ice sheet is a body of ice which covers a land area of continental size - meaning that it exceeds 50,000 km2. The currently existing two ice sheets in Greenland and Antarctica have a much greater area than this minimum definition, measuring at 1.7 million km2 and 14 million km2, respectively. Both ice sheets are also very thick, as they consist of a continuous ice layer with an average thickness of 2 km (1 mi). This ice layer forms because most of the snow which falls onto the ice sheet never melts, and is instead compressed by the mass of newer snow layers. This process of ice sheet growth is still occurring nowadays, as can be clearly seen in an example that occurred in World War II. A Lockheed P-38 Lightning fighter plane crashed in Greenland in 1942. It was only recovered 50 years later. By then, it had been buried under 81 m (268 feet) of ice which had formed over that time period. == Dynamics == === Glacial flows === Even stable ice sheets are continually in motion as the ice gradually flows outward from the central plateau, which is the tallest point of the ice sheet, and towards the margins. The ice sheet slope is low around the plateau but increases steeply at the margins. This difference in slope occurs due to an imbalance between high ice accumulation in the central plateau and lower accumulation, as well as higher ablation, at the margins. This imbalance increases the shear stress on a glacier until it begins to flow. The flow velocity and deformation will increase as the equilibrium line between these two processes is approached. This motion is driven by gravity but is controlled by temperature and the strength of individual glacier bases. A number of processes alter these two factors, resulting in cyclic surges of activity interspersed with longer periods of inactivity, on time scales ranging from hourly (i.e. 
tidal flows) to the centennial (Milankovich cycles). On an unrelated hour-to-hour basis, surges of ice motion can be modulated by tidal activity. The influence of a 1 m tidal oscillation can be felt as much as 100 km from the sea. During larger spring tides, an ice stream will remain almost stationary for hours at a time, before a surge of around a foot in under an hour, just after the peak high tide; a stationary period then takes hold until another surge towards the middle or end of the falling tide. At neap tides, this interaction is less pronounced, and surges instead occur approximately every 12 hours. Increasing global air temperatures due to climate change take around 10,000 years to directly propagate through the ice before they influence bed temperatures, but may have an effect through increased surface melting, producing more supraglacial lakes. These lakes may feed warm water to glacial bases and facilitate glacial motion. Lakes of a diameter greater than ~300 m are capable of creating a fluid-filled crevasse to the glacier/bed interface. When these crevasses form, the entirety of the lake's (relatively warm) contents can reach the base of the glacier in as little as 2–18 hours – lubricating the bed and causing the glacier to surge. Water that reaches the bed of a glacier may freeze there, increasing the thickness of the glacier by pushing it up from below. === Boundary conditions === As the margins end at the marine boundary, excess ice is discharged through ice streams or outlet glaciers. Then, it either falls directly into the sea or is accumulated atop the floating ice shelves.: 2234  Those ice shelves then calve icebergs at their periphery if they experience excess of ice. Ice shelves would also experience accelerated calving due to basal melting. In Antarctica, this is driven by heat fed to the shelf by the circumpolar deep water current, which is 3 °C above the ice's melting point. The presence of ice shelves has a stabilizing influence on the glacier behind them, while an absence of an ice shelf becomes destabilizing. For instance, when Larsen B ice shelf in the Antarctic Peninsula had collapsed over three weeks in February 2002, the four glaciers behind it - Crane Glacier, Green Glacier, Hektoria Glacier and Jorum Glacier - all started to flow at a much faster rate, while the two glaciers (Flask and Leppard) stabilized by the remnants of the ice shelf did not accelerate. The collapse of the Larsen B shelf was preceded by thinning of just 1 metre per year, while some other Antarctic ice shelves have displayed thinning of tens of metres per year. Further, increased ocean temperatures of 1 °C may lead to up to 10 metres per year of basal melting. Ice shelves are always stable under mean annual temperatures of −9 °C, but never stable above −5 °C; this places regional warming of 1.5 °C, as preceded the collapse of Larsen B, in context. === Marine ice sheet instability === In the 1970s, Johannes Weertman proposed that because seawater is denser than ice, then any ice sheets grounded below sea level inherently become less stable as they melt due to Archimedes' principle. Effectively, these marine ice sheets must have enough mass to exceed the mass of the seawater displaced by the ice, which requires excess thickness. As the ice sheet melts and becomes thinner, the weight of the overlying ice decreases. At a certain point, sea water could force itself into the gaps which form at the base of the ice sheet, and marine ice sheet instability (MISI) would occur. 
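Weertman's buoyancy argument can be reduced to a simple flotation criterion: grounded ice stays grounded only while its weight exceeds that of the seawater it displaces, which on a bed a depth d below sea level requires an ice thickness greater than (ρ_seawater/ρ_ice)·d. The sketch below only illustrates this bookkeeping; the density values and bed depths are assumed round numbers, not figures taken from the studies discussed here.

```python
# Flotation criterion behind marine ice sheet instability (illustrative values only).
RHO_ICE = 917.0        # kg/m^3, assumed density of glacial ice
RHO_SEAWATER = 1028.0  # kg/m^3, assumed density of seawater

def flotation_thickness(bed_depth_below_sea_level_m):
    """Minimum ice thickness needed to stay grounded on a bed this far below sea level."""
    return (RHO_SEAWATER / RHO_ICE) * bed_depth_below_sea_level_m

for bed_depth in (500.0, 1000.0, 1500.0):   # metres below sea level (assumed)
    h_f = flotation_thickness(bed_depth)
    print(f"bed {bed_depth:6.0f} m below sea level -> ice must be thicker than {h_f:6.0f} m to stay grounded")
```

On a bed that deepens inland, a retreating grounding line therefore keeps raising the thickness needed to stay grounded, which is the self-reinforcing element of the instability described above.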
Even if the ice sheet is grounded below the sea level, MISI cannot occur as long as there is a stable ice shelf in front of it. The boundary between the ice sheet and the ice shelf, known as the grounding line, is particularly stable if it is constrained in an embayment. In that case, the ice sheet may not be thinning at all, as the amount of ice flowing over the grounding line would be likely to match the annual accumulation of ice from snow upstream. Otherwise, ocean warming at the base of an ice shelf tends to thin it through basal melting. As the ice shelf becomes thinner, it exerts less of a buttressing effect on the ice sheet, the so-called back stress increases and the grounding line is pushed backwards. The ice sheet is likely to start losing more ice from the new location of the grounding line and so become lighter and less capable of displacing seawater. This eventually pushes the grounding line back even further, creating a self-reinforcing mechanism. ==== Vulnerable locations ==== Because the entire West Antarctic Ice Sheet is grounded below the sea level, it would be vulnerable to geologically rapid ice loss in this scenario. In particular, the Thwaites and Pine Island glaciers are most likely to be prone to MISI, and both glaciers have been rapidly thinning and accelerating in recent decades. As a result, sea level rise from the ice sheet could be accelerated by tens of centimeters within the 21st century alone. The majority of the East Antarctic Ice Sheet would not be affected. Totten Glacier is the largest glacier there which is known to be subject to MISI - yet, its potential contribution to sea level rise is comparable to that of the entire West Antarctic Ice Sheet. Totten Glacier has been losing mass nearly monotonically in recent decades, suggesting rapid retreat is possible in the near future, although the dynamic behavior of Totten Ice Shelf is known to vary on seasonal to interannual timescales. The Wilkes Basin is the only major submarine basin in Antarctica that is not thought to be sensitive to warming. Ultimately, even geologically rapid sea level rise would still most likely require several millennia for the entirety of these ice masses (WAIS and the subglacial basins) to be lost. ==== Marine ice cliff instability ==== A related process known as Marine Ice Cliff Instability (MICI) posits that ice cliffs which exceed ~90 m (295+1⁄2 ft) in above-ground height and are ~800 m (2,624+1⁄2 ft) in basal (underground) height are likely to collapse under their own weight once the peripheral ice stabilizing them is gone. Their collapse then exposes the ice masses following them to the same instability, potentially resulting in a self-sustaining cycle of cliff collapse and rapid ice sheet retreat - i.e. sea level rise of a meter or more by 2100 from Antarctica alone. This theory had been highly influential - in a 2020 survey of 106 experts, the paper which had advanced this theory was considered more important than even the year 2014 IPCC Fifth Assessment Report. Sea level rise projections which involve MICI are much larger than the others, particularly under high warming rate. At the same time, this theory has also been highly controversial. It was originally proposed in order to describe how the large sea level rise during the Pliocene and the Last Interglacial could have occurred - yet more recent research found that these sea level rise episodes can be explained without any ice cliff instability taking place. 
Research in Pine Island Bay in West Antarctica (the location of Thwaites and Pine Island Glacier) had found seabed gouging by ice from the Younger Dryas period which appears consistent with MICI. However, it indicates "relatively rapid" yet still prolonged ice sheet retreat, with a movement of >200 km (120 mi) inland taking place over an estimated 1100 years (from ~12,300 years Before Present to ~11,200 B.P.) In recent years, 2002-2004 fast retreat of Crane Glacier immediately after the collapse of the Larsen B ice shelf (before it reached a shallow fjord and stabilized) could have involved MICI, but there weren't enough observations to confirm or refute this theory. The retreat of Greenland ice sheet's three largest glaciers - Jakobshavn, Helheim, and Kangerdlugssuaq Glacier - did not resemble predictions from ice cliff collapse at least up until the end of 2013, but an event observed at Helheim Glacier in August 2014 may fit the definition. Further, modelling done after the initial hypothesis indicates that ice-cliff instability would require implausibly fast ice shelf collapse (i.e. within an hour for ~90 m (295+1⁄2 ft)-tall cliffs), unless the ice had already been substantially damaged beforehand. Further, ice cliff breakdown would produce a large number of debris in the coastal waters - known as ice mélange - and multiple studies indicate their build-up would slow or even outright stop the instability soon after it started. Some scientists - including the originators of the hypothesis, Robert DeConto and David Pollard - have suggested that the best way to resolve the question would be to precisely determine sea level rise during the Last Interglacial. MICI can be effectively ruled out if SLR at the time was lower than 4 m (13 ft), while it is very likely if the SLR was greater than 6 m (19+1⁄2 ft). As of 2023, the most recent analysis indicates that the Last Interglacial SLR is unlikely to have been higher than 2.7 m (9 ft), as higher values in other research, such as 5.7 m (18+1⁄2 ft), appear inconsistent with the new paleoclimate data from The Bahamas and the known history of the Greenland Ice Sheet. == Earth's current two ice sheets == === Antarctic ice sheet === ==== West Antarctic ice sheet ==== ==== East Antarctic ice sheet ==== === Greenland ice sheet === == Role in carbon cycle == Historically, ice sheets were viewed as inert components of the carbon cycle and were largely disregarded in global models. In 2010s, research had demonstrated the existence of uniquely adapted microbial communities, high rates of biogeochemical and physical weathering in ice sheets, and storage and cycling of organic carbon in excess of 100 billion tonnes. There is a massive contrast in carbon storage between the two ice sheets. While only about 0.5-27 billion tonnes of pure carbon are present underneath the Greenland ice sheet, 6000-21,000 billion tonnes of pure carbon are thought to be located underneath Antarctica. This carbon can act as a climate change feedback if it is gradually released through meltwater, thus increasing overall carbon dioxide emissions. For comparison, 1400–1650 billion tonnes are contained within the Arctic permafrost. Also for comparison, the annual human caused carbon dioxide emissions amount to around 40 billion tonnes of CO2.: 1237  In Greenland, there is one known area, at Russell Glacier, where meltwater carbon is released into the atmosphere as methane, which has a much larger global warming potential than carbon dioxide. 
However, it also harbours large numbers of methanotrophic bacteria, which limit those emissions. == In geologic timescales == Normally, the transitions between glacial and interglacial states are governed by Milankovitch cycles, which are patterns in insolation (the amount of sunlight reaching the Earth). These patterns are caused by the variations in shape of the Earth's orbit and its angle relative to the Sun, caused by the gravitational pull of other planets as they go through their own orbits. For instance, during at least the last 100,000 years, portions of the Laurentide Ice Sheet, which covered much of North America, broke apart, sending large flotillas of icebergs into the North Atlantic. When these icebergs melted they dropped the boulders and other continental rocks they carried, leaving layers known as ice rafted debris. These so-called Heinrich events, named after their discoverer Hartmut Heinrich, appear to have a 7,000–10,000-year periodicity, and occur during cold periods within the last glacial period. Internal ice sheet "binge-purge" cycles may be responsible for the observed effects, where the ice builds to unstable levels, then a portion of the ice sheet collapses. External factors might also play a role in forcing ice sheets. Dansgaard–Oeschger events are abrupt warmings of the northern hemisphere occurring over the space of perhaps 40 years. While these D–O events occur directly after each Heinrich event, they also occur more frequently – around every 1500 years; from this evidence, paleoclimatologists surmise that the same forcings may drive both Heinrich and D–O events. Hemispheric asynchrony in ice sheet behavior has been observed by linking short-term spikes of methane in Greenland ice cores and Antarctic ice cores. During Dansgaard–Oeschger events, the northern hemisphere warmed considerably, dramatically increasing the release of methane from wetlands that were otherwise tundra during glacial times. This methane quickly distributes evenly across the globe, becoming incorporated in Antarctic and Greenland ice. With this tie, paleoclimatologists have been able to say that the ice sheets on Greenland only began to warm after the Antarctic ice sheet had been warming for several thousand years. Why this pattern occurs is still open for debate. === Antarctic ice sheet during geologic timescales === === Greenland ice sheet during geologic timescales === == See also == Cryosphere – Earth's surface where water is frozen Ice planet – Planet with an icy surface Quaternary glaciation – Series of alternating glacial and interglacial periods Snowball Earth – Worldwide glaciation episodes during the Proterozoic eon Wisconsin glaciation – Glaciation in North America during the Last Glacial Period Ice-sheet model – Simulation of ice sheet change == References == == External links == United Nations Environment Programme: Global Outlook for Ice and Snow Marine Ice Sheet Instability "For Dummies"
Wikipedia/Ice-sheet_dynamics
The shallow-water equations (SWE) are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called (de) Saint-Venant equations, after Adhémar Jean Claude Barré de Saint-Venant (see the related section below). The equations are derived from depth-integrating the Navier–Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived. While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction because, for example, the vertical velocity cannot be zero when the floor changes depth, and thus if it were zero only flat floors would be usable with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free surface displacement) has been found, the vertical velocity can be recovered via the continuity equation. Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow. Shallow-water equation models have only one vertical level, so they cannot directly encompass any factor that varies with height. However, in cases where the mean state is sufficiently simple, the vertical variations can be separated from the horizontal and several sets of shallow-water equations can describe the state. == Equations == === Conservative form === The shallow-water equations are derived from equations of conservation of mass and conservation of linear momentum (the Navier–Stokes equations), which hold even when the assumptions of shallow-water break down, such as across a hydraulic jump. In the case of a horizontal bed, with negligible Coriolis forces, frictional and viscous forces, the shallow-water equations are: ∂ ( ρ η ) ∂ t + ∂ ( ρ η u ) ∂ x + ∂ ( ρ η v ) ∂ y = 0 , ∂ ( ρ η u ) ∂ t + ∂ ∂ x ( ρ η u 2 + 1 2 ρ g η 2 ) + ∂ ( ρ η u v ) ∂ y = 0 , ∂ ( ρ η v ) ∂ t + ∂ ∂ y ( ρ η v 2 + 1 2 ρ g η 2 ) + ∂ ( ρ η u v ) ∂ x = 0. 
{\displaystyle {\begin{aligned}{\frac {\partial (\rho \eta )}{\partial t}}&+{\frac {\partial (\rho \eta u)}{\partial x}}+{\frac {\partial (\rho \eta v)}{\partial y}}=0,\\[3pt]{\frac {\partial (\rho \eta u)}{\partial t}}&+{\frac {\partial }{\partial x}}\left(\rho \eta u^{2}+{\frac {1}{2}}\rho g\eta ^{2}\right)+{\frac {\partial (\rho \eta uv)}{\partial y}}=0,\\[3pt]{\frac {\partial (\rho \eta v)}{\partial t}}&+{\frac {\partial }{\partial y}}\left(\rho \eta v^{2}+{\frac {1}{2}}\rho g\eta ^{2}\right)+{\frac {\partial (\rho \eta uv)}{\partial x}}=0.\end{aligned}}} Here η is the total fluid column height (instantaneous fluid depth as a function of x, y and t), and the 2D vector (u,v) is the fluid's horizontal flow velocity, averaged across the vertical column. Further g is acceleration due to gravity and ρ is the fluid density. The first equation is derived from mass conservation, the second two from momentum conservation. === Non-conservative form === Expanding the derivatives in the above using the product rule, the non-conservative form of the shallow-water equations is obtained. Since velocities are not subject to a fundamental conservation equation, the non-conservative forms do not hold across a shock or hydraulic jump. Also included are the appropriate terms for Coriolis, frictional and viscous forces, to obtain (for constant fluid density): ∂ h ∂ t + ∂ ∂ x ( ( H + h ) u ) + ∂ ∂ y ( ( H + h ) v ) = 0 , ∂ u ∂ t + u ∂ u ∂ x + v ∂ u ∂ y − f v = − g ∂ h ∂ x − k u + ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 ) , ∂ v ∂ t + u ∂ v ∂ x + v ∂ v ∂ y + f u = − g ∂ h ∂ y − k v + ν ( ∂ 2 v ∂ x 2 + ∂ 2 v ∂ y 2 ) , {\displaystyle {\begin{aligned}{\frac {\partial h}{\partial t}}&+{\frac {\partial }{\partial x}}{\Bigl (}(H+h)u{\Bigr )}+{\frac {\partial }{\partial y}}{\Bigl (}(H+h)v{\Bigr )}=0,\\[3pt]{\frac {\partial u}{\partial t}}&+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}-fv=-g{\frac {\partial h}{\partial x}}-ku+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}\right),\\[3pt]{\frac {\partial v}{\partial t}}&+u{\frac {\partial v}{\partial x}}+v{\frac {\partial v}{\partial y}}+fu=-g{\frac {\partial h}{\partial y}}-kv+\nu \left({\frac {\partial ^{2}v}{\partial x^{2}}}+{\frac {\partial ^{2}v}{\partial y^{2}}}\right),\end{aligned}}} where It is often the case that the terms quadratic in u and v, which represent the effect of bulk advection, are small compared to the other terms. This is called geostrophic balance, and is equivalent to saying that the Rossby number is small. Assuming also that the wave height is very small compared to the mean height (h ≪ H), we have (without lateral viscous forces): ∂ h ∂ t + H ( ∂ u ∂ x + ∂ v ∂ y ) = 0 , ∂ u ∂ t − f v = − g ∂ h ∂ x − k u , ∂ v ∂ t + f u = − g ∂ h ∂ y − k v . {\displaystyle {\begin{aligned}{\frac {\partial h}{\partial t}}&+H\left({\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}\right)=0,\\[3pt]{\frac {\partial u}{\partial t}}&-fv=-g{\frac {\partial h}{\partial x}}-ku,\\[3pt]{\frac {\partial v}{\partial t}}&+fu=-g{\frac {\partial h}{\partial y}}-kv.\end{aligned}}} == One-dimensional Saint-Venant equations == The one-dimensional (1-D) Saint-Venant equations were derived by Adhémar Jean Claude Barré de Saint-Venant, and are commonly used to model transient open-channel flow and surface runoff. They can be viewed as a contraction of the two-dimensional (2-D) shallow-water equations, which are also known as the two-dimensional Saint-Venant equations. 
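As a concrete illustration of how the conservative form above can be integrated numerically, the following sketch solves a one-dimensional, flat-bed version of it (constant density, no Coriolis, friction or viscous terms) with a first-order Lax-Friedrichs finite-volume scheme for an idealised dam break. It is a minimal teaching sketch, not a description of any of the software packages mentioned in this article; the grid, CFL limit and initial depths are arbitrary assumptions.

```python
import numpy as np

g = 9.81                             # gravitational acceleration, m/s^2
nx, dx = 200, 5.0                    # assumed grid: 200 cells of 5 m
x = np.arange(nx) * dx
h = np.where(x < 500.0, 2.0, 1.0)    # dam-break initial depth (m), assumed
hu = np.zeros(nx)                    # initial depth-integrated momentum h*u (m^2/s)

def flux(h, hu):
    """Flux vector (h*u, h*u^2 + g*h^2/2) of the 1-D conservative shallow-water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

t, t_end = 0.0, 60.0
while t < t_end:
    c = np.max(np.abs(hu / h) + np.sqrt(g * h))   # fastest characteristic speed |u| + sqrt(g h)
    dt = 0.4 * dx / c                             # CFL-limited time step
    U = np.array([h, hu])
    F = flux(h, hu)
    # Lax-Friedrichs flux at each interface between cells i and i+1
    F_half = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * (dx / dt) * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= (dt / dx) * (F_half[:, 1:] - F_half[:, :-1])   # update interior cells
    h, hu = U[0], U[1]
    t += dt

print(f"after t = {t:.1f} s: depth ranges from {h.min():.2f} m to {h.max():.2f} m")
```

Swapping the Lax-Friedrichs flux for an approximate Riemann solver and adding bed-slope and friction source terms are the usual next steps towards the kind of scheme used in practical channel and flood models.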
The 1-D Saint-Venant equations contain to a certain extent the main characteristics of the channel cross-sectional shape. The 1-D equations are used extensively in computer models such as TUFLOW, Mascaret (EDF), SIC (Irstea), HEC-RAS, SWMM5, InfoWorks, Flood Modeller, SOBEK 1DFlow, MIKE 11, and MIKE SHE because they are significantly easier to solve than the full shallow-water equations. Common applications of the 1-D Saint-Venant equations include flood routing along rivers (including evaluation of measures to reduce the risks of flooding), dam break analysis, storm pulses in an open channel, as well as storm runoff in overland flow. === Equations === The system of partial differential equations which describe the 1-D incompressible flow in an open channel of arbitrary cross section – as derived and posed by Saint-Venant in his 1871 paper (equations 19 & 20) – is: {\displaystyle {\frac {\partial A}{\partial t}}+{\frac {\partial (Au)}{\partial x}}=0} (1) and {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial \zeta }{\partial x}}+{\frac {\tau P}{\rho A}}=0} (2) where x is the space coordinate along the channel axis, t denotes time, A(x,t) is the cross-sectional area of the flow at location x, u(x,t) is the flow velocity, ζ(x,t) is the free surface elevation and τ(x,t) is the wall shear stress along the wetted perimeter P(x,t) of the cross section at x. Further ρ is the (constant) fluid density and g is the gravitational acceleration. Closure of the hyperbolic system of equations (1)–(2) is obtained from the geometry of cross sections – by providing a functional relationship between the cross-sectional area A and the surface elevation ζ at each position x. For example, for a rectangular cross section, with constant channel width B and channel bed elevation zb, the cross sectional area is: A = B (ζ − zb) = B h. The instantaneous water depth is h(x,t) = ζ(x,t) − zb(x), with zb(x) the bed level (i.e. elevation of the lowest point in the bed above datum, see the cross-section figure). For non-moving channel walls the cross-sectional area A in equation (1) can be written as: A ( x , t ) = ∫ 0 h ( x , t ) b ( x , h ′ ) d h ′ , {\displaystyle A(x,t)=\int _{0}^{h(x,t)}b(x,h')\,dh',} with b(x,h) the effective width of the channel cross section at location x when the fluid depth is h – so b(x, h) = B(x) for rectangular channels. The wall shear stress τ is dependent on the flow velocity u; they can be related by using e.g. the Darcy–Weisbach equation, Manning formula or Chézy formula. Further, equation (1) is the continuity equation, expressing conservation of water volume for this incompressible homogeneous fluid. Equation (2) is the momentum equation, giving the balance between forces and momentum change rates. The bed slope S(x), friction slope Sf(x, t) and hydraulic radius R(x, t) are defined as: S = − d z b d x , {\displaystyle S=-{\frac {\mathrm {d} z_{\mathrm {b} }}{\mathrm {d} x}},} S f = τ ρ g R {\displaystyle S_{\mathrm {f} }={\frac {\tau }{\rho gR}}} and R = A P . {\displaystyle R={\frac {A}{P}}.} Consequently, the momentum equation (2) can be written as: {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial h}{\partial x}}=g\left(S-S_{\mathrm {f} }\right)} (3) === Conservation of momentum === The momentum equation (3) can also be cast in the so-called conservation form, through some algebraic manipulations on the Saint-Venant equations, (1) and (3). In terms of the discharge Q = Au: {\displaystyle {\frac {\partial Q}{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {Q^{2}}{A}}+gI_{1}\right)=gI_{2}+gA\left(S-S_{\mathrm {f} }\right)} (4) where A, I1 and I2 are functions of the channel geometry, described in terms of the channel width B(σ,x). Here σ is the height above the lowest point in the cross section at location x, see the cross-section figure.
So σ is the height above the bed level zb(x) (of the lowest point in the cross section): A ( σ , x ) = ∫ 0 σ B ( σ ′ , x ) d σ ′ , I 1 ( σ , x ) = ∫ 0 σ ( σ − σ ′ ) B ( σ ′ , x ) d σ ′ and I 2 ( σ , x ) = ∫ 0 σ ( σ − σ ′ ) ∂ B ( σ ′ , x ) ∂ x d σ ′ . {\displaystyle {\begin{aligned}A(\sigma ,x)&=\int _{0}^{\sigma }B(\sigma ',x)\;\mathrm {d} \sigma ',\\I_{1}(\sigma ,x)&=\int _{0}^{\sigma }(\sigma -\sigma ')\,B(\sigma ^{\prime },x)\;\mathrm {d} \sigma '\qquad {\text{and}}\\I_{2}(\sigma ,x)&=\int _{0}^{\sigma }(\sigma -\sigma ')\,{\frac {\partial B(\sigma ',x)}{\partial x}}\;\mathrm {d} \sigma '.\end{aligned}}} Above – in the momentum equation (4) in conservation form – A, I1 and I2 are evaluated at σ = h(x,t). The term g I1 describes the hydrostatic force in a certain cross section. And, for a non-prismatic channel, g I2 gives the effects of geometry variations along the channel axis x. In applications, depending on the problem at hand, there often is a preference for using either the momentum equation in non-conservation form, (2) or (3), or the conservation form (4). For instance in case of the description of hydraulic jumps, the conservation form is preferred since the momentum flux is continuous across the jump. === Characteristics === The Saint-Venant equations (1)–(2) can be analysed using the method of characteristics. The two celerities dx/dt on the characteristic curves are: d x d t = u ± c , {\displaystyle {\frac {\mathrm {d} x}{\mathrm {d} t}}=u\pm c,} with c = g A B . {\displaystyle c={\sqrt {\frac {gA}{B}}}.} The Froude number Fr = |u| / c determines whether the flow is subcritical (Fr < 1) or supercritical (Fr > 1). For a rectangular and prismatic channel of constant width B, i.e. with A = B h and c = √gh, the Riemann invariants are: r + = u + 2 g h {\displaystyle r_{+}=u+2{\sqrt {gh}}} and r − = u − 2 g h , {\displaystyle r_{-}=u-2{\sqrt {gh}},} so the equations in characteristic form are: d d t ( u + 2 g h ) = g ( S − S f ) along d x d t = u + g h and d d t ( u − 2 g h ) = g ( S − S f ) along d x d t = u − g h . {\displaystyle {\begin{aligned}&{\frac {\mathrm {d} }{\mathrm {d} t}}\left(u+2{\sqrt {gh}}\right)=g\left(S-S_{f}\right)&&{\text{along}}\quad {\frac {\mathrm {d} x}{\mathrm {d} t}}=u+{\sqrt {gh}}\quad {\text{and}}\\&{\frac {\mathrm {d} }{\mathrm {d} t}}\left(u-2{\sqrt {gh}}\right)=g\left(S-S_{f}\right)&&{\text{along}}\quad {\frac {\mathrm {d} x}{\mathrm {d} t}}=u-{\sqrt {gh}}.\end{aligned}}} The Riemann invariants and method of characteristics for a prismatic channel of arbitrary cross-section are described by Didenkulova & Pelinovsky (2011). The characteristics and Riemann invariants provide important information on the behavior of the flow, as well as that they may be used in the process of obtaining (analytical or numerical) solutions. === Hamiltonian structure for frictionless flow === In case there is no friction and the channel has a rectangular prismatic cross section, the Saint-Venant equations have a Hamiltonian structure. The Hamiltonian H is equal to the energy of the free-surface flow: H = ρ ∫ ( 1 2 A u 2 + 1 2 g B ζ 2 ) d x , {\displaystyle H=\rho \int \left({\frac {1}{2}}Au^{2}+{\frac {1}{2}}gB\zeta ^{2}\right)\mathrm {d} x,} with constant B the channel width and ρ the constant fluid density. 
Hamilton's equations then are: ρ B ∂ ζ ∂ t + ∂ ∂ x ( ∂ H ∂ u ) = ρ ( B ∂ ζ ∂ t + ∂ ( A u ) ∂ x ) = ρ ( ∂ A ∂ t + ∂ ( A u ) ∂ x ) = 0 , ρ B ∂ u ∂ t + ∂ ∂ x ( ∂ H ∂ ζ ) = ρ B ( ∂ u ∂ t + u ∂ u ∂ x + g ∂ ζ ∂ x ) = 0 , {\displaystyle {\begin{aligned}&\rho B{\frac {\partial \zeta }{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {\partial H}{\partial u}}\right)=\rho \left(B{\frac {\partial \zeta }{\partial t}}+{\frac {\partial (Au)}{\partial x}}\right)=\rho \left({\frac {\partial A}{\partial t}}+{\frac {\partial (Au)}{\partial x}}\right)=0,\\&\rho B{\frac {\partial u}{\partial t}}+{\frac {\partial }{\partial x}}\left({\frac {\partial H}{\partial \zeta }}\right)=\rho B\left({\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial \zeta }{\partial x}}\right)=0,\end{aligned}}} since ∂A/∂ζ = B). === Derived modelling === ==== Dynamic wave ==== The dynamic wave is the full one-dimensional Saint-Venant equation. It is numerically challenging to solve, but is valid for all channel flow scenarios. The dynamic wave is used for modeling transient storms in modeling programs including Mascaret (EDF), SIC (Irstea), HEC-RAS, Infoworks ICM MIKE 11, Wash 123d and SWMM5. In the order of increasing simplifications, by removing some terms of the full 1D Saint-Venant equations (aka Dynamic wave equation), we get the also classical Diffusive wave equation and Kinematic wave equation. ==== Diffusive wave ==== For the diffusive wave it is assumed that the inertial terms are less than the gravity, friction, and pressure terms. The diffusive wave can therefore be more accurately described as a non-inertia wave, and is written as: g ∂ h ∂ x + g ( S f − S ) = 0. {\displaystyle g{\frac {\partial h}{\partial x}}+g(S_{f}-S)=0.} The diffusive wave is valid when the inertial acceleration is much smaller than all other forms of acceleration, or in other words when there is primarily subcritical flow, with low Froude values. Models that use the diffusive wave assumption include MIKE SHE and LISFLOOD-FP. In the SIC (Irstea) software this options is also available, since the 2 inertia terms (or any of them) can be removed in option from the interface. ==== Kinematic wave ==== For the kinematic wave it is assumed that the flow is uniform, and that the friction slope is approximately equal to the slope of the channel. This simplifies the full Saint-Venant equation to the kinematic wave: S f − S = 0. {\displaystyle S_{f}-S=0.} The kinematic wave is valid when the change in wave height over distance and velocity over distance and time is negligible relative to the bed slope, e.g. for shallow flows over steep slopes. The kinematic wave is used in HEC-HMS. === Derivation from Navier–Stokes equations === The 1-D Saint-Venant momentum equation can be derived from the Navier–Stokes equations that describe fluid motion. 
The x-component of the Navier–Stokes equations – when expressed in Cartesian coordinates in the x-direction – can be written as: ∂ u ∂ t + u ∂ u ∂ x + v ∂ u ∂ y + w ∂ u ∂ z = − ∂ p ∂ x 1 ρ + ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 ) + f x , {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}+w{\frac {\partial u}{\partial z}}=-{\frac {\partial p}{\partial x}}{\frac {1}{\rho }}+\nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)+f_{x},} where u is the velocity in the x-direction, v is the velocity in the y-direction, w is the velocity in the z-direction, t is time, p is the pressure, ρ is the density of water, ν is the kinematic viscosity, and fx is the body force in the x-direction. If it is assumed that friction is taken into account as a body force, then ν {\displaystyle \nu } can be assumed as zero so: ν ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 ) = 0. {\displaystyle \nu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)=0.} Assuming one-dimensional flow in the x-direction it follows that: v ∂ u ∂ y + w ∂ u ∂ z = 0 {\displaystyle v{\frac {\partial u}{\partial y}}+w{\frac {\partial u}{\partial z}}=0} Assuming also that the pressure distribution is approximately hydrostatic it follows that: p = ρ g h {\displaystyle p=\rho gh} or in differential form: ∂ p = ρ g ( ∂ h ) . {\displaystyle \partial p=\rho g(\partial h).} And when these assumptions are applied to the x-component of the Navier–Stokes equations: − ∂ p ∂ x 1 ρ = − 1 ρ ρ g ( ∂ h ) ∂ x = − g ∂ h ∂ x . {\displaystyle -{\frac {\partial p}{\partial x}}{\frac {1}{\rho }}=-{\frac {1}{\rho }}{\frac {\rho g\left(\partial h\right)}{\partial x}}=-g{\frac {\partial h}{\partial x}}.} There are 2 body forces acting on the channel fluid, namely, gravity and friction: f x = f x , g + f x , f {\displaystyle f_{x}=f_{x,g}+f_{x,f}} where fx,g is the body force due to gravity and fx,f is the body force due to friction. fx,g can be calculated using basic physics and trigonometry: F g = sin ⁡ ( θ ) g M {\displaystyle F_{g}=\sin(\theta )gM} where Fg is the force of gravity in the x-direction, θ is the angle, and M is the mass. The expression for sin θ can be simplified using trigonometry as: sin ⁡ θ = opp hyp . {\displaystyle \sin \theta ={\frac {\text{opp}}{\text{hyp}}}.} For small θ (reasonable for almost all streams) it can be assumed that: sin ⁡ θ = tan ⁡ θ = opp adj = S {\displaystyle \sin \theta =\tan \theta ={\frac {\text{opp}}{\text{adj}}}=S} and given that fx represents a force per unit mass, the expression becomes: f x , g = g S . {\displaystyle f_{x,g}=gS.} Assuming the energy grade line is not the same as the channel slope, and for a reach of consistent slope there is a consistent friction loss, it follows that: f x , f = S f g . 
{\displaystyle f_{x,f}=S_{f}g.} Combining all of these assumptions arrives at the 1-dimensional Saint-Venant equation in the x-direction: ∂ u ∂ t + u ∂ u ∂ x + g ∂ h ∂ x + g ( S f − S ) = 0 , {\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+g{\frac {\partial h}{\partial x}}+g(S_{f}-S)=0,} ( a ) ( b ) ( c ) ( d ) ( e ) {\displaystyle (a)\quad \ \ (b)\quad \ \ \ (c)\qquad \ \ \ (d)\quad (e)\ } where (a) is the local acceleration term, (b) is the convective acceleration term, (c) is the pressure gradient term, (d) is the friction term, and (e) is the gravity term. The local acceleration (a) can also be thought of as the "unsteady term" as this describes some change in velocity over time. The convective acceleration (b) is an acceleration caused by some change in velocity over position, for example the speeding up or slowing down of a fluid entering a constriction or an opening, respectively. Both these terms make up the inertia terms of the 1-dimensional Saint-Venant equation. The pressure gradient term (c) describes how pressure changes with position, and since the pressure is assumed hydrostatic, this is the change in head over position. The friction term (d) accounts for losses in energy due to friction, while the gravity term (e) is the acceleration due to bed slope. == Wave modelling by shallow-water equations == Shallow-water equations can be used to model Rossby and Kelvin waves in the atmosphere, rivers, lakes and oceans as well as gravity waves in a smaller domain (e.g. surface waves in a bath). In order for shallow-water equations to be valid, the wavelength of the phenomenon they are supposed to model has to be much larger than the depth of the basin where the phenomenon takes place. Somewhat smaller wavelengths can be handled by extending the shallow-water equations using the Boussinesq approximation to incorporate dispersion effects. Shallow-water equations are especially suitable to model tides, which have very large length scales (over hundreds of kilometers). For tidal motion, even a very deep ocean may be considered shallow, as its depth will always be much smaller than the tidal wavelength. == Turbulence modelling using non-linear shallow-water equations == The shallow-water equations, in their non-linear form, are an obvious candidate for modelling turbulence in the atmosphere and oceans, i.e. geophysical turbulence. An advantage of this, over the quasi-geostrophic equations, is that it allows solutions like gravity waves, while also conserving energy and potential vorticity. However, there are also some disadvantages as far as geophysical applications are concerned - they have a non-quadratic expression for total energy and a tendency for waves to become shock waves. Some alternate models have been proposed which prevent shock formation. One alternative is to modify the "pressure term" in the momentum equation, but it results in a complicated expression for kinetic energy. Another option is to modify the non-linear terms in all equations, which gives a quadratic expression for kinetic energy, avoids shock formation, but conserves only linearized potential vorticity. == See also == Waves and shallow water == Notes == == Further reading == == External links == Derivation of the shallow-water equations from first principles (instead of simplifying the Navier–Stokes equations, some analytical solutions)
Wikipedia/Shallow-water_equations
Bernoulli's principle is a key concept in fluid dynamics that relates pressure, speed and height. For example, for a fluid flowing horizontally Bernoulli's principle states that an increase in the speed occurs simultaneously with a decrease in pressure: Ch.3 : 156–164, § 3.5  The principle is named after the Swiss mathematician and physicist Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler in 1752 who derived Bernoulli's equation in its usual form. Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid is the same at all points that are free of viscous forces. This requires that the sum of kinetic energy, potential energy and internal energy remains constant.: § 3.5  Thus an increase in the speed of the fluid—implying an increase in its kinetic energy—occurs with a simultaneous decrease in (the sum of) its potential energy (including the static pressure) and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential ρ g h) is the same everywhere.: Example 3.5 and p.116  Bernoulli's principle can also be derived directly from Isaac Newton's second law of motion. When a fluid is flowing horizontally from a region of high pressure to a region of low pressure, there is more pressure from behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. Bernoulli's principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. thermal radiation) are small and can be neglected. However, the principle can be applied to various types of flow within these bounds, resulting in various forms of Bernoulli's equation. The simple form of Bernoulli's equation is valid for incompressible flows (e.g. most liquid flows and gases moving at low Mach number). More advanced forms may be applied to compressible flows at higher Mach numbers. == Incompressible flow equation == In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible, and these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow. 
A common form of Bernoulli's equation is: {\displaystyle {\frac {v^{2}}{2}}+gz+{\frac {p}{\rho }}={\text{constant}}} (A) where: v {\displaystyle v} is the fluid flow speed at a point, g {\displaystyle g} is the acceleration due to gravity, z {\displaystyle z} is the elevation of the point above a reference plane, with the positive z {\displaystyle z} -direction pointing upward—so in the direction opposite to the gravitational acceleration, p {\displaystyle p} is the static pressure at the chosen point, and ρ {\displaystyle \rho } is the density of the fluid at all points in the fluid. Bernoulli's equation and the Bernoulli constant are applicable throughout any region of flow where the energy per unit mass is uniform. Because the energy per unit mass of liquid in a well-mixed reservoir is uniform throughout, Bernoulli's equation can be used to analyze the fluid flow everywhere in that reservoir (including pipes or flow fields that the reservoir feeds) except where viscous forces dominate and erode the energy per unit mass.: Example 3.5 and p.116  The following assumptions must be met for this Bernoulli equation to apply:: 265  the flow must be steady, that is, the flow parameters (velocity, density, etc.) at any point cannot change with time; the flow must be incompressible—even though pressure varies, the density must remain constant along a streamline; and friction by viscous forces must be negligible. For conservative force fields (not limited to the gravitational field), Bernoulli's equation can be generalized as:: 265  v 2 2 + Ψ + p ρ = constant {\displaystyle {\frac {v^{2}}{2}}+\Psi +{\frac {p}{\rho }}={\text{constant}}} where Ψ is the force potential at the point considered. For example, for the Earth's gravity Ψ = gz. By multiplying with the fluid density ρ, equation (A) can be rewritten as: 1 2 ρ v 2 + ρ g z + p = constant {\displaystyle {\tfrac {1}{2}}\rho v^{2}+\rho gz+p={\text{constant}}} or: q + ρ g h = p 0 + ρ g z = constant {\displaystyle q+\rho gh=p_{0}+\rho gz={\text{constant}}} where q = 1/2 ρv2 is dynamic pressure, h = z + p/ρg is the piezometric head or hydraulic head (the sum of the elevation z and the pressure head) and p0 = p + q is the stagnation pressure (the sum of the static pressure p and dynamic pressure q). The constant in the Bernoulli equation can be normalized. A common approach is in terms of total head or energy head H: H = z + p ρ g + v 2 2 g = h + v 2 2 g , {\displaystyle H=z+{\frac {p}{\rho g}}+{\frac {v^{2}}{2g}}=h+{\frac {v^{2}}{2g}},} The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids—when the pressure becomes too low—cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid. === Simplified form === In many applications of Bernoulli's equation, the change in the ρgz term is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z is so small the ρgz term can be omitted. This allows the above equation to be presented in the following simplified form: p + q = p 0 {\displaystyle p+q=p_{0}} where p0 is called total pressure, and q is dynamic pressure.
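The relation p + q = p0 is exactly what a Pitot-static tube exploits: measuring the difference between the total and static pressures gives the dynamic pressure, and hence the flow speed. The following is a minimal sketch of that inversion; the air density and the pressure readings are assumed illustrative numbers, not data from any particular instrument.

```python
import math

def speed_from_dynamic_pressure(q, rho):
    """Invert q = 1/2 * rho * v**2 for the flow speed v."""
    return math.sqrt(2.0 * q / rho)

rho_air = 1.225          # kg/m^3, assumed sea-level air density
p_total = 101_900.0      # Pa, stagnation (total) pressure at the Pitot port (assumed)
p_static = 101_325.0     # Pa, static pressure at the static port (assumed)

q = p_total - p_static                      # dynamic pressure, q = p0 - p
v = speed_from_dynamic_pressure(q, rho_air)
print(f"dynamic pressure q = {q:.0f} Pa -> flow speed v = {v:.1f} m/s")
```

Because it assumes constant density, this inversion is only trustworthy at low Mach numbers; the compressible-flow forms discussed in the next section are needed beyond roughly Mach 0.3.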
Many authors refer to the pressure p as static pressure to distinguish it from total pressure p0 and dynamic pressure q. In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure.": § 3.5  The simplified form of Bernoulli's equation can be summarized in the following memorable word equation:: § 3.5  Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure p and dynamic pressure q. Their sum p + q is defined to be the total pressure p0. The significance of Bernoulli's principle can now be summarized as "total pressure is constant in any region free of viscous forces". If the fluid flow is brought to rest at some point, this point is called a stagnation point, and at this point the static pressure is equal to the stagnation pressure. If the fluid flow is irrotational, the total pressure is uniform and Bernoulli's principle can be summarized as "total pressure is constant everywhere in the fluid flow".: Equation 3.12  It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight and ships moving in open bodies of water. However, Bernoulli's principle importantly does not apply in the boundary layer such as in flow through long pipes. === Unsteady potential flow === The Bernoulli equation for unsteady potential flow is used in the theory of ocean surface waves and acoustics. For an irrotational flow, the flow velocity can be described as the gradient ∇φ of a velocity potential φ. In that case, and for a constant density ρ, the momentum equations of the Euler equations can be integrated to:: 383  ∂ φ ∂ t + 1 2 v 2 + p ρ + g z = f ( t ) , {\displaystyle {\frac {\partial \varphi }{\partial t}}+{\tfrac {1}{2}}v^{2}+{\frac {p}{\rho }}+gz=f(t),} which is a Bernoulli equation valid also for unsteady—or time dependent—flows. Here ⁠∂φ/∂t⁠ denotes the partial derivative of the velocity potential φ with respect to time t, and v = |∇φ| is the flow speed. The function f(t) depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment t applies in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case f and ⁠∂φ/∂t⁠ are constants so equation (A) can be applied in every point of the fluid domain.: 383  Further f(t) can be made equal to zero by incorporating it into the velocity potential using the transformation: Φ = φ − ∫ t 0 t f ( τ ) d τ , {\displaystyle \Phi =\varphi -\int _{t_{0}}^{t}f(\tau )\,\mathrm {d} \tau ,} resulting in: ∂ Φ ∂ t + 1 2 v 2 + p ρ + g z = 0. {\displaystyle {\frac {\partial \Phi }{\partial t}}+{\tfrac {1}{2}}v^{2}+{\frac {p}{\rho }}+gz=0.} Note that the relation of the potential to the flow velocity is unaffected by this transformation: ∇Φ = ∇φ. The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian mechanics. == Compressible flow equation == Bernoulli developed his principle from observations on liquids, and Bernoulli's equation is valid for ideal fluids: those that are inviscid, incompressible and subjected only to conservative forces. 
It is sometimes valid for the flow of gases as well, provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas. If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation in its incompressible flow form cannot be assumed to be valid. However, if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also the gas density will be proportional to the ratio of pressure and absolute temperature; however, this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable. In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough. It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics. === Compressible flow in fluid dynamics === For a compressible fluid, with a barotropic equation of state, and under the action of conservative forces, v 2 2 + ∫ p 1 p d p ~ ρ ( p ~ ) + Ψ = constant (along a streamline) {\displaystyle {\frac {v^{2}}{2}}+\int _{p_{1}}^{p}{\frac {\mathrm {d} {\tilde {p}}}{\rho \left({\tilde {p}}\right)}}+\Psi ={\text{constant (along a streamline)}}} where: p is the pressure ρ is the density and ρ(p) indicates that it is a function of pressure v is the flow speed Ψ is the potential associated with the conservative force field, often the gravitational potential In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation for an ideal gas becomes:: § 3.11  v 2 2 + g z + ( γ γ − 1 ) p ρ = constant (along a streamline) {\displaystyle {\frac {v^{2}}{2}}+gz+\left({\frac {\gamma }{\gamma -1}}\right){\frac {p}{\rho }}={\text{constant (along a streamline)}}} where, in addition to the terms listed above: γ is the ratio of the specific heats of the fluid g is the acceleration due to gravity z is the elevation of the point above a reference plane In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the term gz can be omitted. 
A very useful form of the equation is then: v 2 2 + ( γ γ − 1 ) p ρ = ( γ γ − 1 ) p 0 ρ 0 {\displaystyle {\frac {v^{2}}{2}}+\left({\frac {\gamma }{\gamma -1}}\right){\frac {p}{\rho }}=\left({\frac {\gamma }{\gamma -1}}\right){\frac {p_{0}}{\rho _{0}}}} where: p0 is the total pressure ρ0 is the total density === Compressible flow in thermodynamics === The most general form of the equation, suitable for use in thermodynamics in case of (quasi) steady flow, is:: § 3.5 : § 5 : § 5.9  v 2 2 + Ψ + w = constant . {\displaystyle {\frac {v^{2}}{2}}+\Psi +w={\text{constant}}.} Here w is the enthalpy per unit mass (also known as specific enthalpy), which is also often written as h (not to be confused with "head" or "height"). Note that w = e + p ρ ( = γ γ − 1 p ρ ) {\displaystyle w=e+{\frac {p}{\rho }}~~~\left(={\frac {\gamma }{\gamma -1}}{\frac {p}{\rho }}\right)} where e is the thermodynamic energy per unit mass, also known as the specific internal energy. So, for constant internal energy e {\displaystyle e} the equation reduces to the incompressible-flow form. The constant on the right-hand side is often called the Bernoulli constant and denoted b. For steady inviscid adiabatic flow with no additional sources or sinks of energy, b is constant along any given streamline. More generally, when b may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below). When the change in Ψ can be ignored, a very useful form of this equation is: v 2 2 + w = w 0 {\displaystyle {\frac {v^{2}}{2}}+w=w_{0}} where w0 is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature. When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy. === Unsteady potential flow === For a compressible fluid, with a barotropic equation of state, the unsteady momentum conservation equation ∂ v → ∂ t + ( v → ⋅ ∇ ) v → = − g → − ∇ p ρ {\displaystyle {\frac {\partial {\vec {v}}}{\partial t}}+\left({\vec {v}}\cdot \nabla \right){\vec {v}}=-{\vec {g}}-{\frac {\nabla p}{\rho }}} With the irrotational assumption, namely, the flow velocity can be described as the gradient ∇φ of a velocity potential φ. 
The unsteady momentum conservation equation becomes ∂ ∇ ϕ ∂ t + ∇ ( ∇ ϕ ⋅ ∇ ϕ 2 ) = − ∇ Ψ − ∇ ∫ p 1 p d p ~ ρ ( p ~ ) {\displaystyle {\frac {\partial \nabla \phi }{\partial t}}+\nabla \left({\frac {\nabla \phi \cdot \nabla \phi }{2}}\right)=-\nabla \Psi -\nabla \int _{p_{1}}^{p}{\frac {d{\tilde {p}}}{\rho ({\tilde {p}})}}} which leads to ∂ ϕ ∂ t + ∇ ϕ ⋅ ∇ ϕ 2 + Ψ + ∫ p 1 p d p ~ ρ ( p ~ ) = constant {\displaystyle {\frac {\partial \phi }{\partial t}}+{\frac {\nabla \phi \cdot \nabla \phi }{2}}+\Psi +\int _{p_{1}}^{p}{\frac {d{\tilde {p}}}{\rho ({\tilde {p}})}}={\text{constant}}} In this case, the above equation for isentropic flow becomes: ∂ ϕ ∂ t + ∇ ϕ ⋅ ∇ ϕ 2 + Ψ + γ γ − 1 p ρ = constant {\displaystyle {\frac {\partial \phi }{\partial t}}+{\frac {\nabla \phi \cdot \nabla \phi }{2}}+\Psi +{\frac {\gamma }{\gamma -1}}{\frac {p}{\rho }}={\text{constant}}} == Derivations == == Applications == In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid, and a small viscosity often has a large effect on the flow. Bernoulli's principle can be used to calculate the lift force on an airfoil, if the behaviour of the fluid flow in the vicinity of the foil is known. For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force. Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations, which were established by Bernoulli over a century before the first man-made wings were used for the purpose of flight. The basis of a carburetor used in many reciprocating engines is a throat in the air flow to create a region of low pressure to draw fuel into the carburetor and mix it thoroughly with the incoming air. The low pressure in the throat can be explained by Bernoulli's principle, where air in the throat is moving at its fastest speed and therefore it is at its lowest pressure. The carburetor may or may not use the difference between the two static pressures which result from the Venturi effect on the air flow in order to force the fuel to flow, and as a basis a carburetor may use the difference in pressure between the throat and local air pressure in the float bowl, or between the throat and a Pitot tube at the air entry. An injector on a steam locomotive or a static boiler. The pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure.: § 3.8  A De Laval nozzle utilizes Bernoulli's principle to create a force by turning pressure energy generated by the combustion of propellants into velocity. This then generates thrust by way of Newton's third law of motion. The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. 
For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently, Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect. The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, which is compatible with Bernoulli's principle. Increased viscosity lowers this drain rate; this is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice. The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper. During a cricket match, bowlers continually polish one side of the ball. After some time, one side is quite rough and the other is still smooth. Hence, when the ball is bowled and passes through air, the speed on one side of the ball is faster than on the other, and this results in a pressure difference between the sides; this leads to the ball rotating ("swinging") while travelling through the air, giving advantage to the bowlers. == Misconceptions == === Airfoil lift === One of the most common erroneous explanations of aerodynamic lift asserts that the air must traverse the upper and lower surfaces of a wing in the same amount of time, implying that since the upper surface presents a longer path the air must be moving over the top of the wing faster than over the bottom. Bernoulli's principle is then cited to conclude that the pressure on top of the wing must be lower than on the bottom. Equal transit time applies to the flow around a body generating no lift, but there is no physical principle that requires equal transit time in cases of bodies generating lift. In fact, theory predicts – and experiments confirm – that the air traverses the top surface of a body experiencing lift in a shorter time than it traverses the bottom surface; the explanation based on equal transit time is false. While the equal-time explanation is false, it is not the Bernoulli principle that is false, because this principle is well established; Bernoulli's equation is used correctly in common mathematical treatments of aerodynamic lift. === Common classroom demonstrations === There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle. One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure". One problem with this explanation can be seen by blowing along the bottom of the paper: if the deflection was caused by faster moving air, then the paper should deflect downward; but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom. Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air; the air does not have lower pressure just because it is moving; in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air. 
A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli's equation since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field. As the wording of the principle can change its implications, stating the principle correctly is important. What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up and vice versa. Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields. A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve. Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed; in other words, as the air passes over the paper, it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration. Other common classroom demonstrations, such as blowing between two suspended spheres, inflating a large bag, or suspending a ball in an airstream are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure". == See also == Torricelli's law Coandă effect Euler equations – for the flow of an inviscid fluid Hydraulics – applied fluid mechanics for liquids Navier–Stokes equations – for the flow of a viscous fluid Teapot effect Terminology in fluid dynamics == Notes == == References == == External links == The Flow of Dry Water - The Feynman Lectures on Physics Science 101 Q: Is It Really Caused by the Bernoulli Effect? Bernoulli equation calculator Millersville University – Applications of Euler's equation NASA – Beginner's guide to aerodynamics Archived 2012-07-15 at the Wayback Machine Misinterpretations of Bernoulli's equation – Weltner and Ingelman-Sundberg Archived 2012-02-08 at the Wayback Machine
Wikipedia/Bernoulli's_equation
In fluid dynamics, turbulence modeling is the construction and use of a mathematical model to predict the effects of turbulence. Turbulent flows are commonplace in most real-life scenarios. In spite of decades of research, there is no analytical theory to predict the evolution of these turbulent flows. The equations governing turbulent flows can only be solved directly for simple cases of flow. For most real-life turbulent flows, CFD simulations use turbulent models to predict the evolution of turbulence. These turbulence models are simplified constitutive equations that predict the statistical evolution of turbulent flows. == Closure problem == The Navier–Stokes equations govern the velocity and pressure of a fluid flow. In a turbulent flow, each of these quantities may be decomposed into a mean part and a fluctuating part. Averaging the equations gives the Reynolds-averaged Navier–Stokes (RANS) equations, which govern the mean flow. However, the nonlinearity of the Navier–Stokes equations means that the velocity fluctuations still appear in the RANS equations, in the nonlinear term − ρ v i ′ v j ′ ¯ {\displaystyle -\rho {\overline {v_{i}^{\prime }v_{j}^{\prime }}}} from the convective acceleration. This term is known as the Reynolds stress, R i j {\displaystyle R_{ij}} . Its effect on the mean flow is like that of a stress term, such as from pressure or viscosity. To obtain equations containing only the mean velocity and pressure, we need to close the RANS equations by modelling the Reynolds stress term R i j {\displaystyle R_{ij}} as a function of the mean flow, removing any reference to the fluctuating part of the velocity. This is the closure problem. == Eddy viscosity == Joseph Valentin Boussinesq was the first to attack the closure problem, by introducing the concept of eddy viscosity. In 1877 Boussinesq proposed relating the turbulence stresses to the mean flow to close the system of equations. Here the Boussinesq hypothesis is applied to model the Reynolds stress term. Note that a new proportionality constant ν t > 0 {\displaystyle \nu _{t}>0} , the (kinematic) turbulence eddy viscosity, has been introduced. Models of this type are known as eddy viscosity models (EVMs). − v i ′ v j ′ ¯ = ν t ( ∂ v i ¯ ∂ x j + ∂ v j ¯ ∂ x i ) − 2 3 k δ i j {\displaystyle -{\overline {v_{i}^{\prime }v_{j}^{\prime }}}=\nu _{t}\left({\frac {\partial {\overline {v_{i}}}}{\partial x_{j}}}+{\frac {\partial {\overline {v_{j}}}}{\partial x_{i}}}\right)-{\frac {2}{3}}k\delta _{ij}} which can be written in shorthand as − v i ′ v j ′ ¯ = 2 ν t S i j − 2 3 k δ i j {\displaystyle -{\overline {v_{i}^{\prime }v_{j}^{\prime }}}=2\nu _{t}S_{ij}-{\tfrac {2}{3}}k\delta _{ij}} where S i j {\displaystyle S_{ij}} is the mean rate of strain tensor ν t {\displaystyle \nu _{t}} is the (kinematic) turbulence eddy viscosity k = 1 2 v i ′ v i ′ ¯ {\displaystyle k={\tfrac {1}{2}}{\overline {v_{i}'v_{i}'}}} is the turbulence kinetic energy and δ i j {\displaystyle \delta _{ij}} is the Kronecker delta. In this model, the additional turbulence stresses are given by augmenting the molecular viscosity with an eddy viscosity. This can be a simple constant eddy viscosity (which works well for some free shear flows such as axisymmetric jets, 2-D jets, and mixing layers). 
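As a minimal sketch of how this closure is used in practice, the following Python fragment (using NumPy) evaluates the modelled Reynolds-stress tensor from the Boussinesq relation above for a simple shear flow; the eddy viscosity, turbulence kinetic energy and velocity gradient are assumed example values, not the output of any particular turbulence model.

import numpy as np

# Boussinesq closure: -u_i'u_j'bar = 2 nu_t S_ij - (2/3) k delta_ij,
# evaluated for an assumed mean velocity gradient (simple shear).
grad_u = np.array([[0.0, 10.0, 0.0],   # du_i/dx_j in 1/s (assumed)
                   [0.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
nu_t = 1.0e-3   # kinematic eddy viscosity, m^2/s (assumed)
k    = 0.05     # turbulence kinetic energy, m^2/s^2 (assumed)

S = 0.5 * (grad_u + grad_u.T)                        # mean rate-of-strain tensor S_ij
tau = 2.0 * nu_t * S - (2.0 / 3.0) * k * np.eye(3)   # modelled -u_i'u_j'bar
print(tau)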
The Boussinesq hypothesis – although not explicitly stated by Boussinesq at the time – effectively consists of the assumption that the Reynolds stress tensor is aligned with the strain tensor of the mean flow (i.e.: that the shear stresses due to turbulence act in the same direction as the shear stresses produced by the averaged flow). It has since been found to be significantly less accurate than most practitioners would assume. Still, turbulence models which employ the Boussinesq hypothesis have demonstrated significant practical value. In cases with well-defined shear layers, this is likely due the dominance of streamwise shear components, so that considerable relative errors in flow-normal components are still negligible in absolute terms. Beyond this, most eddy viscosity turbulence models contain coefficients which are calibrated against measurements, and thus produce reasonably accurate overall outcomes for flow fields of similar type as used for calibration. == Prandtl's mixing-length concept == Later, Ludwig Prandtl introduced the additional concept of the mixing length, along with the idea of a boundary layer. For wall-bounded turbulent flows, the eddy viscosity must vary with distance from the wall, hence the addition of the concept of a 'mixing length'. In the simplest wall-bounded flow model, the eddy viscosity is given by the equation: ν t = | ∂ u ∂ y | l m 2 {\displaystyle \nu _{t}=\left|{\frac {\partial u}{\partial y}}\right|l_{m}^{2}} where ∂ u ∂ y {\displaystyle {\frac {\partial u}{\partial y}}} is the partial derivative of the streamwise velocity (u) with respect to the wall normal direction (y) l m {\displaystyle l_{m}} is the mixing length. This simple model is the basis for the "law of the wall", which is a surprisingly accurate model for wall-bounded, attached (not separated) flow fields with small pressure gradients. More general turbulence models have evolved over time, with most modern turbulence models given by field equations similar to the Navier–Stokes equations. == Smagorinsky model for the sub-grid scale eddy viscosity == Joseph Smagorinsky was the first who proposed a formula for the eddy viscosity in Large Eddy Simulation models, based on the local derivatives of the velocity field and the local grid size: ν t = Δ x Δ y ( ∂ u ∂ x ) 2 + ( ∂ v ∂ y ) 2 + 1 2 ( ∂ u ∂ y + ∂ v ∂ x ) 2 {\displaystyle \nu _{t}=\Delta x\Delta y{\sqrt {\left({\frac {\partial u}{\partial x}}\right)^{2}+\left({\frac {\partial v}{\partial y}}\right)^{2}+{\frac {1}{2}}\left({\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial x}}\right)^{2}}}} In the context of Large Eddy Simulation, turbulence modeling refers to the need to parameterize the subgrid scale stress in terms of features of the filtered velocity field. This field is called subgrid-scale modeling. == Spalart–Allmaras, k–ε and k–ω models == The Boussinesq hypothesis is employed in the Spalart–Allmaras (S–A), k–ε (k–epsilon), and k–ω (k–omega) models and offers a relatively low cost computation for the turbulence viscosity ν t {\displaystyle \nu _{t}} . The S–A model uses only one additional equation to model turbulence viscosity transport, while the k–ε and k–ω models use two. == Common models == The following is a brief overview of commonly employed models in modern engineering applications. == References == === Notes === === Other === Absi, R. (2019) "Eddy Viscosity and Velocity Profiles in Fully-Developed Turbulent Channel Flows" Fluid Dyn (2019) 54: 137. https://doi.org/10.1134/S0015462819010014 Absi, R. 
(2021) "Reinvestigating the Parabolic-Shaped Eddy Viscosity Profile for Free Surface Flows" Hydrology 2021, 8(3), 126. https://doi.org/10.3390/hydrology8030126 Townsend, A. A. (1980) "The Structure of Turbulent Shear Flow" 2nd Edition (Cambridge Monographs on Mechanics), ISBN 0521298199 Bradshaw, P. (1971) "An introduction to turbulence and its measurement" (Pergamon Press), ISBN 0080166210 Wilcox, C. D. (1998), "Turbulence Modeling for CFD" 2nd Ed., (DCW Industries, La Cañada), ISBN 0963605100
Wikipedia/Turbulence_modelling
Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels. Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm. Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluids mechanics based on the use of classical viscometers are not capable of explaining haemodynamics. The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology. == Blood == Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids. === Viscosity of plasma === Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. Typical values for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m2. The viscosity of normal plasma varies with temperature in the same way as does that of its solvent water;a 3°C change in temperature in the physiological range (36.5°C to 39.5°C)reduces plasma viscosity by about 10%. === Osmotic pressure of plasma === The osmotic pressure of solution is determined by the number of particles present and by the temperature. For example, a 1 molar solution of a substance contains 6.022×1023 molecules per liter of that substance and at 0 °C it has an osmotic pressure of 2.27 MPa (22.4 atm). The osmotic pressure of the plasma affects the mechanics of the circulation in several ways. An alteration of the osmotic pressure difference across the membrane of a blood cell causes a shift of water and a change of cell volume. The changes in shape and flexibility affect the mechanical properties of whole blood. A change in plasma osmotic pressure alters the hematocrit, that is, the volume concentration of red cells in the whole blood by redistributing water between the intravascular and extravascular spaces. This in turn affects the mechanics of the whole blood. === Red blood cells === The red blood cell is highly flexible and biconcave in shape. Its membrane has a Young's modulus in the region of 106 Pa. Deformation in red blood cells is induced by shear stress. When a suspension is sheared, the red blood cells deform and spin because of the velocity gradient, with the rate of deformation and spin depending on the shear rate and the concentration. This can influence the mechanics of the circulation and may complicate the measurement of blood viscosity. 
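The osmotic-pressure figure quoted earlier in this section (about 2.27 MPa, or 22.4 atm, for a 1 molar solution at 0 °C) can be checked with the van 't Hoff relation π = cRT; the short Python check below assumes ideal dilute-solution behaviour.

# Check of the quoted osmotic pressure using the van 't Hoff relation pi = c R T.
R = 8.314      # gas constant, J/(mol K)
T = 273.15     # temperature, K (0 degrees C)
c = 1000.0     # concentration, mol/m^3 (1 mol per litre)

pi = c * R * T
print(f"osmotic pressure = {pi/1e6:.2f} MPa = {pi/101325:.1f} atm")   # ~2.27 MPa, ~22.4 atm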
It is true that in a steady state flow of a viscous fluid through a rigid spherical body immersed in the fluid, where we assume the inertia is negligible in such a flow, it is believed that the downward gravitational force of the particle is balanced by the viscous drag force. From this force balance the speed of fall can be shown to be given by Stokes' law U s = 2 9 ( ρ p − ρ f ) μ g a 2 {\displaystyle U_{s}={\frac {2}{9}}{\frac {\left(\rho _{p}-\rho _{f}\right)}{\mu }}g\,a^{2}} Where a is the particle radius, ρp, ρf are the respectively particle and fluid density μ is the fluid viscosity, g is the gravitational acceleration. From the above equation we can see that the sedimentation velocity of the particle depends on the square of the radius. If the particle is released from rest in the fluid, its sedimentation velocity Us increases until it attains the steady value called the terminal velocity (U), as shown above. === Hemodilution === Hemodilution is the dilution of the concentration of red blood cells and plasma constituents by partially substituting the blood with colloids or crystalloids. It is a strategy to avoid exposure of patients to the potential hazards of homologous blood transfusions. Hemodilution can be normovolemic, which implies the dilution of normal blood constituents by the use of expanders. During acute normovolemic hemodilution (ANH), blood subsequently lost during surgery contains proportionally fewer red blood cells per milliliter, thus minimizing intraoperative loss of the whole blood. Therefore, blood lost by the patient during surgery is not actually lost by the patient, for this volume is purified and redirected into the patient. On the other hand, hypervolemic hemodilution (HVH) uses acute preoperative volume expansion without any blood removal. In choosing a fluid, however, it must be assured that when mixed, the remaining blood behaves in the microcirculation as in the original blood fluid, retaining all its properties of viscosity. In presenting what volume of ANH should be applied one study suggests a mathematical model of ANH which calculates the maximum possible RCM savings using ANH, given the patients weight Hi and Hm. To maintain the normovolemia, the withdrawal of autologous blood must be simultaneously replaced by a suitable hemodilute. Ideally, this is achieved by isovolemia exchange transfusion of a plasma substitute with a colloid osmotic pressure (OP). A colloid is a fluid containing particles that are large enough to exert an oncotic pressure across the micro-vascular membrane. When debating the use of colloid or crystalloid, it is imperative to think about all the components of the starling equation: Q = K ( [ P c − P i ] S − [ P c − P i ] ) {\displaystyle \ Q=K([P_{c}-P_{i}]S-[P_{c}-P_{i}])} To identify the minimum safe hematocrit desirable for a given patient the following equation is useful: B L s = E B V ln ⁡ H i H m {\displaystyle \ BL_{s}=EBV\ln {\frac {H_{i}}{H_{m}}}} where EBV is the estimated blood volume; 70 mL/kg was used in this model and Hi (initial hematocrit) is the patient's initial hematocrit. From the equation above it is clear that the volume of blood removed during the ANH to the Hm is the same as the BLs. How much blood is to be removed is usually based on the weight, not the volume. 
The number of units that need to be removed to hemodilute to the maximum safe hematocrit (ANH) can be found by A N H = B L s 450 {\displaystyle ANH={\frac {BL_{s}}{450}}} This is based on the assumption that each unit removed by hemodilution has a volume of 450 mL (the actual volume of a unit will vary somewhat since completion of collection is dependent on weight and not volume). The model assumes that the hemodilute value is equal to the Hm prior to surgery, therefore, the re-transfusion of blood obtained by hemodilution must begin when SBL begins. The RCM available for retransfusion after ANH (RCMm) can be calculated from the patient's Hi and the final hematocrit after hemodilution(Hm) R C M = E V B × ( H i − H m ) {\displaystyle RCM=EVB\times (H_{i}-H_{m})} The maximum SBL that is possible when ANH is used without falling below Hm(BLH) is found by assuming that all the blood removed during ANH is returned to the patient at a rate sufficient to maintain the hematocrit at the minimum safe level B L H = R C M H H m {\displaystyle BL_{H}={\frac {RCM_{H}}{H_{m}}}} If ANH is used as long as SBL does not exceed BLH there will not be any need for blood transfusion. We can conclude from the foregoing that H should therefore not exceed s. The difference between the BLH and the BLs therefore is the incremental surgical blood loss (BLi) possible when using ANH. B L i = B L H − B L s {\displaystyle \ {BL_{i}}={BL_{H}}-{BL_{s}}} When expressed in terms of the RCM R C M i = B L i × H m {\displaystyle {RCM_{i}}={BL_{i}}\times {H_{m}}} Where RCMi is the red cell mass that would have to be administered using homologous blood to maintain the Hm if ANH is not used and blood loss equals BLH. The model used assumes ANH used for a 70 kg patient with an estimated blood volume of 70 ml/kg (4900 ml). A range of Hi and Hm was evaluated to understand conditions where hemodilution is necessary to benefit the patient. ==== Result ==== The result of the model calculations are presented in a table given in the appendix for a range of Hi from 0.30 to 0.50 with ANH performed to minimum hematocrits from 0.30 to 0.15. Given a Hi of 0.40, if the Hm is assumed to be 0.25.then from the equation above the RCM count is still high and ANH is not necessary, if BLs does not exceed 2303 ml, since the hemotocrit will not fall below Hm, although five units of blood must be removed during hemodilution. Under these conditions, to achieve the maximum benefit from the technique if ANH is used, no homologous blood will be required to maintain the Hm if blood loss does not exceed 2940 ml. In such a case, ANH can save a maximum of 1.1 packed red blood cell unit equivalent, and homologous blood transfusion is necessary to maintain Hm, even if ANH is used. This model can be used to identify when ANH may be used for a given patient and the degree of ANH necessary to maximize that benefit. For example, if Hi is 0.30 or less it is not possible to save a red cell mass equivalent to two units of homologous PRBC even if the patient is hemodiluted to an Hm of 0.15. That is because from the RCM equation the patient RCM falls short from the equation giving above. If Hi is 0.40 one must remove at least 7.5 units of blood during ANH, resulting in an Hm of 0.20 to save two units equivalence. Clearly, the greater the Hi and the greater the number of units removed during hemodilution, the more effective ANH is for preventing homologous blood transfusion. 
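The bookkeeping described above can be reproduced with a short Python sketch for the worked case Hi = 0.40, Hm = 0.25 and a 70 kg patient (EBV = 70 mL/kg); the 450 mL unit volume is the model assumption stated in the text, and the script is an illustration of the model, not clinical guidance.

import math

EBV = 70.0 * 70.0        # estimated blood volume, mL (70 mL/kg for a 70 kg patient)
Hi, Hm = 0.40, 0.25      # initial and minimum safe hematocrit

BLs  = EBV * math.log(Hi / Hm)   # max blood loss without ANH before transfusion is needed
ANHu = BLs / 450.0               # number of units removed during ANH (450 mL per unit assumed)
RCM  = EBV * (Hi - Hm)           # red cell mass made available by ANH
BLH  = RCM / Hm                  # max surgical blood loss possible when ANH is used

print(f"BLs  = {BLs:.0f} mL")     # about 2303 mL, as quoted above
print(f"ANHu = {ANHu:.1f} units") # about 5 units
print(f"BLH  = {BLH:.0f} mL")     # about 2940 mL, as quoted above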
The model here is designed to allow doctors to determine where ANH may be beneficial for a patient based on their knowledge of the Hi, the potential for SBL, and an estimate of the Hm. Though the model used a 70 kg patient, the result can be applied to any patient. To apply these result to any body weight, any of the values BLs, BLH and ANHH or PRBC given in the table need to be multiplied by the factor we will call T T = patient's weight in kg 70 {\displaystyle T={\frac {\text{patient's weight in kg}}{70}}} Basically, the model considered above is designed to predict the maximum RCM that can save ANH. In summary, the efficacy of ANH has been described mathematically by means of measurements of surgical blood loss and blood volume flow measurement. This form of analysis permits accurate estimation of the potential efficiency of the techniques and shows the application of measurement in the medical field. == Blood flow == === Cardiac output === The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO). Blood being pumped out of the heart first enters the aorta, the largest artery of the body. It then proceeds to divide into smaller and smaller arteries, then into arterioles, and eventually capillaries, where oxygen transfer occurs. The capillaries connect to venules, and the blood then travels back through the network of veins to the venae cavae into the right heart. The micro-circulation — the arterioles, capillaries, and venules —constitutes most of the area of the vascular system and is the site of the transfer of O2, glucose, and enzyme substrates into the cells. The venous system returns the de-oxygenated blood to the right heart where it is pumped into the lungs to become oxygenated and CO2 and other gaseous wastes exchanged and expelled during breathing. Blood then returns to the left side of the heart where it begins the process again. In a normal circulatory system, the volume of blood returning to the heart each minute is approximately equal to the volume that is pumped out each minute (the cardiac output). Because of this, the velocity of blood flow across each level of the circulatory system is primarily determined by the total cross-sectional area of that level. Cardiac output is determined by two methods. One is to use the Fick equation: C O = V O 2 / C a O 2 − C v O 2 {\displaystyle CO=VO2/C_{a}O_{2}-C_{v}O_{2}} The other thermodilution method is to sense the temperature change from a liquid injected in the proximal port of a Swan-Ganz to the distal port. Cardiac output is mathematically expressed by the following equation: C O = S V × H R {\displaystyle CO=SV\times HR} where CO = cardiac output (L/sec) SV = stroke volume (ml) HR = heart rate (bpm) The normal human cardiac output is 5-6 L/min at rest. Not all blood that enters the left ventricle exits the heart. What is left at the end of diastole (EDV) minus the stroke volume make up the end systolic volume (ESV). ==== Anatomical features ==== Circulatory system of species subjected to orthostatic blood pressure (such as arboreal snakes) has evolved with physiological and morphological features to overcome the circulatory disturbance. For instance, in arboreal snakes the heart is closer to the head, in comparison with aquatic snakes. This facilitates blood perfusion to the brain. 
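The cardiac-output relations given above lend themselves to a short numerical sketch; the stroke volume, heart rate and oxygen contents below are assumed, typical resting values and are not taken from the text.

# Cardiac output from CO = SV x HR and from the Fick principle.
SV = 70.0              # stroke volume, mL (assumed)
HR = 72.0              # heart rate, beats per minute (assumed)
CO = SV * HR / 1000.0  # cardiac output, L/min
print(f"CO (SV x HR) = {CO:.1f} L/min")

VO2  = 250.0           # oxygen consumption, mL O2 per minute (assumed)
CaO2 = 200.0           # arterial O2 content, mL O2 per litre of blood (assumed)
CvO2 = 150.0           # mixed venous O2 content, mL O2 per litre of blood (assumed)
CO_fick = VO2 / (CaO2 - CvO2)   # Fick principle
print(f"CO (Fick)    = {CO_fick:.1f} L/min")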
=== Turbulence === Blood flow is also affected by the smoothness of the vessels, resulting in either turbulent (chaotic) or laminar (smooth) flow. Smoothness is reduced by the buildup of fatty deposits on the arterial walls. The Reynolds number (denoted NR or Re) is a relationship that helps determine the behavior of a fluid in a tube, in this case blood in the vessel. The equation for this dimensionless relationship is written as: N R = ρ v L μ {\displaystyle NR={\frac {\rho vL}{\mu }}} ρ: density of the blood v: mean velocity of the blood L: characteristic dimension of the vessel, in this case diameter μ: viscosity of blood The Reynolds number is directly proportional to the velocity and diameter of the tube. Note that NR is directly proportional to the mean velocity as well as the diameter. A Reynolds number of less than 2300 is laminar fluid flow, which is characterized by constant flow motion, whereas a value of over 4000, is represented as turbulent flow. Due to its smaller radius and lowest velocity compared to other vessels, the Reynolds number at the capillaries is very low, resulting in laminar instead of turbulent flow. === Velocity === Often expressed in cm/s. This value is inversely related to the total cross-sectional area of the blood vessel and also differs per cross-section, because in normal condition the blood flow has laminar characteristics. For this reason, the blood flow velocity is the fastest in the middle of the vessel and slowest at the vessel wall. In most cases, the mean velocity is used. There are many ways to measure blood flow velocity, like videocapillary microscoping with frame-to-frame analysis, or laser Doppler anemometry. Blood velocities in arteries are higher during systole than during diastole. One parameter to quantify this difference is the pulsatility index (PI), which is equal to the difference between the peak systolic velocity and the minimum diastolic velocity divided by the mean velocity during the cardiac cycle. This value decreases with distance from the heart. P I = v s y s t o l e − v d i a s t o l e v m e a n {\displaystyle PI={\frac {v_{systole}-v_{diastole}}{v_{mean}}}} == Blood vessels == === Vascular resistance === Resistance is also related to vessel radius, vessel length, and blood viscosity. In a first approach based on fluids, as indicated by the Hagen–Poiseuille equation. The equation is as follows: Δ P = 8 μ l Q π r 4 {\displaystyle \Delta P={\frac {8\mu lQ}{\pi r^{4}}}} ∆P: pressure drop/gradient μ: viscosity l: length of tube. In the case of vessels with infinitely long lengths, l is replaced with diameter of the vessel. Q: flow rate of the blood in the vessel r: radius of the vessel In a second approach, more realistic of the vascular resistance and coming from experimental observations on blood flows, according to Thurston, there is a plasma release-cell layering at the walls surrounding a plugged flow. It is a fluid layer in which at a distance δ, viscosity η is a function of δ written as η(δ), and these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is the plugged flow which is hyperviscous because holding high concentration of RBCs. Thurston assembled this layer to the flow resistance to describe blood flow by means of a viscosity η(δ) and thickness δ from the wall layer. 
The blood resistance law appears as R adapted to blood flow profile : R = c L η ( δ ) ( π δ r 3 ) {\displaystyle R={\frac {cL\eta (\delta )}{(\pi \delta r^{3})}}} where R = resistance to blood flow c = constant coefficient of flow L = length of the vessel η(δ) = viscosity of blood in the wall plasma release-cell layering r = radius of the blood vessel δ = distance in the plasma release-cell layer Blood resistance varies depending on blood viscosity and its plugged flow (or sheath flow since they are complementary across the vessel section) size as well, and on the size of the vessels. Assuming steady, laminar flow in the vessel, the blood vessels behavior is similar to that of a pipe. For instance if p1 and p2 are pressures are at the ends of the tube, the pressure drop/gradient is: p 1 − p 2 l = Δ P {\displaystyle {\frac {p_{1}-p_{2}}{l}}=\Delta P} The larger arteries, including all large enough to see without magnification, are conduits with low vascular resistance (assuming no advanced atherosclerotic changes) with high flow rates that generate only small drops in pressure. The smaller arteries and arterioles have higher resistance, and confer the main blood pressure drop across major arteries to capillaries in the circulatory system. In the arterioles blood pressure is lower than in the major arteries. This is due to bifurcations, which cause a drop in pressure. The more bifurcations, the higher the total cross-sectional area, therefore the pressure across the surface drops. This is why the arterioles have the highest pressure-drop. The pressure drop of the arterioles is the product of flow rate and resistance: ∆P=Q xresistance. The high resistance observed in the arterioles, which factor largely in the ∆P is a result of a smaller radius of about 30 μm. The smaller the radius of a tube, the larger the resistance to fluid flow. Immediately following the arterioles are the capillaries. Following the logic observed in the arterioles, we expect the blood pressure to be lower in the capillaries compared to the arterioles. Since pressure is a function of force per unit area, (P = F/A), the larger the surface area, the lesser the pressure when an external force acts on it. Though the radii of the capillaries are very small, the network of capillaries has the largest surface area in the vascular network. They are known to have the largest surface area (485 mm^2) in the human vascular network. The larger the total cross-sectional area, the lower the mean velocity as well as the pressure. Substances called vasoconstrictors can reduce the size of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the size of blood vessels, thereby decreasing arterial pressure. If the blood viscosity increases (gets thicker), the result is an increase in arterial pressure. Certain medical conditions can change the viscosity of the blood. For instance, anemia (low red blood cell concentration) reduces viscosity, whereas increased red blood cell concentration increases viscosity. It had been thought that aspirin and related "blood thinner" drugs decreased the viscosity of blood, but instead studies found that they act by reducing the tendency of the blood to clot. To determine the systemic vascular resistance (SVR) the formula for calculating all resistance is used. R = ( Δ p r e s s u r e ) / f l o w . 
{\displaystyle R=(\Delta pressure)/flow.} This translates for SVR into: S V R = ( M A P − C V P ) / C O {\displaystyle SVR=(MAP-CVP)/CO} Where SVR = systemic vascular resistance (mmHg/L/min) MAP = mean arterial pressure (mmHg) CVP = central venous pressure (mmHg) CO = cardiac output (L/min) To get this in Wood units the answer is multiplied by 80. Normal systemic vascular resistance is between 900 and 1440 dynes/sec/cm−5. === Wall tension === Regardless of site, blood pressure is related to the wall tension of the vessel according to the Young–Laplace equation (assuming that the thickness of the vessel wall is very small as compared to the diameter of the lumen): σ θ = P r t {\displaystyle \sigma _{\theta }={\dfrac {Pr}{t}}\ } where P is the blood pressure t is the wall thickness r is the inside radius of the cylinder. σ θ {\displaystyle \sigma _{\theta }\!} is the cylinder stress or "hoop stress". For the thin-walled assumption to be valid the vessel must have a wall thickness of no more than about one-tenth (often cited as one twentieth) of its radius. The cylinder stress, in turn, is the average force exerted circumferentially (perpendicular both to the axis and to the radius of the object) in the cylinder wall, and can be described as: σ θ = F t l {\displaystyle \sigma _{\theta }={\dfrac {F}{tl}}\ } where: F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides: t is the radial thickness of the cylinder l is the axial length of the cylinder === Stress === When force is applied to a material it starts to deform or move. As the force needed to deform a material (e.g. to make a fluid flow) increases with the size of the surface of the material A., the magnitude of this force F is proportional to the area A of the portion of the surface. Therefore, the quantity (F/A) that is the force per unit area is called the stress. The shear stress at the wall that is associated with blood flow through an artery depends on the artery size and geometry and can range between 0.5 and 4 Pa. σ = F A {\displaystyle \sigma ={\frac {F}{A}}} . Under normal conditions, to avoid atherogenesis, thrombosis, smooth muscle proliferation and endothelial apoptosis, shear stress maintains its magnitude and direction within an acceptable range. In some cases occurring due to blood hammer, shear stress reaches larger values. While the direction of the stress may also change by the reverse flow, depending on the hemodynamic conditions. Therefore, this situation can lead to atherosclerosis disease. === Capacitance === Veins are described as the "capacitance vessels" of the body because over 70% of the blood volume resides in the venous system. Veins are more compliant than arteries and expand to accommodate changing volume. == Blood pressure == The blood pressure in the circulation is principally due to the pumping action of the heart. The pumping action of the heart generates pulsatile blood flow, which is conducted into the arteries, across the micro-circulation and eventually, back via the venous system to the heart. During each heartbeat, systemic arterial blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. In physiology, these are often simplified into one value, the mean arterial pressure (MAP), which is calculated as follows: M A P = D P + 1 / 3 ( P P ) {\displaystyle MAP=DP+1/3(PP)} where: MAP = Mean Arterial Pressure DP = Diastolic blood pressure PP = Pulse pressure which is systolic pressure minus diastolic pressure. 
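A brief numerical check of the mean arterial pressure formula just given, using an assumed brachial reading of 120/80 mmHg:

# MAP = DP + (1/3) PP for an assumed 120/80 mmHg reading.
systolic, diastolic = 120.0, 80.0
pulse_pressure = systolic - diastolic
MAP = diastolic + pulse_pressure / 3.0
print(f"MAP = {MAP:.0f} mmHg")   # about 93 mmHg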
Differences in mean blood pressure are responsible for blood flow from one location to another in the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. Mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure in veins. The relationship between pressure, flow, and resistance is expressed in the following equation: F l o w = P r e s s u r e / R e s i s t a n c e {\displaystyle Flow=Pressure/Resistance} When applied to the circulatory system, we get: C O = ( M A P − R A P ) / S V R {\displaystyle CO=(MAP-RAP)/SVR} where CO = cardiac output (in L/min) MAP = mean arterial pressure (in mmHg), the average pressure of blood as it leaves the heart RAP = right atrial pressure (in mmHg), the average pressure of blood as it returns to the heart SVR = systemic vascular resistance (in mmHg * min/L) A simplified form of this equation assumes right atrial pressure is approximately 0: C O ≈ M A P / S V R {\displaystyle CO\approx MAP/SVR} The ideal blood pressure in the brachial artery, where standard blood pressure cuffs measure pressure, is <120/80 mmHg. Other major arteries have similar levels of blood pressure recordings indicating very low disparities among major arteries. In the innominate artery, the average reading is 110/70 mmHg, the right subclavian artery averages 120/80 and the abdominal aorta is 110/70 mmHg. The relatively uniform pressure in the arteries indicate that these blood vessels act as a pressure reservoir for fluids that are transported within them. Pressure drops gradually as blood flows from the major arteries, through the arterioles, the capillaries until blood is pushed up back into the heart via the venules, the veins through the vena cava with the help of the muscles. At any given pressure drop, the flow rate is determined by the resistance to the blood flow. In the arteries, with the absence of diseases, there is very little or no resistance to blood. The vessel diameter is the most principal determinant to control resistance. Compared to other smaller vessels in the body, the artery has a much bigger diameter (4 mm), therefore the resistance is low. The arm–leg (blood pressure) gradient is the difference between the blood pressure measured in the arms and that measured in the legs. It is normally less than 10 mm Hg, but may be increased in e.g. coarctation of the aorta. == Clinical significance == === Pressure monitoring === Hemodynamic monitoring is the observation of hemodynamic parameters over time, such as blood pressure and heart rate. Blood pressure can be monitored either invasively through an inserted blood pressure transducer assembly (providing continuous monitoring), or noninvasively by repeatedly measuring the blood pressure with an inflatable blood pressure cuff. Hypertension is diagnosed by the presence of arterial blood pressures of 140/90 or greater for two clinical visits. Pulmonary Artery Wedge Pressure can show if there is congestive heart failure, mitral and aortic valve disorders, hypervolemia, shunts, or cardiac tamponade. 
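The pressure-flow-resistance relations above can be combined in a short sketch; the cardiac output and right atrial pressure are assumed illustrative values, and the factor of 80 converts the resistance from mmHg·min/L to dyn·s·cm−5, the unit in which the normal range quoted earlier is expressed.

# Systemic vascular resistance and the flow = pressure / resistance relation.
MAP = 93.0     # mean arterial pressure, mmHg (assumed)
RAP = 3.0      # right atrial pressure (~ central venous pressure), mmHg (assumed)
CO  = 5.0      # cardiac output, L/min (assumed)

SVR = (MAP - RAP) / CO       # systemic vascular resistance, mmHg*min/L
SVR_dyn = SVR * 80.0         # the same resistance expressed in dyn*s*cm^-5
print(f"SVR = {SVR:.0f} mmHg*min/L = {SVR_dyn:.0f} dyn*s*cm^-5")

# Simplified form neglecting right atrial pressure: CO ~ MAP / SVR
print(f"CO estimate = {MAP / SVR:.1f} L/min")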
=== Remote, indirect monitoring of blood flow by laser Doppler === Noninvasive hemodynamic monitoring of eye fundus vessels can be performed by Laser Doppler holography, with near infrared light. The eye offers a unique opportunity for the non-invasive exploration of cardiovascular diseases. Laser Doppler imaging by digital holography can measure blood flow in the retina and choroid, whose Doppler responses exhibit a pulse-shaped profile with time This technique enables non invasive functional microangiography by high-contrast measurement of Doppler responses from endoluminal blood flow profiles in vessels in the posterior segment of the eye. Differences in blood pressure drive the flow of blood throughout the circulation. The rate of mean blood flow depends on both blood pressure and the hemodynamic resistance to flow presented by the blood vessels. == Glossary == ANH Acute Normovolemic Hemodilution ANHu Number of Units During ANH BLH Maximum Blood Loss Possible When ANH Is Used Before Homologous Blood Transfusion Is Needed BLI Incremental Blood Loss Possible with ANH.(BLH – BLs) BLs Maximum blood loss without ANH before homologous blood transfusion is required EBV Estimated Blood Volume(70 mL/kg) Hct Haematocrit Always Expressed Here As A Fraction Hi Initial Haematocrit Hm Minimum Safe Haematocrit PRBC Packed Red Blood Cell Equivalent Saved by ANH RCM Red cell mass. RCMH Cell Mass Available For Transfusion after ANH RCMI Red Cell Mass Saved by ANH SBL Surgical Blood Loss == Etymology and pronunciation == The word hemodynamics () uses combining forms of hemo- (which comes from the ancient Greek haima, meaning blood) and dynamics, thus "the dynamics of blood". The vowel of the hemo- syllable is variously written according to the ae/e variation. == Notes and references == == Bibliography == Berne RM, Levy MN. Cardiovascular physiology. 7th Ed Mosby 1997 Rowell LB. Human Cardiovascular Control. Oxford University press 1993 Braunwald E (Editor). Heart Disease: A Textbook of Cardiovascular Medicine. 5th Ed. W.B.Saunders 1997 Siderman S, Beyar R, Kleber AG. Cardiac Electrophysiology, Circulation and Transport. Kluwer Academic Publishers 1991 American Heart Association Otto CM, Stoddard M, Waggoner A, Zoghbi WA. Recommendations for Quantification of Doppler Echocardiography: A Report from the Doppler Quantification Task Force of the Nomenclature and Standards Committee of the American Society of Echocardiography. J Am Soc Echocardiogr 2002;15:167-184 Peterson LH, The Dynamics of Pulsatile Blood Flow, Circ. Res. 1954;2;127-139 Hemodynamic Monitoring, Bigatello LM, George E., Minerva Anestesiol, 2002 Apr;68(4):219-25 Claude Franceschi L'investigation vasculaire par ultrasonographie Doppler Masson 1979 ISBN Nr 2-225-63679-6 Claude Franceschi; Paolo Zamboni Principles of Venous Hemodynamics Nova Science Publishers 2009-01 ISBN Nr 1606924850/9781606924853 Claude Franceschi Venous Insufficiency of the pelvis and lower extremities-Hemodynamic Rationale WR Milnor: Hemodynamics, Williams & Wilkins, 1982 B Bo Sramek: Systemic Hemodynamics and Hemodynamic Management, 4th Edition, ESBN 1-59196-046-0 == External links == Learn hemodynamics
Wikipedia/Haemodynamics
In mathematics, an expression or equation is in closed form if it is formed with constants, variables, and a set of functions considered as basic and connected by arithmetic operations (+, −, ×, /, and integer powers) and function composition. Commonly, the basic functions that are allowed in closed forms are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context. For example, if one adds polynomial roots to the basic functions, the functions that have a closed form are called elementary functions. The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series, and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object; that is, an expression of this object in terms of previous ways of specifying it. == Example: roots of polynomials == The quadratic formula x = − b ± b 2 − 4 a c 2 a {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}} is a closed form of the solutions to the general quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth-roots and field operations ( + , − , × , / ) . {\displaystyle (+,-,\times ,/).} In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it has also a closed form that does not involve these functions. There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). The size of these expressions increases significantly with the degree, limiting their usefulness. In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and, thus, have no closed forms. A simple example is the equation x 5 − x − 1 = 0. {\displaystyle x^{5}-x-1=0.} Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals. == Symbolic integration == Symbolic integration consists essentially of the search of closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, exponential function and polynomial roots. Functions that have a closed form for these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions. The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative. For rational functions; that is, for fractions of two polynomial functions; antiderivatives are not always rational fractions, but are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. 
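As a concrete illustration of the point just made, a short SymPy session (assuming SymPy is available) integrates a rational function; the antiderivative is not a rational function, but it remains elementary, picking up logarithms exactly as partial fraction decomposition predicts.

import sympy as sp

x = sp.symbols('x')
antideriv = sp.integrate(1 / (x**2 - 1), x)   # antiderivative of a rational function
print(antideriv)                              # log(x - 1)/2 - log(x + 1)/2
print(sp.simplify(sp.diff(antideriv, x)))     # differentiating recovers 1/(x**2 - 1)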
The need for logarithms and polynomial roots is illustrated by the formula ∫ f ( x ) g ( x ) d x = ∑ α ∈ Roots ⁡ ( g ( x ) ) f ( α ) g ′ ( α ) ln ⁡ ( x − α ) , {\displaystyle \int {\frac {f(x)}{g(x)}}\,dx=\sum _{\alpha \in \operatorname {Roots} (g(x))}{\frac {f(\alpha )}{g'(\alpha )}}\ln(x-\alpha ),} which is valid if f {\displaystyle f} and g {\displaystyle g} are coprime polynomials such that g {\displaystyle g} is square free and deg ⁡ f < deg ⁡ g . {\displaystyle \deg f<\deg g.} == Alternative definitions == Changing the basic functions to include additional functions can change the set of equations with closed-form solutions. Many cumulative distribution functions cannot be expressed in closed form, unless one considers special functions such as the error function or gamma function to be basic. It is possible to solve the quintic equation if general hypergeometric functions are included, although the solution is far too complicated algebraically to be useful. For many practical computer applications, it is entirely reasonable to assume that the gamma function and other special functions are basic since numerical implementations are widely available. == Analytic expression == This is a term that is sometimes understood as a synonym for closed-form (see "Wolfram Mathworld".) but this usage is contested (see "Math Stackexchange".). It is unclear the extent to which this term is genuinely in use as opposed to the result of uncited earlier versions of this page. == Comparison of different classes of expressions == The closed-form expressions do not include infinite series or continued fractions; neither includes integrals or limits. Indeed, by the Stone–Weierstrass theorem, any continuous function on the unit interval can be expressed as a limit of polynomials, so any class of functions containing the polynomials and closed under limits will necessarily include all continuous functions. Similarly, an equation or system of equations is said to have a closed-form solution if and only if at least one solution can be expressed as a closed-form expression; and it is said to have an analytic solution if and only if at least one solution can be expressed as an analytic expression. There is a subtle distinction between a "closed-form function" and a "closed-form number" in the discussion of a "closed-form solution", discussed in (Chow 1999) and below. A closed-form or analytic solution is sometimes referred to as an explicit solution. == Dealing with non-closed-form expressions == === Transformation into closed-form expressions === The expression: f ( x ) = ∑ n = 0 ∞ x 2 n {\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {x}{2^{n}}}} is not in closed form because the summation entails an infinite number of elementary operations. However, by summing a geometric series this expression can be expressed in the closed form: f ( x ) = 2 x . {\displaystyle f(x)=2x.} === Differential Galois theory === The integral of a closed-form expression may or may not itself be expressible as a closed-form expression. This study is referred to as differential Galois theory, by analogy with algebraic Galois theory. The basic theorem of differential Galois theory is due to Joseph Liouville in the 1830s and 1840s and hence referred to as Liouville's theorem. A standard example of an elementary function whose antiderivative does not have a closed-form expression is: e − x 2 , {\displaystyle e^{-x^{2}},} whose one antiderivative is (up to a multiplicative constant) the error function: erf ⁡ ( x ) = 2 π ∫ 0 x e − t 2 d t . 
{\displaystyle \operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt.} === Mathematical modelling and computer simulation === Equations or systems too complex for closed-form or analytic solutions can often be analysed by mathematical modelling and computer simulation (for an example in physics, see). == Closed-form number == Three subfields of the complex numbers C have been suggested as encoding the notion of a "closed-form number"; in increasing order of generality, these are the Liouvillian numbers (not to be confused with Liouville numbers in the sense of rational approximation), EL numbers and elementary numbers. The Liouvillian numbers, denoted L, form the smallest algebraically closed subfield of C closed under exponentiation and logarithm (formally, intersection of all such subfields)—that is, numbers which involve explicit exponentiation and logarithms, but allow explicit and implicit polynomials (roots of polynomials); this is defined in (Ritt 1948, p. 60). L was originally referred to as elementary numbers, but this term is now used more broadly to refer to numbers defined explicitly or implicitly in terms of algebraic operations, exponentials, and logarithms. A narrower definition proposed in (Chow 1999, pp. 441–442), denoted E, and referred to as EL numbers, is the smallest subfield of C closed under exponentiation and logarithm—this need not be algebraically closed, and corresponds to explicit algebraic, exponential, and logarithmic operations. "EL" stands both for "exponential–logarithmic" and as an abbreviation for "elementary". Whether a number is a closed-form number is related to whether a number is transcendental. Formally, Liouvillian numbers and elementary numbers contain the algebraic numbers, and they include some but not all transcendental numbers. In contrast, EL numbers do not contain all algebraic numbers, but do include some transcendental numbers. Closed-form numbers can be studied via transcendental number theory, in which a major result is the Gelfond–Schneider theorem, and a major open question is Schanuel's conjecture. == Numerical computations == For purposes of numeric computations, being in closed form is not in general necessary, as many limits and integrals can be efficiently computed. Some equations have no closed form solution, such as those that represent the Three-body problem or the Hodgkin–Huxley model. Therefore, the future states of these systems must be computed numerically. == Conversion from numerical forms == There is software that attempts to find closed-form expressions for numerical values, including RIES, identify in Maple and SymPy, Plouffe's Inverter, and the Inverse Symbolic Calculator. 
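The distinction between a series representation and a closed form, and between elementary antiderivatives and special functions, can be checked with a computer algebra system. Below is a minimal sketch using SymPy; the symbol names and the two examples (the geometric series and the Gaussian integrand discussed above) are chosen for illustration only.

```python
import sympy as sp

x, n = sp.symbols('x n')

# The series sum_{n=0}^{oo} x / 2**n is not in closed form as written,
# but summing the geometric series yields the closed-form expression 2*x.
series_value = sp.summation(x / 2**n, (n, 0, sp.oo))
print(series_value)  # 2*x

# The elementary function exp(-x**2) has no elementary antiderivative;
# SymPy expresses the result using the (non-elementary) error function.
antiderivative = sp.integrate(sp.exp(-x**2), x)
print(antiderivative)  # sqrt(pi)*erf(x)/2
```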
== See also == Algebraic solution – Solution in radicals of a polynomial equation Computer simulation – Process of mathematical modelling, performed on a computer Elementary function – A kind of mathematical function Finitary operation – Addition, multiplication, division, ... Numerical solution – Methods for numerical approximations Liouvillian function – Elementary functions and their finitely iterated integrals Symbolic regression – Type of regression analysis Tarski's high school algebra problem – Mathematical problem Term (logic) – Components of a mathematical or logical formula Tupper's self-referential formula – Formula that visually represents itself when graphed == Notes == == References == == Further reading == Ritt, J. F. (1948), Integration in finite terms Chow, Timothy Y. (May 1999), "What is a Closed-Form Number?", American Mathematical Monthly, 106 (5): 440–448, arXiv:math/9805045, doi:10.2307/2589148, JSTOR 2589148 Jonathan M. Borwein and Richard E. Crandall (January 2013), "Closed Forms: What They Are and Why We Care", Notices of the American Mathematical Society, 60 (1): 50–65, doi:10.1090/noti936 == External links == Weisstein, Eric W. "Closed-Form Solution". MathWorld. Closed-form continuous-time neural networks
Wikipedia/Solution_in_closed_form
Astrophysical fluid dynamics is a branch of modern astronomy which deals with the motion of fluids in outer space using fluid mechanics, such as those that make up the Sun and other stars. The subject covers the fundamentals of fluid mechanics using various equations, such as continuity equations, the Navier–Stokes equations, and Euler's equations of collisional fluids. Some of the applications of astrophysical fluid dynamics include dynamics of stellar systems, accretion disks, astrophysical jets, Newtonian fluids, and the fluid dynamics of galaxies. == Introduction == Astrophysical fluid dynamics applies fluid dynamics and its equations to the movement of the fluids in space. The applications are different from regular fluid mechanics in that nearly all calculations take place in a vacuum with zero gravity. Most of the interstellar medium is not at rest, but is in supersonic motion due to supernova explosions, stellar winds, radiation fields and a time dependent gravitational field caused by spiral density waves in the stellar discs of galaxies. Since supersonic motions almost always involve shock waves, shock waves must be accounted for in calculations. The galaxy also contains a dynamically significant magnetic field, meaning that the dynamics are governed by the equations of compressible magnetohydrodynamics. In many cases, the electrical conductivity is large enough for the ideal MHD equations to be a good approximation, but this is not true in star forming regions where the gas density is high and the degree of ionization is low. === Star formation === An example problem is that of star formation. Stars form out of the interstellar medium, with this formation mostly occurring in giant molecular clouds such as the Rosette Nebula. An interstellar cloud can collapse due to its self-gravity if it is large enough; however, in the ordinary interstellar medium this can only happen if the cloud has a mass of several thousands of solar masses—much larger than that of any star. Stars may still form, however, from processes that occur if the magnetic pressure is much larger than the thermal pressure, which is the case in giant molecular clouds. These processes rely on the interaction of magnetohydrodynamic waves with a thermal instability. A magnetohydrodynamic wave in a medium in which the magnetic pressure is much larger than the thermal pressure can produce dense regions, but they cannot by themselves make the density high enough for self-gravity to act. However, the gas in star forming regions is heated by cosmic rays and is cooled by radiative processes. The net result is that a gas in a thermal equilibrium state in which heating balances cooling can exist in three different phases at the same pressure: a warm phase with a low density, an unstable phase with intermediate density and a cold phase at low temperature. An increase in pressure due to a supernova or a spiral density wave can shift the gas from the warm phase to the unstable phase, with a magnetohydrodynamic wave then being able to produce dense fragments in the cold phase whose self-gravity is strong enough for them to collapse into stars. == Basic concepts == === Concepts of fluid dynamics === Many regular fluid dynamics equations are used in astrophysical fluid dynamics. Some of these equations are: Continuity equations The Navier–Stokes equations Euler's equations Conservation of mass The continuity equation is an extension of conservation of mass to fluid flow. 
Consider a fluid flowing through a fixed-volume tank having one inlet and one outlet. If the flow is steady (no accumulation of fluid within the tank), then the rate of fluid flow at the entry must equal the rate of fluid flow at the exit for mass to be conserved. If, at an entry (or exit) of cross-sectional area A {\displaystyle A} (m2), a fluid parcel travels a distance d L {\displaystyle dL} in time d t {\displaystyle dt} , then the volume flow rate Q {\displaystyle Q} (m3 ⋅ {\displaystyle \cdot } s−1) is given by: Q = A ⋅ d L d t {\displaystyle Q=A\cdot {\frac {dL}{dt}}} but since d L d t {\displaystyle {\frac {dL}{dt}}} is the fluid velocity ( V {\displaystyle V} m ⋅ {\displaystyle \cdot } s−1) we can write: Q = V ⋅ A {\displaystyle Q=V\cdot A} The mass flow rate ( m {\displaystyle m} kg ⋅ {\displaystyle \cdot } s−1) is given by the product of density and volume flow rate m = ρ ⋅ Q = ρ ⋅ V ⋅ A {\displaystyle m=\rho \cdot Q=\rho \cdot V\cdot A} Because of conservation of mass, between two points in a flowing fluid we can write m 1 = m 2 {\displaystyle m_{1}=m_{2}} . This is equivalent to: ρ 1 V 1 A 1 = ρ 2 V 2 A 2 {\displaystyle \rho _{1}V_{1}A_{1}=\rho _{2}V_{2}A_{2}} If the fluid is incompressible, ( ρ 1 = ρ 2 {\displaystyle \rho _{1}=\rho _{2}} ) then: V 1 A 1 = V 2 A 2 {\displaystyle V_{1}A_{1}=V_{2}A_{2}} This result can be applied to many areas in astrophysical fluid dynamics, such as neutron stars. == References == == Further reading == Clarke, C.J. & Carswell, R.F. Principles of Astrophysical Fluid Dynamics, Cambridge University Press (2014) Introduction to Magnetohydrodynamics by P. A. Davidson, Cambridge University Press
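As a quick numerical illustration of the continuity relation derived above, the short script below applies ρ1V1A1 = ρ2V2A2 to a converging duct; the inlet values are made-up illustrative numbers, not data from the article.

```python
# Mass conservation for steady flow through a duct with one inlet and one outlet:
# rho1 * V1 * A1 = rho2 * V2 * A2  (V is the flow speed, A the cross-sectional area)

rho1 = 1000.0   # inlet density, kg/m^3 (illustrative value, roughly water)
V1 = 2.0        # inlet speed, m/s
A1 = 0.10       # inlet area, m^2
A2 = 0.05       # outlet area, m^2

mass_flow = rho1 * V1 * A1          # kg/s, conserved between inlet and outlet
rho2 = rho1                         # incompressible case: density unchanged
V2 = mass_flow / (rho2 * A2)        # outlet speed from rho2 * V2 * A2 = mass_flow

print(f"mass flow rate = {mass_flow:.1f} kg/s")
print(f"outlet speed   = {V2:.1f} m/s")   # halving the area doubles the speed
```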
Wikipedia/Astrophysical_fluid_dynamics
In fluid dynamics, lubrication theory describes the flow of fluids (liquids or gases) in a geometry in which one dimension is significantly smaller than the others. An example is the flow above air hockey tables, where the thickness of the air layer beneath the puck is much smaller than the dimensions of the puck itself. Internal flows are those where the fluid is fully bounded. Internal flow lubrication theory has many industrial applications because of its role in the design of fluid bearings. Here a key goal of lubrication theory is to determine the pressure distribution in the fluid volume, and hence the forces on the bearing components. The working fluid in this case is often termed a lubricant. Free film lubrication theory is concerned with the case in which one of the surfaces containing the fluid is a free surface. In that case, the position of the free surface is itself unknown, and one goal of lubrication theory is then to determine this. Examples include the flow of a viscous fluid over an inclined plane or over topography. Surface tension may be significant, or even dominant. Issues of wetting and dewetting then arise. For very thin films (thickness less than one micrometre), additional intermolecular forces, such as Van der Waals forces or disjoining forces, may become significant. == Theoretical basis == Mathematically, lubrication theory can be seen as exploiting the disparity between two length scales. The first is the characteristic film thickness, H {\displaystyle H} , and the second is a characteristic substrate length scale L {\displaystyle L} . The key requirement for lubrication theory is that the ratio ϵ = H / L {\displaystyle \epsilon =H/L} is small, that is, ϵ ≪ 1 {\displaystyle \epsilon \ll 1} . The Navier–Stokes equations (or Stokes equations, when fluid inertia may be neglected) are expanded in this small parameter, and the leading-order equations are then ∂ p ∂ z = 0 ∂ p ∂ x = μ ∂ 2 u ∂ z 2 {\displaystyle {\begin{aligned}{\frac {\partial p}{\partial z}}&=0\\[6pt]{\frac {\partial p}{\partial x}}&=\mu {\frac {\partial ^{2}u}{\partial z^{2}}}\end{aligned}}} where x {\displaystyle x} and z {\displaystyle z} are coordinates in the direction of the substrate and perpendicular to it respectively. Here p {\displaystyle p} is the fluid pressure, and u {\displaystyle u} is the fluid velocity component parallel to the substrate; μ {\displaystyle \mu } is the fluid viscosity. The equations show, for example, that pressure variations across the gap are small, and that those along the gap are proportional to the fluid viscosity. A more general formulation of the lubrication approximation would include a third dimension, and the resulting differential equation is known as the Reynolds equation. Further details can be found in the literature or in the textbooks given in the bibliography. == Applications == An important application area is lubrication of machinery components such as fluid bearings and mechanical seals. Coating is another major application area including the preparation of thin films, printing, painting and adhesives. Biological applications have included studies of red blood cells in narrow capillaries and of liquid flow in the lung and eye. == Notes == == References == Aksel, N.; Schörner M. (2018) "Films over topography: from creeping flow to linear stability, theory, and experiments, a review", Acta Mechanica 229: 1453–1482 doi:10.1007/s00707-018-2146-y Batchelor, G. K. (1976), An Introduction to Fluid Mechanics, Cambridge University Press. 
ISBN 978-0-521-09817-5. Hinton E. M.; Hogg A. J.; Huppert H. E. (2019), "Interaction of viscous free-surface flows with topography", Journal of Fluid Mechanics 876: 912–938 doi:10.1017/jfm.2019.588 Lister J. R. (1992) "Viscous flows down an inclined plane from point and line sources", Journal of Fluid Mechanics 242: 631–653. doi:10.1017/S0022112092002520 Panton, R. L. (2005), Incompressible Flow (3rd ed.), New York: Wiley. ISBN 978-0-471-26122-3. San Andres, L. (2010) MEEN334 Mechanical Systems Course Notes via Internet Archive
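The leading-order lubrication equations given above can be integrated directly across the gap. The sketch below assumes no-slip conditions u = 0 on a stationary wall at z = 0 and u = U on a wall at z = h moving with speed U; this illustrative Couette–Poiseuille setup, and the parameter values used, are assumptions for the example rather than a case treated in the article.

```python
import numpy as np

def lubrication_profile(z, h, U, dpdx, mu):
    """Velocity u(z) obtained by integrating dp/dx = mu * d^2u/dz^2 twice,
    with boundary conditions u(0) = 0 and u(h) = U:
        u(z) = (dpdx / (2*mu)) * z * (z - h) + U * z / h
    i.e. a pressure-driven (Poiseuille) part plus a wall-driven (Couette) part."""
    return (dpdx / (2.0 * mu)) * z * (z - h) + U * z / h

h = 1e-4          # film thickness, m (illustrative)
U = 0.5           # speed of the moving surface, m/s
mu = 0.1          # lubricant viscosity, Pa*s
dpdx = -2.0e5     # streamwise pressure gradient, Pa/m

z = np.linspace(0.0, h, 5)
print(lubrication_profile(z, h, U, dpdx, mu))
```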
Wikipedia/Lubrication_theory
In fluid dynamics and electrostatics, slender-body theory is a methodology that can be used to take advantage of the slenderness of a body to obtain an approximation to a field surrounding it and/or the net effect of the field on the body. Principal applications are to Stokes flow — at very low Reynolds numbers — and in electrostatics. == Theory for Stokes flow == Consider slender body of length ℓ {\displaystyle \ell } and typical diameter 2 a {\displaystyle 2a} with ℓ ≫ a {\displaystyle \ell \gg a} , surrounded by fluid of viscosity μ {\displaystyle \mu } whose motion is governed by the Stokes equations. Note that the Stokes' paradox implies that the limit of infinite aspect ratio ℓ / a → ∞ {\displaystyle \ell /a\rightarrow \infty } is singular, as no Stokes flow can exist around an infinite cylinder. Slender-body theory allows us to derive an approximate relationship between the velocity of the body at each point along its length and the force per unit length experienced by the body at that point. Let the axis of the body be described by X ( s , t ) {\displaystyle {\boldsymbol {X}}(s,t)} , where s {\displaystyle s} is an arc-length coordinate, and t {\displaystyle t} is time. By virtue of the slenderness of the body, the force exerted on the fluid at the surface of the body may be approximated by a distribution of Stokeslets along the axis with force density f ( s ) {\displaystyle {\boldsymbol {f}}(s)} per unit length. f {\displaystyle {\boldsymbol {f}}} is assumed to vary only over lengths much greater than a {\displaystyle a} , and the fluid velocity at the surface adjacent to X ( s , t ) {\displaystyle {\boldsymbol {X}}(s,t)} is well-approximated by ∂ X / ∂ t {\displaystyle \partial {\boldsymbol {X}}/\partial t} . The fluid velocity u ( x ) {\displaystyle {\boldsymbol {u}}({\boldsymbol {x}})} at a general point x {\displaystyle {\boldsymbol {x}}} due to such a distribution can be written in terms of an integral of the Oseen tensor (named after Carl Wilhelm Oseen), which acts as a Green's function for a single Stokeslet. We have u ( x ) = ∫ 0 ℓ f ( s ) 8 π μ ⋅ ( I | x − X | + ( x − X ) ( x − X ) | x − X | 3 ) d s {\displaystyle {\boldsymbol {u}}({\boldsymbol {x}})=\int _{0}^{\ell }{\frac {{\boldsymbol {f}}(s)}{8\pi \mu }}\cdot \left({\frac {\mathbf {I} }{|{\boldsymbol {x}}-{\boldsymbol {X}}|}}+{\frac {({\boldsymbol {x}}-{\boldsymbol {X}})({\boldsymbol {x}}-{\boldsymbol {X}})}{|{\boldsymbol {x}}-{\boldsymbol {X}}|^{3}}}\right)\,\mathrm {d} s} where I {\displaystyle \mathbf {I} } is the identity tensor. Asymptotic analysis can then be used to show that the leading-order contribution to the integral for a point x {\displaystyle {\boldsymbol {x}}} on the surface of the body adjacent to position s 0 {\displaystyle s_{0}} comes from the force distribution at | s − s 0 | = O ( a ) {\displaystyle |s-s_{0}|=O(a)} . Since a ≪ ℓ {\displaystyle a\ll \ell } , we approximate f ( s ) ≈ f ( s 0 ) {\displaystyle {\boldsymbol {f}}(s)\approx {\boldsymbol {f}}(s_{0})} . We then obtain ∂ X ∂ t ∼ ln ⁡ ( ℓ / a ) 4 π μ f ( s ) ⋅ ( I + X ′ X ′ ) {\displaystyle {\frac {\partial {\boldsymbol {X}}}{\partial t}}\sim {\frac {\ln(\ell /a)}{4\pi \mu }}{\boldsymbol {f}}(s)\cdot {\Bigl (}\mathbf {I} +{\boldsymbol {X}}'{\boldsymbol {X}}'{\Bigr )}} where X ′ = ∂ X / ∂ s {\displaystyle {\boldsymbol {X}}'=\partial {\boldsymbol {X}}/\partial s} . 
The expression may be inverted to give the force density in terms of the motion of the body: f ( s ) ∼ 4 π μ ln ⁡ ( ℓ / a ) ∂ X ∂ t ⋅ ( I − 1 2 X ′ X ′ ) {\displaystyle {\boldsymbol {f}}(s)\sim {\frac {4\pi \mu }{\ln(\ell /a)}}{\frac {\partial {\boldsymbol {X}}}{\partial t}}\cdot {\Bigl (}\mathbf {I} -\textstyle {\frac {1}{2}}{\boldsymbol {X}}'{\boldsymbol {X}}'{\Bigr )}} Two canonical results that follow immediately are for the drag force F {\displaystyle F} on a rigid cylinder (length ℓ {\displaystyle \ell } , radius a {\displaystyle a} ) moving a velocity u {\displaystyle u} either parallel to its axis or perpendicular to it. The parallel case gives F ∼ 2 π μ ℓ u ln ⁡ ( ℓ / a ) {\displaystyle F\sim {\frac {2\pi \mu \ell u}{\ln(\ell /a)}}} while the perpendicular case gives F ∼ 4 π μ ℓ u ln ⁡ ( ℓ / a ) {\displaystyle F\sim {\frac {4\pi \mu \ell u}{\ln(\ell /a)}}} with only a factor of two difference. Note that the dominant length scale in the above expressions is the longer length ℓ {\displaystyle \ell } ; the shorter length has only a weak effect through the logarithm of the aspect ratio. In slender-body theory results, there are O ( 1 ) {\displaystyle O(1)} corrections to the logarithm, so even for relatively large values of ℓ / a {\displaystyle \ell /a} the error terms will not be that small. == References == Batchelor, G. K. (1970), "Slender-body theory for particles of arbitrary cross-section in Stokes flow", J. Fluid Mech., 44 (3): 419–440, Bibcode:1970JFM....44..419B, doi:10.1017/S002211207000191X, S2CID 121986116 Cox, R. G. (1970), "The motion of long slender bodies in a viscous fluid. Part 1. General Theory", J. Fluid Mech., 44 (4): 791–810, Bibcode:1970JFM....44..791C, doi:10.1017/S002211207000215X, S2CID 118908560 Hinch, E. J. (1991), Perturbation Methods, Cambridge University Press, ISBN 978-0-521-37897-0
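The two drag formulas above are easy to evaluate numerically. The sketch below (with made-up illustrative parameter values) computes the parallel and perpendicular drag and shows how weakly the result depends on the aspect ratio through the logarithm.

```python
import math

def slender_body_drag(mu, length, radius, u):
    """Leading-order slender-body drag on a rigid cylinder moving at speed u.

    Parallel to its axis:      F ~ 2*pi*mu*L*u / ln(L/a)
    Perpendicular to its axis: F ~ 4*pi*mu*L*u / ln(L/a)
    Valid only for L >> a; O(1) corrections to the logarithm are neglected."""
    log_aspect = math.log(length / radius)
    f_parallel = 2.0 * math.pi * mu * length * u / log_aspect
    f_perpendicular = 4.0 * math.pi * mu * length * u / log_aspect
    return f_parallel, f_perpendicular

mu = 1.0e-3      # fluid viscosity, Pa*s (illustrative, roughly water)
u = 1.0e-5       # speed, m/s
length = 1.0e-5  # body length, m
for aspect in (10.0, 100.0, 1000.0):
    fp, fn = slender_body_drag(mu, length, length / aspect, u)
    print(f"L/a = {aspect:6.0f}:  F_parallel = {fp:.3e} N,  F_perp = {fn:.3e} N")
```

Note that the perpendicular drag is exactly twice the parallel drag at this order, and that a hundredfold change in aspect ratio changes the drag only through the logarithm.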
Wikipedia/Slender-body_theory
The viscous stress tensor is a tensor used in continuum mechanics to model the part of the stress at a point within some material that can be attributed to the strain rate, the rate at which it is deforming around that point. The viscous stress tensor is formally similar to the elastic stress tensor (Cauchy tensor) that describes internal forces in an elastic material due to its deformation. Both tensors map the normal vector of a surface element to the density and direction of the stress acting on that surface element. However, elastic stress is due to the amount of deformation (strain), while viscous stress is due to the rate of change of deformation over time (strain rate). In viscoelastic materials, whose behavior is intermediate between those of liquids and solids, the total stress tensor comprises both viscous and elastic ("static") components. For a completely fluid material, the elastic term reduces to the hydrostatic pressure. In an arbitrary coordinate system, the viscous stress ε and the strain rate E at a specific point and time can be represented by 3 × 3 matrices of real numbers. In many situations there is an approximately linear relation between those matrices; that is, a fourth-order viscosity tensor μ such that ε = μE. The tensor μ has four indices and consists of 3 × 3 × 3 × 3 real numbers (of which only 21 are independent). In a Newtonian fluid, by definition, the relation between ε and E is perfectly linear, and the viscosity tensor μ is independent of the state of motion or stress in the fluid. If the fluid is isotropic as well as Newtonian, the viscosity tensor μ will have only three independent real parameters: a bulk viscosity coefficient, that defines the resistance of the medium to gradual uniform compression; a dynamic viscosity coefficient that expresses its resistance to gradual shearing, and a rotational viscosity coefficient which results from a coupling between the fluid flow and the rotation of the individual particles.: 304  In the absence of such a coupling, the viscous stress tensor will have only two independent parameters and will be symmetric. In non-Newtonian fluids, on the other hand, the relation between ε and E can be extremely non-linear, and ε may even depend on other features of the flow besides E. == Definition == === Viscous versus elastic stress === Internal mechanical stresses in a continuous medium are generally related to deformation of the material from some "relaxed" (unstressed) state. These stresses generally include an elastic ("static") stress component, that is related to the current amount of deformation and acts to restore the material to its rest state; and a viscous stress component, that depends on the rate at which the deformation is changing with time and opposes that change. === The viscous stress tensor === Like the total and elastic stresses, the viscous stress around a certain point in the material, at any time, can be modeled by a stress tensor, a linear relationship between the normal direction vector of an ideal plane through the point and the local stress density on that plane at that point. In any chosen coordinate system with axes numbered 1, 2, 3, this viscous stress tensor can be represented as a 3 × 3 matrix of real numbers: ε ( P , t ) = [ ε 11 ε 12 ε 13 ε 21 ε 22 ε 23 ε 31 ε 32 ε 33 ] . 
{\displaystyle \varepsilon (P,t)={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}\end{bmatrix}}\,.} Note that these numbers usually change with the point P = ( x P , y P , z P ) . {\displaystyle P=(x_{P},y_{P},z_{P})\,.} and time t. Consider an infinitesimal flat surface element centered on the point p, represented by a vector dA whose length is the area of the element and whose direction is perpendicular to it. Let dF be the infinitesimal force due to viscous stress that is applied across that surface element to the material on the side opposite to dA. The components of dF along each coordinate axis are then given by d F i = ∑ j ε i j d A j . {\displaystyle dF_{i}=\sum _{j}\varepsilon _{ij}\,dA_{j}\,.} In any material, the total stress tensor σ is the sum of this viscous stress tensor ε, the elastic stress tensor τ and the hydrostatic pressure p. In a perfectly fluid material, that by definition cannot have static shear stress, the elastic stress tensor is zero: σ i j = − p δ i j + ε i j , {\displaystyle \sigma _{ij}=-p\delta _{ij}+\varepsilon _{ij}\,,} where δij is the unit tensor, such that δij is 1 if i = j and 0 if i ≠ j. While the viscous stresses are generated by physical phenomena that depend strongly on the nature of the medium, the viscous stress tensor ε is only a description the local momentary forces between adjacent parcels of the material, and not a property of the material. === Symmetry === Ignoring the torque on an element due to the flow ("extrinsic" torque), the viscous "intrinsic" torque per unit volume on a fluid element is written (as an antisymmetric tensor) as τ i j = ε i j − ε j i {\displaystyle \tau _{ij}=\varepsilon _{ij}-\varepsilon _{ji}} and represents the rate of change of intrinsic angular momentum density with time. If the particles have rotational degrees of freedom, this will imply an intrinsic angular momentum and if this angular momentum can be changed by collisions, it is possible that this intrinsic angular momentum can change in time, resulting in an intrinsic torque that is not zero, which will imply that the viscous stress tensor will have an antisymmetric component with a corresponding rotational viscosity coefficient. If the fluid particles have negligible angular momentum or if their angular momentum is not appreciably coupled to the external angular momentum, or if the equilibration time between the external and internal degrees of freedom is practically zero, the torque will be zero and the viscous stress tensor will be symmetric. External forces can result in an asymmetric component to the stress tensor (e.g. ferromagnetic fluids which can suffer torque by external magnetic fields). == Physical causes of viscous stress == In a solid material, the elastic component of the stress can be ascribed to the deformation of the bonds between the atoms and molecules of the material, and may include shear stresses. In a fluid, elastic stress can be attributed to the increase or decrease in the mean spacing of the particles, that affects their collision or interaction rate and hence the transfer of momentum across the fluid; it is therefore related to the microscopic thermal random component of the particles' motion, and manifests itself as an isotropic hydrostatic pressure stress. The viscous component of the stress, on the other hand, arises from the macroscopic mean velocity of the particles. 
It can be attributed to friction or particle diffusion between adjacent parcels of the medium that have different mean velocities. == The viscosity equation == === The strain rate tensor === In a smooth flow, the rate at which the local deformation of the medium is changing over time (the strain rate) can be approximated by a strain rate tensor E(p, t), which is usually a function of the point p and time t. With respect to any coordinate system, it can be expressed by a 3 × 3 matrix. The strain rate tensor E(p, t) can be defined as the derivative of the strain tensor e(p, t) with respect to time, or, equivalently, as the symmetric part of the gradient (derivative with respect to space) of the flow velocity vector v(p, t): E = ∂ e ∂ t = 1 2 ( ( ∇ v ) + ( ∇ v ) T ) , {\displaystyle E={\frac {\partial e}{\partial t}}={\frac {1}{2}}\left((\nabla v)+(\nabla v)^{\textsf {T}}\right)\,,} where ∇v denotes the velocity gradient. In Cartesian coordinates, ∇v is the Jacobian matrix, ( ∇ v ) i j = ∂ v i ∂ x j {\displaystyle (\nabla v)_{ij}={\frac {\partial v_{i}}{\partial x_{j}}}} and therefore E i j = ∂ e i j ∂ t = 1 2 ( ∂ v j ∂ x i + ∂ v i ∂ x j ) . {\displaystyle E_{ij}={\frac {\partial e_{ij}}{\partial t}}={\frac {1}{2}}\left({\frac {\partial v_{j}}{\partial x_{i}}}+{\frac {\partial v_{i}}{\partial x_{j}}}\right)\,.} Either way, the strain rate tensor E(p, t) expresses the rate at which the mean velocity changes in the medium as one moves away from the point p – except for the changes due to rotation of the medium about p as a rigid body, which do not change the relative distances of the particles and only contribute to the rotational part of the viscous stress via the rotation of the individual particles themselves. (These changes comprise the vorticity of the flow, which is the curl (rotational) ∇ × v of the velocity; which is also the antisymmetric part of the velocity gradient ∇v.) === General flows === The viscous stress tensor is only a linear approximation of the stresses around a point p, and does not account for higher-order terms of its Taylor series. However in almost all practical situations these terms can be ignored, since they become negligible at the size scales where the viscous stress is generated and affects the motion of the medium. The same can be said of the strain rate tensor E as a representation of the velocity pattern around p. Thus, the linear models represented by the tensors E and ε are almost always sufficient to describe the viscous stress and the strain rate around a point, for the purpose of modelling its dynamics. In particular, the local strain rate E(p, t) is the only property of the velocity flow that directly affects the viscous stress ε(p, t) at a given point. On the other hand, the relation between E and ε can be quite complicated, and depends strongly on the composition, physical state, and microscopic structure of the material. It is also often highly non-linear, and may depend on the strains and stresses previously experienced by the material that is now around the point in question. === General Newtonian media === A medium is said to be Newtonian if the viscous stress ε(p, t) is a linear function of the strain rate E(p, t), and this function does not otherwise depend on the stresses and motion of fluid around p. No real fluid is perfectly Newtonian, but many important fluids, including gases and water, can be assumed to be, as long as the flow stresses and strain rates are not too high. 
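The strain-rate tensor defined above is simply the symmetric part of the velocity gradient, which is straightforward to form numerically. A minimal sketch follows; the velocity-gradient entries are arbitrary illustrative numbers.

```python
import numpy as np

# Velocity gradient (nabla v)_{ij} = dv_i/dx_j at a point (illustrative values).
grad_v = np.array([[0.0, 2.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.1]])

# Strain-rate tensor: symmetric part of the velocity gradient.
E = 0.5 * (grad_v + grad_v.T)

# Antisymmetric part: local rigid-body rotation (related to the vorticity),
# which does not contribute to the symmetric part of the viscous stress.
W = 0.5 * (grad_v - grad_v.T)

print("E =\n", E)
print("W =\n", W)
print("trace(E) = div v =", np.trace(E))   # expansion rate of the fluid element
```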
In general, a linear relationship between two second-order tensors is a fourth-order tensor. In a Newtonian medium, specifically, the viscous stress and the strain rate are related by the viscosity tensor μ: ε i j = ∑ k l 2 μ i j k l E k l . {\displaystyle \varepsilon _{ij}=\sum _{kl}2{\boldsymbol {\mu }}_{ijkl}E_{kl}\,.} The viscosity coefficient μ is a property of a Newtonian material that, by definition, does not depend otherwise on v or σ. The strain rate tensor E(p, t) is symmetric by definition, so it has only six linearly independent elements. Therefore, the viscosity tensor μ has only 6 × 9 = 54 degrees of freedom rather than 81. In most fluids the viscous stress tensor too is symmetric, which further reduces the number of viscosity parameters to 6 × 6 = 36. == Shear and bulk viscous stress == Absent of rotational effects, the viscous stress tensor will be symmetric. As with any symmetric tensor, the viscous stress tensor ε can be expressed as the sum of a traceless symmetric tensor εs, and a scalar multiple εv of the identity tensor. In coordinate form, ε i j = ε i j v + ε i j s ε i j v = 1 3 δ i j ∑ k ε k k ε i j s = ε i j − 1 3 δ i j ∑ k ε k k . {\displaystyle {\begin{aligned}\varepsilon _{ij}&=\varepsilon _{ij}^{\text{v}}+\varepsilon _{ij}^{\text{s}}\\[3pt]\varepsilon _{ij}^{\text{v}}&={\frac {1}{3}}\delta _{ij}\sum _{k}\varepsilon _{kk}\\\varepsilon _{ij}^{\text{s}}&=\varepsilon _{ij}-{\frac {1}{3}}\delta _{ij}\sum _{k}\varepsilon _{kk}\,.\end{aligned}}} This decomposition is independent of the coordinate system and is therefore physically significant. The constant part εv of the viscous stress tensor manifests itself as a kind of pressure, or bulk stress, that acts equally and perpendicularly on any surface independent of its orientation. Unlike the ordinary hydrostatic pressure, it may appear only while the strain is changing, acting to oppose the change; and it can be negative. === The isotropic Newtonian case === In a Newtonian medium that is isotropic (i.e. whose properties are the same in all directions), each part of the stress tensor is related to a corresponding part of the strain rate tensor. ε v ( p , t ) = 2 μ v E v ( p , t ) , ε s ( p , t ) = 2 μ s E s ( p , t ) , {\displaystyle {\begin{aligned}\varepsilon ^{\text{v}}(p,t)&=2\mu ^{\text{v}}E^{\text{v}}(p,t)\,,\\\varepsilon ^{\text{s}}(p,t)&=2\mu ^{\text{s}}E^{\text{s}}(p,t)\,,\end{aligned}}} where Ev and Es are the scalar isotropic and the zero-trace parts of the strain rate tensor E, and μv and μs are two real numbers. Thus, in this case the viscosity tensor μ has only two independent parameters. The zero-trace part Es of E is a symmetric 3 × 3 tensor that describes the rate at which the medium is being deformed by shearing, ignoring any changes in its volume. Thus the zero-trace part εs of ε is the familiar viscous shear stress that is associated to progressive shearing deformation. It is the viscous stress that occurs in fluid moving through a tube with uniform cross-section (a Poiseuille flow) or between two parallel moving plates (a Couette flow), and resists those motions. The part Ev of E acts as a scalar multiplier (like εv), the average expansion rate of the medium around the point in question. (It is represented in any coordinate system by a 3 × 3 diagonal matrix with equal values along the diagonal.) 
It is numerically equal to ⁠1/3⁠ of the divergence of the velocity ∇ ⋅ v = ∑ k ∂ v k ∂ x k , {\displaystyle \nabla \cdot v=\sum _{k}{\frac {\partial v_{k}}{\partial x_{k}}}\,,} which in turn is the relative rate of change of volume of the fluid due to the flow. Therefore, the scalar part εv of ε is a stress that may be observed when the material is being compressed or expanded at the same rate in all directions. It is manifested as an extra pressure that appears only while the material is being compressed, but (unlike the true hydrostatic pressure) is proportional to the rate of change of compression rather the amount of compression, and vanishes as soon as the volume stops changing. This part of the viscous stress, usually called bulk viscosity or volume viscosity, is often important in viscoelastic materials, and is responsible for the attenuation of pressure waves in the medium. Bulk viscosity can be neglected when the material can be regarded as incompressible (for example, when modeling the flow of water in a channel). The coefficient μv, often denoted by η, is called the coefficient of bulk viscosity (or "second viscosity"); while μs is the coefficient of common (shear) viscosity. == See also == Vorticity equation Navier–Stokes equations == References ==
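The decomposition of a symmetric viscous stress tensor into its isotropic (bulk) part and its traceless (shear) part described above can be sketched in a few lines of NumPy; the tensor entries are arbitrary illustrative numbers.

```python
import numpy as np

# A symmetric viscous stress tensor at a point (illustrative values, Pa).
eps = np.array([[3.0, 1.0, 0.0],
                [1.0, 2.0, 0.5],
                [0.0, 0.5, 1.0]])

identity = np.eye(3)

# Isotropic (bulk) part: (1/3) * trace(eps) * delta_ij
eps_v = (np.trace(eps) / 3.0) * identity

# Traceless (shear) part: the remainder
eps_s = eps - eps_v

print("bulk part:\n", eps_v)
print("shear part:\n", eps_s)
print("trace of shear part:", np.trace(eps_s))   # ~0 by construction
```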
Wikipedia/Viscous_stress_tensor
The following outline is provided as an overview and topical guide to space science: Space science – field that encompasses all of the scientific disciplines that involve space exploration and study natural phenomena and physical bodies occurring in outer space, such as space medicine and astrobiology. == Branches of space sciences == === Astronomy === See astronomical object for a list of specific types of entities which scientists study. See Earth's location in the universe for an orientation. Subfields of astronomy: Astrophysics – branch of astronomy that deals with the physics of the universe, including the physical properties of celestial objects, as well as their interactions and behavior. Among the objects studied are galaxies, stars, planets, exoplanets, the interstellar medium and the cosmic microwave background; and the properties examined include luminosity, density, temperature, and chemical composition. The subdisciplines of theoretical astrophysics are: Computational astrophysics – The study of astrophysics using computational methods and tools to develop computational models. Plasma astrophysics – studies properties of plasma in outer space. Space physics – study of plasmas as they occur naturally in the Earth's upper atmosphere (aeronomy) and within the Solar System. Solar physics – Sun and its interaction with the remainder of the Solar System and interstellar space. Stellar astronomy – concerned with Star formation, physical properties, main sequence life span, variability, stellar evolution and extinction. Galactic astronomy – deals with the structure and components of our galaxy and of other galaxies. Extragalactic astronomy – study of objects (mainly galaxies) outside our galaxy, including Galaxy formation and evolution. Cosmology Physical cosmology – origin and evolution of the universe as a whole. The study of cosmology is theoretical astrophysics at its largest scale. Chemical cosmology - study of the chemical composition of matter in the universe and the processes that led to those compositions. Quantum cosmology – the study of cosmology through the use of quantum field theory to explain phenomena general relativity cannot due to limitations in its framework. Planetary Science – study of planets, moons, and planetary systems. Atmospheric science – study of atmospheres and weather. Planetary geology Planetary oceanography Exoplanetology – various planets outside of the Solar System Astrochemistry – studies the abundance and reactions of molecules in the Universe, and their interaction with radiation. Interdisciplinary studies of astronomy: Astrobiology – studies the advent and evolution of biological systems in the universe. Space biology – studies to build a better understanding of how spaceflight affects living systems in spacecraft, or in ground-based experiments that mimic aspects of spaceflight Space chemistry – Reactions of elements to form more complex compounds, such as amino acids, are key to the study of chemistry in space. Astrobotany – Sub-discipline of botany that is the study of plants in space environments. Archaeoastronomy – studies ancient or traditional astronomies in their cultural context, utilizing archaeological and anthropological evidence. Space archaeology – the study of human artifacts in outer space Forensic astronomy – the use of astronomy, the scientific study of celestial objects, to determine the appearance of the sky at specific times in the past. 
Techniques used in astronomical research: Theoretical astronomy – mathematical modelling of celestial entities and phenomena Astrometry – study of the position of objects in the sky and their changes of position. Defines the system of coordinates used and the kinematics of objects in our galaxy. Photometry – study of how bright celestial objects are when passed through different filters Spectroscopy – study of the spectra of astronomical objects Observational astronomy – practice of observing celestial objects by using telescopes and other astronomical apparatus. Observatories on the ground as well as space observatories take measurements of celestial entities and phenomena. It is concerned with recording data. The subdisciplines of observational astronomy are generally made by the specifications of the detectors: Radio astronomy – >300 μm Submillimetre astronomy – 200 μm to 1 mm Infrared astronomy – 0.7–350 μm Optical astronomy – 380–750 nm Ultraviolet astronomy – 10–320 nm High-energy astronomy Cosmic ray astronomy - charged particles with very high kinetic energy X-ray astronomy – 0.01–10 nm Gamma-ray astronomy – <0.01 nm Neutrino astronomy – Neutrinos Gravitational wave astronomy – Gravitons === Astronautics === The science and engineering of spacefaring and spaceflight, a subset of Aerospace engineering (which includes atmospheric flight) Space technology is technology for use in outer space, in travel or other activities beyond Earth's atmosphere, for purposes such as spaceflight, space exploration, and Earth observation. Spaceflight Human spaceflight Outline of space exploration Space architecture Life-support system Space station Space Habitation Module Life in space Bioastronautics Animals in space Microorganisms tested in outer space Plants in space Humans in space Women in space Effect of spaceflight on the human body Sleep in space Food in space Medicine in space Neuroscience in space Writing in space == See also == Space Sciences Laboratory – research facility at the University of California, Berkeley Space-based economy – Economic activity in space Commercial use of space – Economic activities related to space Space manufacturing – Production of manufactured goods in an environment outside a planetary atmosphere Space tourism – Human space travel for recreation Space warfare – Combat that takes place in outer space Alien invasion – Common theme in science fiction stories and film Asteroid-impact avoidance – Methods to prevent destructive asteroid hits Space law – Area of national and international law governing activities in outer space Remote sensing – Obtaining information through non-contact sensors Planetarium – Theatre that presents educational and entertaining shows about astronomy Centennial Challenges – NASA space competition inducement prize contests Space and survival – Idea that spacefaring is necessary for long-term human survival Space colonization – Concept of permanent human habitation outside of Earth Space industry – Activities related to manufacturing components that go into Earth's orbit or beyond Timeline of artificial satellites and space probes Batteries in space Control engineering – Engineering discipline that deals with control systems Corrosion in space – Corrosion of materials occurring in outer space Nuclear power in space – Space exploration using nuclear energy Observatories in space –
Instrument in space to study astronomical objects Orbital mechanics – Field of classical mechanics concerned with the motion of spacecraft Robotic spacecraft – Spacecraft without people on board Space environment – Study of how space conditions affect spacecraft Space logistics – Logistics for space travel Space technology – Technology developed for use in Space exploration Space-based radar – Use of radar systems mounted on satellites Space-based solar power – Concept of collecting solar power in outer space and distributing it to Earth Spacecraft design – for launch vehicles and satellites Spacecraft propulsion – Method used to accelerate spacecraft == References == == External links == Institute of Space Technology, Pakistan Space Sciences @ NASA Space Sciences @ ESA INDIAN INSTITUTE OF SPACE SCIENCE AND TECHNOLOGY Space Sciences Institute Space Science & Technology (in Persian) – an Iranian nongovernmental group who writes scientific articles about Space Science & Technology
Wikipedia/Space_science
Heliophysics (from the prefix "helio", from Attic Greek hḗlios, meaning Sun, and the noun "physics": the science of matter and energy and their interactions) is the physics of the Sun and its connection with the Solar System. NASA defines heliophysics as "(1) the comprehensive new term for the science of the Sun - Solar System Connection, (2) the exploration, discovery, and understanding of Earth's space environment, and (3) the system science that unites all of the linked phenomena in the region of the cosmos influenced by a star like our Sun." Heliophysics is broader than Solar physics, that studies the Sun itself, including its interior, atmosphere, and magnetic fields. It concentrates on the Sun's effects on Earth and other bodies within the Solar System, as well as the changing conditions in space. It is primarily concerned with the magnetosphere, ionosphere, thermosphere, mesosphere, and upper atmosphere of the Earth and other planets. Heliophysics combines the science of the Sun, corona, heliosphere and geospace, and encompasses a wide variety of astronomical phenomena, including "cosmic rays and particle acceleration, space weather and radiation, dust and magnetic reconnection, nuclear energy generation and internal solar dynamics, solar activity and stellar magnetic fields, aeronomy and space plasmas, magnetic fields and global change", and the interactions of the Solar System with the Milky Way Galaxy. == History and etymology == Term "heliophysics" (Russian: гелиофизика) was widely used in Russian-language scientific literature. The Great Soviet Encyclopedia third edition (1969–1978) defines "Heliophysics" as "[…] a division of astrophysics that studies physics of the Sun". In 1990, the Higher Attestation Commission, responsible for the advanced academic degrees in Soviet Union and later in Russia and the Former Soviet Union, established a new specialty “Heliophysics and physics of solar system”. In English-language scientific literature prior to about 2001, the term heliophysics was sporadically used to describe the study of the "physics of the Sun". As such it was a direct translation from the French "héliophysique" and the Russian "гелиофизика". In 2001, Joseph M. Davila, Nat Gopalswamy and Barbara J. Thompson at NASA's Goddard Space Flight Center adopted the term in their preparations of what became known as the International Heliophysical Year (2007–2008), following 50 years after the International Geophysical Year; in adopting the term for this purpose, they expanded its meaning to encompass the entire domain of influence of the Sun (the heliosphere). As an early advocate of the newly expanded meaning, George Siscoe offered the following characterization: "Heliophysics [encompasses] environmental science, a unique hybrid between meteorology and astrophysics, comprising a body of data and a set of paradigms (general laws—perhaps mostly still undiscovered) specific to magnetized plasmas and neutrals in the heliosphere interacting with themselves and with gravitating bodies and their atmospheres." Around mid-2006, Richard R. Fisher, then Director of the Sun-Earth Connections Division of NASA's Science Mission Directorate, was challenged by the NASA administrator to come up with a concise new name for his division that "had better end on 'physics'". He proposed "Heliophysics Science Division", which has been in use since then. 
The Heliophysics Science Division uses the term "heliophysics" to denote the study of the heliosphere and the objects that interact with it – most notably planetary atmospheres and magnetospheres, the solar corona, and the interstellar medium. Heliophysical research connects directly to a broader web of physical processes that naturally expand its reach beyond NASA's narrower view that limits it to the Solar System: heliophysics reaches from solar physics out to stellar physics in general, and involves several branches of nuclear physics, plasma physics, space physics and magnetospheric physics. The science of heliophysics lies at the foundation of the study of space weather, and is also directly involved in understanding planetary habitability. == Background == The Sun is an active star, and Earth is located within its atmosphere, so there is a dynamic interaction. The Sun' light influences all life and processes on Earth; it is an energy provider that allows and sustains life on Earth. However, the Sun also produces streams of high energy particles known as the solar wind, and radiation that can harm life or alter its evolution. Under the protective shield of Earth's magnetic field and its atmosphere, Earth can be seen as an island in the universe where life has developed and flourished. The intertwined response of the Earth and heliosphere are studied because the planet is immersed in this unseen environment. Above the protective cocoon of Earth's lower atmosphere is a plasma soup composed of electrified and magnetized matter entwined with penetrating radiation and energetic particles. Modern technologies are susceptible to the extremes of space weather — severe disturbances of the upper atmosphere and of the near-Earth space environment that are driven by the magnetic activity of the Sun. Strong electrical currents driven in the Earth's surface during auroral events can disrupt and damage modern electric power grids and may contribute to the corrosion of oil and gas pipelines. == Heliophysics research program == Methods have been developed to see into the internal workings of the Sun and understand how the Earth's magnetosphere responds to solar activity. Further studies are concerned with exploring the full system of complex interactions that characterize the relationship of the Sun with the Solar System. There are three primary objectives that define the multi-decadal studies: To understand the changing flow of energy and matter throughout the Sun, heliosphere, and planetary environments. To explore the fundamental physical processes of space plasma systems. To define the origins and societal impacts of variability in the Earth-Sun system. === Heliosphere === Plasmas and their embedded magnetic fields affect the formation and evolution of planets and planetary systems. The heliosphere shields the Solar System from galactic cosmic radiation. Earth is shielded by its magnetic field, protecting it from solar and cosmic particle radiation and from erosion of the atmosphere by the solar wind. Planets without a shielding magnetic field, such as Mars and Venus, are exposed to those processes and evolve differently. On Earth, the magnetic field changes strength and configuration during its occasional polarity reversals, altering the shielding of the planet from external radiation sources. === Magnetospheres === Determine changes in the Earth's magnetosphere, ionosphere, and upper atmosphere in order to enable specification, prediction, and mitigation of their effects. 
Heliophysics seeks to develop an understanding of the response of the near-Earth plasma regions to space weather. This complex, highly coupled system protects Earth from the worst solar disturbances while redistributing energy and mass throughout. == See also == == References == == External links == NASA Heliophysics Heliophysics Integrated Observatory NASA video: Understanding The Sun – The Heliophysics Program NASA video: Introduction to Heliophysics American Geophysical Union video: Heliophysics and the Weather in Space Principles Of Heliophysics: a textbook on the universal processes behind planetary habitability by Karel Schrijver et al NASA Heliophysics textbooks NASA-funded Summer Schools
Wikipedia/Heliophysics
The Evolution of Physics: The Growth of Ideas from Early Concepts to Relativity and Quanta is a science book for the lay reader. Written by the physicists Albert Einstein and Leopold Infeld, it traces the development of ideas in physics. It was originally published in 1938 by Cambridge University Press. It was a popular success, and was featured in a Time cover story. == Background == Einstein agreed to write the book partly as a way to help Infeld financially. Infeld collaborated briefly in Cambridge with Max Born, before moving to Princeton, where he worked with Einstein at the Institute for Advanced Study. Einstein tried to get Infeld a permanent position there, but failed. Infeld came up with a plan to write a history of physics with Einstein, which was sure to be successful, and split the royalties. When he went to Einstein to pitch the idea, Infeld became incredibly tongue-tied, but he was finally able to stammer out his proposal. “This is not at all a stupid idea,” Einstein said. "Not stupid at all. We shall do it." The book was published by Simon & Schuster. == Perspective == In the book, Albert Einstein pushed his realist approach to physics in defiance of much of quantum mechanics. Belief in an “objective reality,” the book argued, had led to great scientific advances throughout the ages, thus proving that it was a useful concept even if not provable. The authors conclude: Without the belief that it is possible to grasp reality with our theoretical constructions, without the belief in the inner harmony of our world, there could be no science. This belief is and always will remain the fundamental motive for all scientific creation. In addition, Einstein used the text to defend the utility of field theories amid the advances of quantum mechanics. The best way to do that was to view particles not as independent objects but as a special manifestation of the field itself: "Could we not reject the concept of matter and build a pure field physics? We could regard matter as the regions in space where the field is extremely strong. A thrown stone is, from this point of view, a changing field in which the states of the greatest field intensity travel through space with the velocity of the stone." == Contents == The book has four chapters: "The Rise of The Mechanical View", "The Decline of the Mechanical View", "Field, Relativity" and "Quanta". === Chapter 1: The Rise of The Mechanical View === The authors liken science to a detective story: "In nearly every detective novel since the admirable stories of Conan Doyle there comes a time where the investigator has collected all the facts he needs for at least some phase of his problem ... The scientist reading the book of nature, if we may be allowed to repeat the trite phrase, must find the solution for himself, for he cannot, as impatient readers of other stories often do, turn to the end of the book. In our case the reader is also the investigator, seeking to explain, at least in part, the relation of events to their rich context. To explain even a partial solution the scientist must collect the unordered facts available and make them coherent and understandable by creative thought." "The first clue" the authors examine is Galileo's law of inertia, codified by Isaac Newton: "Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed therein." A further clue, returned to later, is Galileo's discovery of the equivalence principle. 
The authors discuss the kinetic theory of matter and how it solves the mystery of Brownian motion. === Chapter 2: The Decline of The Mechanical View === The authors discuss investigations of electricity by Charles Augustin de Coulomb, Luigi Galvani, Alessandro Volta and Hans Christian Ørsted. Newton's corpuscular theory of light is introduced and contrasted with Christiaan Huygens's wave theory. There is a Socratic dialogue between a supporter of the corpuscular theory and a supporter of the wave theory. It was thought that light must have a medium to travel through, the luminiferous aether, but attempts to detect it yielded null results. They conclude by asking: "what is the medium through which light spreads and what are its mechanical properties? There is no hope of reducing the optical phenomena to the mechanical ones before this question is answered. But the difficulties in solving this problem are so great that we have to give it up and thus give up the mechanical view as well." === Chapter 3: Field, Relativity === The authors examine lines of force starting with gravitational fields (i.e., a physical collection of forces), moving on to descriptions of electric and magnetic fields. They state the "Two Pillars of the Field Theory": "The change of an electric field is accompanied by a magnetic field. If we interchange the words 'magnetic' and 'electric' our sentence reads: 'The change of a magnetic field is accompanied by an electric field.'" They describe work of Oersted and Michael Faraday: "We have already seen, from Oersted's experiment, how a magnetic field coils itself around a changing electric field. We have seen, from Faraday's experiment, how a changing electric field coils itself around a changing magnetic field." This was explained by James Clerk Maxwell's field theory. Maxwell's equations predict the existence of electromagnetic waves, the existence of which was confirmed by Heinrich Hertz. Maxwell predicted these waves should travel at the speed of light, indicating that light is an electromagnetic wave. The authors discuss the Michelson–Morley experiment, which established that the speed of light is a universal constant. They define a co-ordinate system (CS) and discuss the "new assumptions leading to special relativity: "1. The velocity of light in vacuo is the same in all CS moving uniformly, relative to each other. 2. All laws of nature are the same in all CS moving uniformly, relative to each other." They discuss the early tests of special relativity. Mass–energy equivalence is discussed. The equivalence principle is returned to, leading to the general theory of relativity. They discuss thought experiments that led to the theory, such as a free-falling elevator and a rotating disc. The gravitational lensing of light by a massive body is discussed, as is the precession of the perihelion of Mercury, a mystery explained by Einstein's theory. === Chapter 4: Quanta === The authors discuss the atomic theory and J. J. Thomson's discovery of the electron, the quanta of electricity and a constituent of the atom. Max Planck's concept of energy quanta is introduced. The photoelectric effect is discussed, and explained in terms of light quanta, or photons. Niels Bohr's model of the atom is discussed, as are Erwin Schrodinger and Louis de Broglie's matter waves as is the probabilistic nature of quantum mechanics. 
Einstein, while impressed by the experimental success of quantum theory, maintained a belief in an objective reality: "Throughout all our efforts, in every dramatic struggle between old and new views, we recognize the eternal longing for understanding, the ever-firm belief in the harmony of our world, continually strengthened by the increasing obstacles to comprehension." == Reception == The New York Times reviewed the book favorably, noting that Einstein and Infeld "write with remarkable simplicity and clarity but not much literary art. Perhaps it is just as well. Though not a single mathematical equation appears to frighten away the man who has forgotten everything but his multiplication tables we miss the turn of phrase, the poetic analogies that elevate the writings of Jeans and Eddington to the rank of literature. Both Jeans and Eddington have been the target of critical machine guns—Jeans for his God is a mathematician and Eddington for his mysticism ... This book testifies that [Einstein] is still the clearest and simplest exploiter of his own theories." J. A. Crowther, in Nature, wrote: "If, as Prof. Einstein and his co-author claim, 'Physics is a creation of the human mind, with freely invented ideas and concepts', it is this intellectual content which gives to physics one of its chief claims to cultural significance, and provides for the thoughtful non-technical reader his main source of interest in it. It is with this aspect of the subject that the authors are concerned in this very distinguished book." === Partial list of reviews === Booklist v. 34 (Apr. 15 1938). New York Herald Tribune (May 8, 1938). The Boston Transcript (Apr. 30 1938). The Open Shelf (Mar. 1938). Commonweal v. 28 (July 8, 1938). Manchester Guardian (Apr. 12 1938). The Nation v. 146 (May 7, 1938). Nature v. 141 (May 21, 1938). The New Republic v. 94 (Apr. 20 1938). New Technical Books v. 23 (Apr. 1938). Pratt Institute Quarterly List of New Technical and Industry Books (winter 1939). Saturday Review of Literature v. 17 (Apr. 2 1938). Scientific Book Club Review v. 9 (Mar. 1938). Spectator v. 161 (Aug. 26 1938). Springfield Republican (July 3, 1938). Survey Graphic v. 27 (Dec. 1938). The Times Literary Supplement (Apr. 9 1938). The Yale Review v. 27 (summer 1938). == See also == Relativity: The Special and General Theory (1916), an overview of Special and General Relativity by Einstein The Physical Principles of the Quantum Theory (1930), lectures on quantum mechanics by Werner Heisenberg The Principles of Quantum Mechanics (1930), monograph on quantum theory by Paul Dirac The Feynman Lectures on Physics (1964), lectures by Richard Feynman The Road to Reality (2004), overview of physics by Roger Penrose == References == === Bibliography === Einstein, Einstein; Infeld, Leopold (1938). Snow, C.P. (ed.). The Evolution of Physics. Cambridge University Press. ASIN B000S52QZ4. The Evolution of Physics from Early Concepts to Relativity and Quanta, Albert Einstein & Leopold Infeld, 1966, Simon & Schuster, ASIN: B0011Z6VBK The Evolution of Physics, Albert Einstein & Leopold Infeld, 1967, Touchstone. ISBN 0-671-20156-5 == External links == Free book download on the right of the page, different formats (download at June 05, 2016)
Wikipedia/The_Evolution_of_Physics
The annus mirabilis papers (from Latin: annus mirabilis, lit. 'miraculous year') are four papers that Albert Einstein published in the scientific journal Annalen der Physik (Annals of Physics) in 1905. As major contributions to the foundation of modern physics, these scientific publications were the ones for which he gained fame among physicists. They revolutionized science's understanding of the fundamental concepts of space, time, mass, and energy. The first paper explained the photoelectric effect, which established the energy of the light quanta E = h f {\displaystyle E=hf} , and was the only specific discovery mentioned in the citation awarding Einstein the 1921 Nobel Prize in Physics. The second paper explained Brownian motion, which established the Einstein relation D = μ k B T {\displaystyle D=\mu \,k_{\text{B}}T} and compelled physicists to accept the existence of atoms. The third paper introduced Einstein's special theory of relativity, which proclaims the constancy of the speed of light c {\displaystyle c} and derives the Lorentz transformations. Einstein also examined relativistic aberration and the transverse Doppler effect. The fourth, a consequence of special relativity, developed the principle of mass–energy equivalence, expressed in the equation E = m c 2 {\displaystyle E=mc^{2}} and which led to the discovery and use of nuclear power decades later. These four papers, together with quantum mechanics and Einstein's later general theory of relativity, are the foundation of modern physics. == Background == At the time the papers were written, Einstein did not have easy access to a complete set of scientific reference materials, although he did regularly read and contribute reviews to Annalen der Physik. Additionally, scientific colleagues available to discuss his theories were few. He worked as an examiner at the Patent Office in Bern, Switzerland, and he later said of a co-worker there, Michele Besso, that he "could not have found a better sounding board for my ideas in all of Europe". In addition, co-workers and the other members of the self-styled "Olympia Academy" (Maurice Solovine and Conrad Habicht) and his wife, Mileva Marić, had some influence on Einstein's work, but how much is unclear. Through these papers, Einstein tackled some of the era's most important physics questions and problems. In 1900, Lord Kelvin, in a lecture titled "Nineteenth-Century Clouds over the Dynamical Theory of Heat and Light", suggested that physics had no satisfactory explanations for the results of the Michelson–Morley experiment and for black body radiation. As introduced, special relativity provided an account for the results of the Michelson–Morley experiments. Einstein's explanation of the photoelectric effect extended the quantum theory which Max Planck had developed in his successful explanation of black-body radiation. Despite the greater fame achieved by his other works, such as that on special relativity, it was his work on the photoelectric effect that won him his Nobel Prize in 1921. The Nobel committee had waited patiently for experimental confirmation of special relativity; however, none was forthcoming until the time dilation experiments of Ives and Stilwell (1938 and 1941) and Rossi and Hall (1941). 
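As a rough numerical illustration of the Einstein relation quoted above, the sketch below (Python) computes a diffusion coefficient D = μ k_B T and the resulting root-mean-square displacement of a suspended bead. The temperature, the water viscosity, and the bead radius are assumed illustrative values, and the Stokes-drag mobility is an extra assumption not taken from the article.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K

# Illustrative inputs, not taken from the article:
T      = 293.0              # room temperature, K (assumed)
eta    = 1.0e-3             # viscosity of water, Pa*s (assumed)
radius = 0.5e-6             # radius of a suspended bead, m (assumed, roughly Perrin-sized)

# Mobility of a small sphere from Stokes drag (an assumption layered on top of
# the paper's result; the Einstein relation itself is D = mu * k_B * T).
mu = 1.0 / (6.0 * math.pi * eta * radius)   # s/kg

D = mu * k_B * T                            # diffusion coefficient, m^2/s

# One-dimensional mean squared displacement after time t: <x^2> = 2 D t
t = 1.0                                     # s
rms = math.sqrt(2.0 * D * t)

print(f"D = {D:.3e} m^2/s")
print(f"rms displacement after {t:.0f} s = {rms * 1e6:.2f} micrometres")
# About one micrometre: large enough to follow under an ordinary microscope,
# which is how Perrin's experiments made atoms countable.
```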
== Papers == === Photoelectric effect === The article "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" ("On a Heuristic Viewpoint Concerning the Production and Transformation of Light") received 18 March and published 9 June, proposed the idea of energy quanta. This idea, motivated by Max Planck's earlier derivation of the law of black-body radiation (which was preceded by the discovery of Wien's displacement law, by Wilhelm Wien, several years prior to Planck) assumes that luminous energy can be absorbed or emitted only in discrete amounts, called quanta. Einstein states, Energy, during the propagation of a ray of light, is not continuously distributed over steadily increasing spaces, but it consists of a finite number of energy quanta localised at points in space, moving without dividing and capable of being absorbed or generated only as entities. In explaining the photoelectric effect, the hypothesis that energy consists of discrete packets, as Einstein illustrates, can be directly applied to black bodies, as well. The idea of light quanta contradicts the wave theory of light that follows naturally from Maxwell's equations for electromagnetic behavior and, more generally, the assumption of infinite divisibility of energy in physical systems. A profound formal difference exists between the theoretical concepts that physicists have formed about gases and other ponderable bodies, and Maxwell's theory of electromagnetic processes in so-called empty space. While we consider the state of a body to be completely determined by the positions and velocities of an indeed very large yet finite number of atoms and electrons, we make use of continuous spatial functions to determine the electromagnetic state of a volume of space, so that a finite number of quantities cannot be considered as sufficient for the complete determination of the electromagnetic state of space. ... [this] leads to contradictions when applied to the phenomena of emission and transformation of light. According to the view that the incident light consists of energy quanta ... the production of cathode rays by light can be conceived in the following way. The body's surface layer is penetrated by energy quanta whose energy is converted at least partially into kinetic energy of the electrons. The simplest conception is that a light quantum transfers its entire energy to a single electron ... Einstein noted that the photoelectric effect depended on the wavelength, and hence the frequency of the light. At too low a frequency, even intense light produced no electrons. However, once a certain frequency was reached, even low intensity light produced electrons. He compared this to Planck's hypothesis that light could be emitted only in packets of energy given by hf, where h is the Planck constant and f is the frequency. He then postulated that light travels in packets whose energy depends on the frequency, and therefore only light above a certain frequency would bring sufficient energy to liberate an electron. Even after experiments confirmed that Einstein's equations for the photoelectric effect were accurate, his explanation was not universally accepted. Niels Bohr, in his 1922 Nobel address, stated, "The hypothesis of light-quanta is not able to throw light on the nature of radiation." 
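A minimal numerical sketch of the threshold behaviour just described, assuming Planck's relation E = hf and an illustrative work function of about 2.1 eV (a value of the order of caesium's; it is not given in the article):

```python
h  = 6.62607015e-34   # Planck constant, J*s
c  = 2.99792458e8     # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

# Work function of the metal surface: an assumed illustrative value.
phi = 2.1 * eV

for label, wavelength in [("red light, 700 nm", 700e-9),
                          ("ultraviolet, 300 nm", 300e-9)]:
    f = c / wavelength                # frequency of the light
    E_photon = h * f                  # energy of one light quantum, E = h f
    E_kin = E_photon - phi            # kinetic energy left for the electron
    if E_kin > 0:
        print(f"{label}: photon {E_photon/eV:.2f} eV -> photoelectron with {E_kin/eV:.2f} eV")
    else:
        print(f"{label}: photon {E_photon/eV:.2f} eV -> below threshold, no photoelectron")
```

However intense the red light is made, each quantum falls short of the work function, which is exactly the frequency dependence Einstein emphasized.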
By 1921, when Einstein was awarded the Nobel Prize and his work on photoelectricity was mentioned by name in the award citation, some physicists accepted that the equation ( h f = Φ + E k {\displaystyle hf=\Phi +E_{k}} ) was correct and light quanta were possible. In 1923, Arthur Compton's X-ray scattering experiment helped more of the scientific community to accept this formula. The theory of light quanta was a strong indicator of wave–particle duality, a fundamental principle of quantum mechanics. A complete picture of the theory of photoelectricity was realized after the maturity of quantum mechanics. === Brownian motion === The article "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" ("On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat"), received 11 May and published 18 July, delineated a stochastic model of Brownian motion. In this paper it will be shown that, according to the molecular kinetic theory of heat, bodies of a microscopically visible size suspended in liquids must, as a result of thermal molecular motions, perform motions of such magnitudes that they can be easily observed with a microscope. It is possible that the motions to be discussed here are identical with so-called Brownian molecular motion; however, the data available to me on the latter are so imprecise that I could not form a judgment on the question... Einstein derived expressions for the mean squared displacement of particles. Using the kinetic theory of gases, which at the time was controversial, the article established that the phenomenon, which had lacked a satisfactory explanation even decades after it was first observed, provided empirical evidence for the reality of the atom. It also lent credence to statistical mechanics, which had been controversial at that time, as well. Before this paper, atoms were recognized as a useful concept, but physicists and chemists debated whether atoms were real entities. Einstein's statistical discussion of atomic behavior gave experimentalists a way to count atoms by looking through an ordinary microscope. Wilhelm Ostwald, one of the leaders of the anti-atom school, later told Arnold Sommerfeld that he had been convinced of the existence of atoms by Jean Perrin's subsequent Brownian motion experiments. === Special relativity === Einstein's "Zur Elektrodynamik bewegter Körper" ("On the Electrodynamics of Moving Bodies"), his third paper that year, was received on 30 June and published 26 September. It reconciles Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light. This later became known as Einstein's special theory of relativity. The paper mentions the names of only five other scientists: Isaac Newton, James Clerk Maxwell, Heinrich Hertz, Christian Doppler, and Hendrik Lorentz. It does not have any references to any other publications. Many of the ideas had already been published by others, as detailed in history of special relativity and relativity priority dispute. However, Einstein's paper introduces a theory of time, distance, mass, and energy that was consistent with electromagnetism, but omitted the force of gravity. 
At the time, it was known that Maxwell's equations, when applied to moving bodies, led to asymmetries (moving magnet and conductor problem), and that it had not been possible to discover any motion of the Earth relative to the aether. Einstein puts forward two postulates to explain these observations. First, he applies the principle of relativity, which states that the laws of physics remain the same for any non-accelerating frame of reference (called an inertial reference frame), to the laws of electrodynamics and optics as well as mechanics. In the second postulate, Einstein proposes that the speed of light has the same value in all frames of reference, independent of the state of motion of the emitting body. Special relativity is thus consistent with the result of the Michelson–Morley experiment, which had not detected a medium of conductance (the aether) for light waves unlike other known waves that require a medium (such as water or air), and which had been crucial for the development of the Lorentz transformations and the principle of relativity. Einstein may not have known about that experiment, but states, Examples of this sort, together with the unsuccessful attempts to discover any motion of the earth relatively to the "light medium", suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. The speed of light is fixed, and thus not relative to the movement of the observer. This was impossible under Newtonian classical mechanics. Einstein argues, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good. We will raise this conjecture (the purport of which will hereafter be called the "Principle of Relativity") to the status of a postulate, and also introduce another postulate, which is only apparently irreconcilable with the former, namely, that light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body. These two postulates suffice for the attainment of a simple and consistent theory of the electrodynamics of moving bodies based on Maxwell's theory for stationary bodies. The introduction of a "luminiferous ether" will prove to be superfluous in as much as the view here to be developed will not require an "absolutely stationary space" provided with special properties, nor assign a velocity-vector to a point of the empty space in which electromagnetic processes take place. The theory ... is based—like all electrodynamics—on the kinematics of the rigid body, since the assertions of any such theory have to do with the relationships between rigid bodies (systems of co-ordinates), clocks, and electromagnetic processes. Insufficient consideration of this circumstance lies at the root of the difficulties which the electrodynamics of moving bodies at present encounters. It had previously been proposed, by George FitzGerald in 1889 and by Lorentz in 1892, independently of each other, that the Michelson–Morley result could be accounted for if moving bodies were contracted in the direction of their motion. Some of the paper's core equations, the Lorentz transforms, had been published by Joseph Larmor (1897, 1900), Hendrik Lorentz (1895, 1899, 1904) and Henri Poincaré (1905), in a development of Lorentz's 1904 paper. 
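The Lorentz transformations mentioned above can be checked numerically against the two postulates. The following sketch, in units where c = 1 and with an assumed relative speed of 0.6c, boosts an event lying on a light ray and verifies that the ray still propagates at c in the new frame and that the spacetime interval is unchanged:

```python
import math

c = 1.0                      # work in units where c = 1
v = 0.6                      # relative speed of the primed frame (assumed value)
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

def boost(t, x):
    """Lorentz transformation to a frame moving with speed v along x."""
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

# An event on a light ray emitted from the origin: x = c t
t, x = 2.0, c * 2.0
t_p, x_p = boost(t, x)

print("speed in unprimed frame:", x / t)        # = c
print("speed in primed frame  :", x_p / t_p)    # also = c (second postulate)

# The spacetime interval is the same in both frames
print("interval, unprimed:", c**2 * t**2 - x**2)
print("interval, primed  :", c**2 * t_p**2 - x_p**2)
```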
Einstein's presentation differed from the explanations given by FitzGerald, Larmor, and Lorentz, but was similar in many respects to the formulation by Poincaré (1905). His explanation arises from two axioms. The first is the idea originating with Galileo Galilei that the laws of nature should be the same for all observers that move with constant speed relative to each other. Einstein writes, The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems of co-ordinates in uniform translatory motion. The second axiom is the rule that the speed of light is the same for every observer. Any ray of light moves in the "stationary" system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body. The theory is now called the special theory of relativity, to distinguish it from his later general theory of relativity, which considers all observers to be equivalent. Acknowledging the role of Max Planck in the early dissemination of his ideas, Einstein wrote in 1913: "The attention that this theory so quickly received from colleagues is surely to be ascribed in large part to the resoluteness and warmth with which he [Planck] intervened for this theory". In addition, the spacetime formulation by Hermann Minkowski in 1907 was influential in gaining widespread acceptance for the theory. Also, and most importantly, the theory was supported by an ever-increasing body of confirmatory experimental evidence. === Mass–energy equivalence === On 21 November, Annalen der Physik published a fourth paper (received 27 September): "Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?" ("Does the Inertia of a Body Depend Upon Its Energy Content?"), in which Einstein deduced what is sometimes described as the most famous of all equations: E = mc². Einstein considered the equivalency equation to be of paramount importance because it showed that a massive particle possesses an energy, the "rest energy", distinct from its classical kinetic and potential energies. The paper is based on Maxwell and Hertz's investigations and, in addition, the axioms of relativity, as Einstein states, The results of the previous investigation lead to a very interesting conclusion, which is here to be deduced. The previous investigation was based "on the Maxwell–Hertz equations for empty space, together with the Maxwellian expression for the electromagnetic energy of space ..." The laws by which the states of physical systems alter are independent of the alternative, to which of two systems of coordinates, in uniform motion of parallel translation relatively to each other, these alterations of state are referred (principle of relativity). The equation sets forth that the energy of a body at rest (E) equals its mass (m) times the speed of light (c) squared, or E = mc². If a body gives off the energy L in the form of radiation, its mass diminishes by L/c². The fact that the energy withdrawn from the body becomes energy of radiation evidently makes no difference, so that we are led to the more general conclusion that The mass of a body is a measure of its energy-content; if the energy changes by L, the mass changes in the same sense by L/(9 × 10²⁰), the energy being measured in ergs, and the mass in grammes. ... If the theory corresponds to the facts, radiation conveys inertia between the emitting and absorbing bodies.
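A quick numerical reading of the statement quoted above, with an assumed illustrative amount of radiated energy, shows the size of the effect in both SI units and the paper's CGS units:

```python
c = 2.99792458e8        # speed of light, m/s

# If a body radiates away energy L (in joules), its mass drops by L / c^2.
L = 9.0e13              # assumed illustrative amount of radiated energy, J
delta_m = L / c**2
print(f"{L:.1e} J radiated -> mass loss of {delta_m * 1000:.2f} g")   # about 1 gram

# The same statement in the paper's CGS units: L in ergs, mass in grammes,
# divisor 9 x 10^20 (that is, c^2 with c expressed in cm/s).
L_erg = L * 1.0e7       # 1 J = 10^7 erg
print(f"{L_erg:.1e} erg / (9 x 10^20) = {L_erg / 9.0e20:.2f} g")
```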
The mass–energy relation can be used to predict how much energy will be released or consumed by nuclear reactions; one simply measures the mass of all constituents and the mass of all the products and multiplies the difference between the two by c². The result shows how much energy will be released or consumed, usually in the form of light or heat. When applied to certain nuclear reactions, the equation shows that an extraordinarily large amount of energy will be released, millions of times as much as in the combustion of chemical explosives, where the amount of mass converted to energy is negligible. This explains why nuclear reactions produce enormous amounts of energy, as they release binding energy during nuclear fission and nuclear fusion, and convert a portion of subatomic mass to energy. == Commemoration == The International Union of Pure and Applied Physics (IUPAP) resolved to commemorate the 100th anniversary of the publication of Einstein's extensive work in 1905 as the World Year of Physics 2005. This was subsequently endorsed by the United Nations. == Notes == == References == === Citations === === Primary sources === === Secondary sources === Gribbin, John, and Gribbin, Mary. Annus Mirabilis: 1905, Albert Einstein, and the Theory of Relativity, Chamberlain Bros., 2005. ISBN 1-59609-144-4. Renn, Jürgen, and Dieter Hoffmann, "1905 – a miraculous year". 2005 J. Phys. B: At. Mol. Opt. Phys. 38 S437-S448 (Max Planck Institute for the History of Science) [Issue 9 (14 May 2005)]. doi:10.1088/0953-4075/38/9/001. Stachel, John, et al., Einstein's Miraculous Year. Princeton University Press, 1998. ISBN 0-691-05938-1. == External links == Collection of the Annus Mirabilis papers and their English translations at the Library of Congress website
Wikipedia/On_the_Electrodynamics_of_Moving_Bodies
In physics, specifically general relativity, the Mathisson–Papapetrou–Dixon equations describe the motion of a massive spinning body moving in a gravitational field. Other equations with similar names and mathematical forms are the Mathisson–Papapetrou equations and Papapetrou–Dixon equations. All three sets of equations describe the same physics. These equations are named after Myron Mathisson, William Graham Dixon, and Achilles Papapetrou, who worked on them. Throughout, this article uses the natural units c = G = 1, and tensor index notation. == Mathisson–Papapetrou–Dixon equations == The Mathisson–Papapetrou–Dixon (MPD) equations for a spinning body of mass m {\displaystyle m} are D k ν D τ + 1 2 S λ μ R λ μ ν ρ V ρ = 0 , D S λ μ D τ + V λ k μ − V μ k λ = 0. {\displaystyle {\begin{aligned}{\frac {Dk_{\nu }}{D\tau }}+{\frac {1}{2}}S^{\lambda \mu }R_{\lambda \mu \nu \rho }V^{\rho }&=0,\\{\frac {DS^{\lambda \mu }}{D\tau }}+V^{\lambda }k^{\mu }-V^{\mu }k^{\lambda }&=0.\end{aligned}}} Here τ {\displaystyle \tau } is the proper time along the trajectory, k ν {\displaystyle k_{\nu }} is the body's four-momentum k ν = ∫ t = const T 0 ν g d 3 x , {\displaystyle k_{\nu }=\int _{t={\text{const}}}{T^{0}}_{\nu }{\sqrt {g}}d^{3}x,} the vector V μ {\displaystyle V^{\mu }} is the four-velocity of some reference point X μ {\displaystyle X^{\mu }} in the body, and the skew-symmetric tensor S μ ν {\displaystyle S^{\mu \nu }} is the angular momentum S μ ν = ∫ t = const { ( x μ − X μ ) T 0 ν − ( x ν − X ν ) T 0 μ } g d 3 x {\displaystyle S^{\mu \nu }=\int _{t={\text{const}}}\left\{\left(x^{\mu }-X^{\mu }\right)T^{0\nu }-\left(x^{\nu }-X^{\nu }\right)T^{0\mu }\right\}{\sqrt {g}}d^{3}x} of the body about this point. In the time-slice integrals we are assuming that the body is compact enough that we can use flat coordinates within the body where the energy-momentum tensor T μ ν {\displaystyle T^{\mu \nu }} is non-zero. As they stand, there are only ten equations to determine thirteen quantities. These quantities are the six components of S λ μ {\displaystyle S^{\lambda \mu }} , the four components of k ν {\displaystyle k_{\nu }} , and the three independent components of V μ {\displaystyle V^{\mu }} . The equations must therefore be supplemented by three additional constraints which serve to determine which point in the body has velocity V μ {\displaystyle V^{\mu }} . Mathisson and Pirani originally chose to impose the condition V μ S μ ν = 0 {\displaystyle V^{\mu }S_{\mu \nu }=0} which, although involving four components, contains only three constraints because V μ S μ ν V ν {\displaystyle V^{\mu }S_{\mu \nu }V^{\nu }} is identically zero. This condition, however, does not lead to a unique solution and can give rise to the mysterious "helical motions". The Tulczyjew–Dixon condition k μ S μ ν = 0 {\displaystyle k_{\mu }S^{\mu \nu }=0} does lead to a unique solution as it selects the reference point X μ {\displaystyle X^{\mu }} to be the body's center of mass in the frame in which its momentum is ( k 0 , k 1 , k 2 , k 3 ) = ( m , 0 , 0 , 0 ) {\displaystyle (k_{0},k_{1},k_{2},k_{3})=(m,0,0,0)} .
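A small numerical illustration of the constraint counting described above (not part of the original papers): for any antisymmetric spin tensor, the contraction V^μ S_{μν} V^ν vanishes identically, and in the rest frame the Tulczyjew–Dixon condition simply sets the S^{0ν} components to zero. The metric signature and the random inputs below are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-,-) (a convention choice)

# A random antisymmetric spin tensor S^{mu nu} and a random vector V^mu
A = rng.normal(size=(4, 4))
S_up = A - A.T                               # S^{mu nu} = -S^{nu mu}
V_up = rng.normal(size=4)

# Lower both indices of S: S_{mu nu} = eta_{mu a} eta_{nu b} S^{a b}
S_dn = eta @ S_up @ eta

# V^mu S_{mu nu} V^nu vanishes identically for antisymmetric S, so the
# Mathisson-Pirani condition V^mu S_{mu nu} = 0 carries only three
# independent constraints rather than four.
print(V_up @ S_dn @ V_up)                    # ~0 up to rounding

# Tulczyjew-Dixon condition in the rest frame, where k_mu = (m, 0, 0, 0):
m = 2.0
k_dn = np.array([m, 0.0, 0.0, 0.0])
# k_mu S^{mu nu} equals m * S^{0 nu}, so imposing the condition forces the
# S^{0 nu} row to zero, leaving the three spatial spin components.
print(k_dn @ S_up)                           # = m * S_up[0, :]
```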
Accepting the Tulczyjew–Dixon condition k μ S μ ν = 0 {\displaystyle k_{\mu }S^{\mu \nu }=0} , we can manipulate the second of the MPD equations into the form D S λ μ D τ + 1 m 2 ( S λ ρ k μ D k ρ D τ + S ρ μ k λ D k ρ D τ ) = 0 , {\displaystyle {\frac {DS_{\lambda \mu }}{D\tau }}+{\frac {1}{m^{2}}}\left(S_{\lambda \rho }k_{\mu }{\frac {Dk^{\rho }}{D\tau }}+S_{\rho \mu }k_{\lambda }{\frac {Dk^{\rho }}{D\tau }}\right)=0,} This is a form of Fermi–Walker transport of the spin tensor along the trajectory – but one preserving orthogonality to the momentum vector k μ {\displaystyle k^{\mu }} rather than to the tangent vector V μ = d X μ / d τ {\displaystyle V^{\mu }=dX^{\mu }/d\tau } . Dixon calls this M-transport. == See also == Introduction to the mathematics of general relativity Geodesic equation Pauli–Lubanski pseudovector Test particle Relativistic angular momentum Center of mass (relativistic) == References == === Notes === === Selected papers === C. Chicone; B. Mashhoon; B. Punsly (2005). "Relativistic motion of spinning particles in a gravitational field". Physics Letters A. 343 (1–3): 1–7. arXiv:gr-qc/0504146. Bibcode:2005PhLA..343....1C. doi:10.1016/j.physleta.2005.05.072. hdl:10355/8357. S2CID 56132009. N. Messios (2007). "Spinning Particles in Spacetimes with Torsion". International Journal of Theoretical Physics. General Relativity and Gravitation. 46 (3). Springer: 562–575. Bibcode:2007IJTP...46..562M. doi:10.1007/s10773-006-9146-8. S2CID 119514028. D. Singh (2008). "An analytic perturbation approach for classical spinning particle dynamics". International Journal of Theoretical Physics. General Relativity and Gravitation. 40 (6). Springer: 1179–1192. arXiv:0706.0928. Bibcode:2008GReGr..40.1179S. doi:10.1007/s10714-007-0597-x. S2CID 7255389. L. F. O. Costa; J. Natário; M. Zilhão (2012). "Mathisson's helical motions demystified". AIP Conf. Proc. AIP Conference Proceedings. 1458: 367–370. arXiv:1206.7093. Bibcode:2012AIPC.1458..367C. doi:10.1063/1.4734436. S2CID 119306409. R. M. Plyatsko (1985). "Addition of the Pirani condition to the Mathisson-Papapetrou equations in a Schwarzschild field". Soviet Physics Journal. 28 (7). Springer: 601–604. Bibcode:1985SvPhJ..28..601P. doi:10.1007/BF00896195. S2CID 121704297. R.R. Lompay (2005). "Deriving Mathisson-Papapetrou equations from relativistic pseudomechanics". arXiv:gr-qc/0503054. R. Plyatsko (2011). "Can Mathisson-Papapetrou equations give clue to some problems in astrophysics?". arXiv:1110.2386 [gr-qc]. M. Leclerc (2005). "Mathisson-Papapetrou equations in metric and gauge theories of gravity in a Lagrangian formulation". Classical and Quantum Gravity. 22 (16): 3203–3221. arXiv:gr-qc/0505021. Bibcode:2005CQGra..22.3203L. doi:10.1088/0264-9381/22/16/006. S2CID 2569951. R. Plyatsko; O. Stefanyshyn; M. Fenyk (2011). "Mathisson-Papapetrou-Dixon equations in the Schwarzschild and Kerr backgrounds". Classical and Quantum Gravity. 28 (19): 195025. arXiv:1110.1967. Bibcode:2011CQGra..28s5025P. doi:10.1088/0264-9381/28/19/195025. S2CID 119213540. R. Plyatsko; O. Stefanyshyn (2008). "On common solutions of Mathisson equations under different conditions". arXiv:0803.0121. Bibcode:2008arXiv0803.0121P. {{cite journal}}: Cite journal requires |journal= (help) R. M. Plyatsko; A. L. Vynar; Ya. N. Pelekh (1985). "Conditions for the appearance of gravitational ultrarelativistic spin-orbital interaction". Soviet Physics Journal. 28 (10). Springer: 773–776. Bibcode:1985SvPhJ..28..773P. doi:10.1007/BF00897946. S2CID 119799125. K. Svirskas; K. 
Pyragas (1991). "The spherically-symmetrical trajectories of spin particles in the Schwarzschild field". Astrophysics and Space Science. 179 (2). Springer: 275–283. Bibcode:1991Ap&SS.179..275S. doi:10.1007/BF00646947. S2CID 120108333.
Wikipedia/Mathisson–Papapetrou–Dixon_equations
In mathematics and physics, a tensor field is a function assigning a tensor to each point of a region of a mathematical space (typically a Euclidean space or manifold) or of the physical space. Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in material objects, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude and a direction, like velocity), a tensor field is a generalization of a scalar field and a vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor A is defined on the set of vector fields X(M) of a manifold M, we call A a tensor field on M. A tensor field, in common usage, is often referred to in the shorter form "tensor". For example, the Riemann curvature tensor refers to a tensor field, as it associates a tensor to each point of a Riemannian manifold, a topological space. == Definition == Let M {\displaystyle M} be a manifold, for instance the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} . Definition. A tensor field of type ( p , q ) {\displaystyle (p,q)} is a section T ∈ Γ ( M , V ⊗ p ⊗ ( V ∗ ) ⊗ q ) {\displaystyle T\ \in \ \Gamma (M,V^{\otimes p}\otimes (V^{*})^{\otimes q})} where V = T M {\displaystyle V=TM} is the tangent bundle of M {\displaystyle M} (whose sections are called vector fields or contravariant vector fields in Physics) and V ∗ = T ∗ M {\displaystyle V^{*}=T^{*}M} is its dual bundle, the cotangent bundle (whose sections are called 1-forms, or covariant vector fields in Physics), and ⊗ {\displaystyle \otimes } is the tensor product of vector bundles. Equivalently, a tensor field is a collection of elements T x ∈ V x ⊗ p ⊗ ( V x ∗ ) ⊗ q {\displaystyle T_{x}\in V_{x}^{\otimes p}\otimes (V_{x}^{*})^{\otimes q}} for every point x ∈ M {\displaystyle x\in M} , where ⊗ {\displaystyle \otimes } now denotes the tensor product of vector spaces, such that it constitutes a smooth map T : M → V ⊗ p ⊗ ( V ∗ ) ⊗ q {\displaystyle T:M\rightarrow V^{\otimes p}\otimes (V^{*})^{\otimes q}} . The elements T x {\displaystyle T_{x}} are called tensors. Locally in a coordinate neighbourhood U {\displaystyle U} with coordinates x 1 , … x n {\displaystyle x^{1},\ldots x^{n}} we have a local basis (Vielbein) of vector fields ∂ 1 = ∂ ∂ x 1 , … , ∂ n = ∂ ∂ x n {\displaystyle \partial _{1}={\frac {\partial }{\partial x^{1}}},\ldots ,\partial _{n}={\frac {\partial }{\partial x^{n}}}} , and a dual basis of 1-forms d x 1 , … d x n {\displaystyle dx^{1},\ldots dx^{n}} so that d x i ( ∂ j ) = ∂ j x i = δ j i {\displaystyle dx^{i}(\partial _{j})=\partial _{j}x^{i}=\delta _{j}^{i}} . In the coordinate neighbourhood U {\displaystyle U} we then have T x = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ i 1 ⊗ ⋯ ⊗ ∂ i p ⊗ d x j 1 ⊗ ⋯ ⊗ d x j q {\displaystyle T_{x}=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n})\partial _{i_{1}}\otimes \cdots \otimes \partial _{i_{p}}\otimes dx^{j_{1}}\otimes \cdots \otimes dx^{j_{q}}} where here and below we use Einstein summation conventions.
Note that if we choose a different coordinate system y 1 … y n {\displaystyle y^{1}\ldots y^{n}} then ∂ ∂ x i = ∂ y k ∂ x i ∂ ∂ y k {\displaystyle {\frac {\partial }{\partial x^{i}}}={\frac {\partial y^{k}}{\partial x^{i}}}{\frac {\partial }{\partial y^{k}}}} and d x j = ∂ x j ∂ y ℓ d y ℓ {\displaystyle dx^{j}={\frac {\partial x^{j}}{\partial y^{\ell }}}dy^{\ell }} where the coordinates ( x 1 , … , x n ) {\displaystyle (x^{1},\ldots ,x^{n})} can be expressed in the coordinates ( y 1 , … y n ) {\displaystyle (y^{1},\ldots y^{n})} and vice versa, so that T x = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ ∂ x i 1 ⊗ ⋯ ⊗ ∂ ∂ x i p ⊗ d x j 1 ⊗ ⋯ ⊗ d x j q = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ y k 1 ∂ x i 1 ⋯ ∂ y k p ∂ x i p ∂ x j 1 ∂ y ℓ 1 ⋯ ∂ x j q ∂ y ℓ q ∂ ∂ y k 1 ⊗ ⋯ ⊗ ∂ ∂ y k p ⊗ d y ℓ 1 ⊗ ⋯ ⊗ d y ℓ q = T ℓ 1 , ⋯ ℓ q k 1 , … , k p ( y 1 , … y n ) ∂ ∂ y k 1 ⊗ ⋯ ⊗ ∂ ∂ y k p ⊗ d y ℓ 1 ⊗ ⋯ ⊗ d y ℓ q {\displaystyle {\begin{aligned}T_{x}&=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial }{\partial x^{i_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial x^{i_{p}}}}\otimes dx^{j_{1}}\otimes \cdots \otimes dx^{j_{q}}\\&=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial y^{k_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial y^{k_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial y^{\ell _{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial y^{\ell _{q}}}}{\frac {\partial }{\partial y^{k_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial y^{k_{p}}}}\otimes dy^{\ell _{1}}\otimes \cdots \otimes dy^{\ell _{q}}\\&=T_{\ell _{1},\cdots \ell _{q}}^{k_{1},\ldots ,k_{p}}(y^{1},\ldots y^{n}){\frac {\partial }{\partial y^{k_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial y^{k_{p}}}}\otimes dy^{\ell _{1}}\otimes \cdots \otimes dy^{\ell _{q}}\\\end{aligned}}} i.e. T ℓ 1 , ⋯ ℓ q k 1 , … , k p ( y 1 , … y n ) = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ y k 1 ∂ x i 1 ⋯ ∂ y k p ∂ x i p ∂ x j 1 ∂ y ℓ 1 ⋯ ∂ x j q ∂ y ℓ q {\displaystyle T_{\ell _{1},\cdots \ell _{q}}^{k_{1},\ldots ,k_{p}}(y^{1},\ldots y^{n})=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial y^{k_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial y^{k_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial y^{\ell _{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial y^{\ell _{q}}}}} The system of indexed functions T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) {\displaystyle T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n})} (one system for each choice of coordinate system) connected by transformations as above are the tensors in the definitions below. Remark One can, more generally, take V {\displaystyle V} to be any vector bundle on M {\displaystyle M} , and V ∗ {\displaystyle V^{*}} its dual bundle. In that case M {\displaystyle M} can be a more general topological space. These sections are called tensors of V {\displaystyle V} , or tensors for short if no confusion is possible. == Geometric introduction == Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface. Now consider more complicated fields.
For example, if the manifold is Riemannian, then it has a metric field g {\displaystyle g} , such that given any two vectors v , w {\displaystyle v,w} at point x {\displaystyle x} , their inner product is g x ( v , w ) {\displaystyle g_{x}(v,w)} . The field g {\displaystyle g} could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix. In general, we want to specify tensor fields in a coordinate-independent way: It should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates. == Via coordinate transitions == Following Schouten (1951) and McConnell (1957), the concept of a tensor relies on a concept of a reference frame (or coordinate system), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems. For example, coordinates belonging to the n-dimensional real coordinate space R n {\displaystyle \mathbb {R} ^{n}} may be subjected to arbitrary affine transformations: x k ↦ A j k x j + a k {\displaystyle x^{k}\mapsto A_{j}^{k}x^{j}+a^{k}} (with n-dimensional indices, summation implied). A covariant vector, or covector, is a system of functions v k {\displaystyle v_{k}} that transforms under this affine transformation by the rule v k ↦ v i A k i . {\displaystyle v_{k}\mapsto v_{i}A_{k}^{i}.} The list of Cartesian coordinate basis vectors e k {\displaystyle \mathbf {e} _{k}} transforms as a covector, since under the affine transformation e k ↦ A k i e i {\displaystyle \mathbf {e} _{k}\mapsto A_{k}^{i}\mathbf {e} _{i}} . A contravariant vector is a system of functions v k {\displaystyle v^{k}} of the coordinates that, under such an affine transformation undergoes a transformation v k ↦ ( A − 1 ) j k v j . {\displaystyle v^{k}\mapsto (A^{-1})_{j}^{k}v^{j}.} This is precisely the requirement needed to ensure that the quantity v k e k {\displaystyle v^{k}\mathbf {e} _{k}} is an invariant object that does not depend on the coordinate system chosen. More generally, the coordinates of a tensor of valence (p,q) have p upper indices and q lower indices, with the transformation law being T i 1 ⋯ i p j 1 ⋯ j q ↦ A i 1 ′ i 1 ⋯ A i p ′ i p T i 1 ′ ⋯ i p ′ j 1 ′ ⋯ j q ′ ( A − 1 ) j 1 j 1 ′ ⋯ ( A − 1 ) j q j q ′ . {\displaystyle {T^{i_{1}\cdots i_{p}}}_{j_{1}\cdots j_{q}}\mapsto A_{i'_{1}}^{i_{1}}\cdots A_{i'_{p}}^{i_{p}}{T^{i'_{1}\cdots i'_{p}}}_{j'_{1}\cdots j'_{q}}(A^{-1})_{j_{1}}^{j'_{1}}\cdots (A^{-1})_{j_{q}}^{j'_{q}}.} The concept of a tensor field may be obtained by specializing the allowed coordinate transformations to be smooth (or differentiable, analytic, etc.). A covector field is a function v k {\displaystyle v_{k}} of the coordinates that transforms by the Jacobian of the transition functions (in the given class). Likewise, a contravariant vector field v k {\displaystyle v^{k}} transforms by the inverse Jacobian. == Tensor bundles == A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle. 
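The invariance of v^k e_k claimed above can be verified numerically. The sketch below applies the covariant rule to a set of basis vectors and the contravariant rule to the components, using a random invertible matrix A as the change of frame; the conventions (which factor gets A and which gets the inverse of A) follow the vector and covector rules just stated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

A = rng.normal(size=(n, n))            # change of frame A^i_k (random; assumed invertible)
A_inv = np.linalg.inv(A)

e = rng.normal(size=(n, n))            # e[k] holds the components of the basis vector e_k
v = rng.normal(size=n)                 # contravariant components v^k

# Covariant rule:      e_k  ->  A^i_k e_i
e_new = np.einsum('ik,id->kd', A, e)
# Contravariant rule:  v^k  ->  (A^-1)^k_j v^j
v_new = A_inv @ v

# The combination v^k e_k is unchanged, i.e. it is a frame-independent object.
before = np.einsum('k,kd->d', v, e)
after = np.einsum('k,kd->d', v_new, e_new)
print(np.allclose(before, after))      # True
```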
The vector bundle is a natural idea of "vector space depending continuously (or smoothly) on parameters" – the parameters being the points of a manifold M. For example, a vector space of one dimension depending on an angle could look like a Möbius strip or alternatively like a cylinder. Given a vector bundle V over M, the corresponding field concept is called a section of the bundle: for m varying over M, a choice of vector vm in Vm, where Vm is the vector space "at" m. Since the tensor product concept is independent of any choice of basis, taking the tensor product of two vector bundles on M is routine. Starting with the tangent bundle (the bundle of tangent spaces) the whole apparatus explained at component-free treatment of tensors carries over in a routine way – again independently of coordinates, as mentioned in the introduction. We therefore can give a definition of tensor field, namely as a section of some tensor bundle. (There are vector bundles that are not tensor bundles: the Möbius band for instance.) This is then guaranteed geometric content, since everything has been done in an intrinsic way. More precisely, a tensor field assigns to any given point of the manifold a tensor in the space V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ , {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*},} where V is the tangent space at that point and V∗ is the cotangent space. See also tangent bundle and cotangent bundle. Given two tensor bundles E → M and F → M, a linear map A: Γ(E) → Γ(F) from the space of sections of E to sections of F can be considered itself as a tensor section of E ∗ ⊗ F {\displaystyle \scriptstyle E^{*}\otimes F} if and only if it satisfies A(fs) = fA(s), for each section s in Γ(E) and each smooth function f on M. Thus a tensor section is not only a linear map on the vector space of sections, but a C∞(M)-linear map on the module of sections. This property is used to check, for example, that even though the Lie derivative and covariant derivative are not tensors, the torsion and curvature tensors built from them are. == Notation == The notation for tensor fields can sometimes be confusingly similar to the notation for tensor spaces. Thus, the tangent bundle TM = T(M) might sometimes be written as T 0 1 ( M ) = T ( M ) = T M {\displaystyle T_{0}^{1}(M)=T(M)=TM} to emphasize that the tangent bundle is the range space of the (1,0) tensor fields (i.e., vector fields) on the manifold M. This should not be confused with the very similar looking notation T 0 1 ( V ) {\displaystyle T_{0}^{1}(V)} ; in the latter case, we just have one tensor space, whereas in the former, we have a tensor space defined for each point in the manifold M. Curly (script) letters are sometimes used to denote the set of infinitely-differentiable tensor fields on M. Thus, T n m ( M ) {\displaystyle {\mathcal {T}}_{n}^{m}(M)} are the sections of the (m,n) tensor bundle on M that are infinitely-differentiable. A tensor field is an element of this set. == Tensor fields as multilinear forms == There is another more abstract (but often useful) way of characterizing tensor fields on a manifold M, which makes tensor fields into honest tensors (i.e. single multilinear mappings), though of a different type (although this is not usually why one often says "tensor" when one really means "tensor field"). 
First, we may consider the set of all smooth (C∞) vector fields on M, X ( M ) := T 0 1 ( M ) {\displaystyle {\mathfrak {X}}(M):={\mathcal {T}}_{0}^{1}(M)} (see the section on notation above) as a single space – a module over the ring of smooth functions, C∞(M), by pointwise scalar multiplication. The notions of multilinearity and tensor products extend easily to the case of modules over any commutative ring. As a motivating example, consider the space Ω 1 ( M ) = T 1 0 ( M ) {\displaystyle \Omega ^{1}(M)={\mathcal {T}}_{1}^{0}(M)} of smooth covector fields (1-forms), also a module over the smooth functions. These act on smooth vector fields to yield smooth functions by pointwise evaluation, namely, given a covector field ω and a vector field X, we define ω ~ ( X ) ( p ) := ω ( p ) ( X ( p ) ) . {\displaystyle {\tilde {\omega }}(X)(p):=\omega (p)(X(p)).} Because of the pointwise nature of everything involved, the action of ω ~ {\displaystyle {\tilde {\omega }}} on X is a C∞(M)-linear map, that is, ω ~ ( f X ) ( p ) = ω ( p ) ( ( f X ) ( p ) ) = ω ( p ) ( f ( p ) X ( p ) ) = f ( p ) ω ( p ) ( X ( p ) ) = ( f ω ) ( p ) ( X ( p ) ) = ( f ω ~ ) ( X ) ( p ) {\displaystyle {\tilde {\omega }}(fX)(p)=\omega (p)((fX)(p))=\omega (p)(f(p)X(p))=f(p)\omega (p)(X(p))=(f\omega )(p)(X(p))=(f{\tilde {\omega }})(X)(p)} for any p in M and smooth function f. Thus we can regard covector fields not just as sections of the cotangent bundle, but also linear mappings of vector fields into functions. By the double-dual construction, vector fields can similarly be expressed as mappings of covector fields into functions (namely, we could start "natively" with covector fields and work up from there). In a complete parallel to the construction of ordinary single tensors (not tensor fields!) on M as multilinear maps on vectors and covectors, we can regard general (k,l) tensor fields on M as C∞(M)-multilinear maps defined on k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M). Now, given any arbitrary mapping T from a product of k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M), it turns out that it arises from a tensor field on M if and only if it is multilinear over C∞(M). Namely C∞(M)-module of tensor fields of type ( k , l ) {\displaystyle (k,l)} over M is canonically isomorphic to C∞(M)-module of C∞(M)-multilinear forms Ω 1 ( M ) × … × Ω 1 ( M ) ⏟ l t i m e s × X ( M ) × … × X ( M ) ⏟ k t i m e s → C ∞ ( M ) . {\displaystyle \underbrace {\Omega ^{1}(M)\times \ldots \times \Omega ^{1}(M)} _{l\ \mathrm {times} }\times \underbrace {{\mathfrak {X}}(M)\times \ldots \times {\mathfrak {X}}(M)} _{k\ \mathrm {times} }\to C^{\infty }(M).} This kind of multilinearity implicitly expresses the fact that we're really dealing with a pointwise-defined object, i.e. a tensor field, as opposed to a function which, even when evaluated at a single point, depends on all the values of vector fields and 1-forms simultaneously. A frequent example application of this general rule is showing that the Levi-Civita connection, which is a mapping of smooth vector fields ( X , Y ) ↦ ∇ X Y {\displaystyle (X,Y)\mapsto \nabla _{X}Y} taking a pair of vector fields to a vector field, does not define a tensor field on M. 
This is because it is only R {\displaystyle \mathbb {R} } -linear in Y (in place of full C∞(M)-linearity, it satisfies the Leibniz rule, ∇ X ( f Y ) = ( X f ) Y + f ∇ X Y {\displaystyle \nabla _{X}(fY)=(Xf)Y+f\nabla _{X}Y} )). Nevertheless, it must be stressed that even though it is not a tensor field, it still qualifies as a geometric object with a component-free interpretation. == Applications == The curvature tensor is discussed in differential geometry and the stress–energy tensor is important in physics, and these two tensors are related by Einstein's theory of general relativity. In electromagnetism, the electric and magnetic fields are combined into an electromagnetic tensor field. Differential forms, used in defining integration on manifolds, are a type of tensor field. == Tensor calculus == In theoretical physics and other fields, differential equations posed in terms of tensor fields provide a very general way to express relationships that are both geometric in nature (guaranteed by the tensor nature) and conventionally linked to differential calculus. Even to formulate such equations requires a fresh notion, the covariant derivative. This handles the formulation of variation of a tensor field along a vector field. The original absolute differential calculus notion, which was later called tensor calculus, led to the isolation of the geometric concept of connection. == Twisting by a line bundle == An extension of the tensor field idea incorporates an extra line bundle L on M. If W is the tensor product bundle of V with L, then W is a bundle of vector spaces of just the same dimension as V. This allows one to define the concept of tensor density, a 'twisted' type of tensor field. A tensor density is the special case where L is the bundle of densities on a manifold, namely the determinant bundle of the cotangent bundle. (To be strictly accurate, one should also apply the absolute value to the transition functions – this makes little difference for an orientable manifold.) For a more traditional explanation see the tensor density article. One feature of the bundle of densities (again assuming orientability) L is that Ls is well-defined for real number values of s; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density, the case where s = ⁠1/2⁠. In general we can take sections of W, the tensor product of V with Ls, and consider tensor density fields with weight s. Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization. == Flat case == When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M, we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion. == Cocycles and chain rules == As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields. Abstractly, we can identify the chain rule as a 1-cocycle. 
It gives the consistency required to define the tangent bundle in an intrinsic way. The other vector bundles of tensors have comparable cocycles, which come from applying functorial properties of tensor constructions to the chain rule itself; this is why they also are intrinsic (read, 'natural') concepts. What is usually spoken of as the 'classical' approach to tensors tries to read this backwards – and is therefore a heuristic, post hoc approach rather than truly a foundational one. Implicit in defining tensors by how they transform under a coordinate change is the kind of self-consistency the cocycle expresses. The construction of tensor densities is a 'twisting' at the cocycle level. Geometers have not been in any doubt about the geometric nature of tensor quantities; this kind of descent argument justifies abstractly the whole theory. == Generalizations == === Tensor densities === The concept of a tensor field can be generalized by considering objects that transform differently. An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the wth power, is called a tensor density with weight w. Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. Higher "weights" then just correspond to taking additional tensor products with this space in the range. A special case are the scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element, which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted det g {\displaystyle {\sqrt {\det g}}} . The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition: det ( g ′ ) = ( det ∂ x ∂ x ′ ) 2 det ( g ) , {\displaystyle \det(g')=\left(\det {\frac {\partial x}{\partial x'}}\right)^{2}\det(g),} which is the transformation law for a scalar density of weight +2. More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see Density on a manifold. 
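The weight +2 transformation of the metric determinant stated above can be checked in a concrete case. The sketch below uses the change from Cartesian to polar coordinates on the plane (a choice made here purely for illustration) and compares det g′ with (det ∂x/∂x′)² det g at a sample point:

```python
import numpy as np

# Cartesian metric on the plane: g_ij = delta_ij
g_cart = np.eye(2)

def jacobian_cart_wrt_polar(r, theta):
    """J[i][a] = d x^i / d x'^a with x = (x, y) and x' = (r, theta)."""
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, 0.7          # an arbitrary sample point (assumed values)
J = jacobian_cart_wrt_polar(r, theta)

# Components of the metric in polar coordinates: g'_ab = J^i_a J^j_b g_ij
g_polar = J.T @ g_cart @ J
print(g_polar)               # diag(1, r^2) up to rounding

# Weight +2 transformation of the determinant: det g' = (det J)^2 det g
print(np.linalg.det(g_polar), np.linalg.det(J) ** 2 * np.linalg.det(g_cart))   # both equal r^2 = 4
```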
== See also == Bitensor – Tensorial object depending on two points in a manifold Jet bundle – Construction in differential topology Ricci calculus – Tensor index notation for tensor-based calculations Spinor field – Geometric structure == Notes == == References == O'Neill, Barrett (1983). Semi-Riemannian Geometry With Applications to Relativity. Elsevier Science. ISBN 9780080570570. Frankel, T. (2012), The Geometry of Physics (3rd edition), Cambridge University Press, ISBN 978-1-107-60260-1. Lambourne [Open University], R.J.A. (2010), Relativity, Gravitation, and Cosmology, Cambridge University Press, Bibcode:2010rgc..book.....L, ISBN 978-0-521-13138-4. Lerner, R.G.; Trigg, G.L. (1991), Encyclopaedia of Physics (2nd Edition), VHC Publishers. McConnell, A. J. (1957), Applications of Tensor Analysis, Dover Publications, ISBN 9780486145020. McMahon, D. (2006), Relativity DeMystified, McGraw Hill (USA), ISBN 0-07-145545-0. Misner, C.; Thorne, K. S.; Wheeler, J. A. (1973), Gravitation, W.H. Freeman & Co, ISBN 0-7167-0344-0. Parker, C.B. (1994), McGraw Hill Encyclopaedia of Physics (2nd Edition), McGraw Hill, ISBN 0-07-051400-3. Schouten, Jan Arnoldus (1951), Tensor Analysis for Physicists, Oxford University Press. Steenrod, Norman (5 April 1999). The Topology of Fibre Bundles. Princeton Mathematical Series. Vol. 14. Princeton, N.J.: Princeton University Press. ISBN 978-0-691-00548-5. OCLC 40734875.
Wikipedia/Tensor_analysis
Tests of relativistic energy and momentum are aimed at measuring the relativistic expressions for energy, momentum, and mass. According to special relativity, the properties of particles moving approximately at the speed of light significantly deviate from the predictions of Newtonian mechanics. For instance, the speed of light cannot be reached by massive particles. Today, those relativistic expressions for particles close to the speed of light are routinely confirmed in undergraduate laboratories, and are necessary in the design and theoretical evaluation of collision experiments in particle accelerators. See also Tests of special relativity for a general overview. == Overview == In classical mechanics, kinetic energy and momentum are expressed as E k = 1 2 m v 2 , p = m v . {\displaystyle E_{k}={\tfrac {1}{2}}mv^{2},\quad p=mv.\,} On the other hand, special relativity predicts that the speed of light is constant in all inertial frames of reference. The relativistic energy–momentum relation reads: E 2 − ( p c ) 2 = ( m c 2 ) 2 {\displaystyle E^{2}-(pc)^{2}=(mc^{2})^{2}\,} , from which the relations for rest energy E 0 {\displaystyle E_{0}} , relativistic energy (rest + kinetic) E {\displaystyle E} , kinetic energy E k {\displaystyle E_{k}} , and momentum p {\displaystyle p} of massive particles follow: E 0 = m c 2 , E = γ m c 2 , E k = ( γ − 1 ) m c 2 , p = γ m v {\displaystyle E_{0}=mc^{2},\quad E=\gamma mc^{2},\quad E_{k}=(\gamma -1)mc^{2},\quad p=\gamma mv} , where γ = 1 / 1 − ( v / c ) 2 {\displaystyle \gamma =1/{\sqrt {1-(v/c)^{2}}}} . So relativistic energy and momentum increase significantly with speed, and thus the speed of light cannot be reached by massive particles. In some relativity textbooks, the so-called "relativistic mass" M = γ m {\displaystyle M=\gamma m\,} is used as well. However, this concept is considered disadvantageous by many authors; instead, the expressions for relativistic energy and momentum should be used to express the velocity dependence in relativity, which provide the same experimental predictions. == Early experiments == The first experiments capable of detecting such relations were conducted by Walter Kaufmann, Alfred Bucherer and others between 1901 and 1915. These experiments were aimed at measuring the deflection of beta rays within a magnetic field so as to determine the mass-to-charge ratio of electrons. Since the charge was known to be velocity independent, any variation had to be attributed to alterations in the electron's momentum or mass (formerly known as transverse electromagnetic mass m T = m γ , {\displaystyle m_{T}=m\gamma ,} equivalent to the "relativistic mass" M {\displaystyle M} as indicated above). Since relativistic mass is not often used anymore in modern textbooks, those tests can be described as measurements of relativistic momentum or energy, because the following relation applies: M m = p m v = E m c 2 = γ {\displaystyle {\frac {M}{m}}={\frac {p}{mv}}={\frac {E}{mc^{2}}}=\gamma } Electrons traveling at speeds between 0.25c and 0.75c indicated an increase of momentum in agreement with the relativistic predictions, and were considered clear confirmations of special relativity. However, it was later pointed out that although the experiments were in agreement with relativity, the precision was not sufficient to rule out competing models of the electron, such as the one of Max Abraham.
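The divergence between the classical and relativistic expressions above is easy to tabulate. The short sketch below evaluates both kinetic-energy formulas and the Lorentz factor for an electron at a few sample speeds (the chosen speeds are arbitrary illustrative values):

```python
import math

m = 9.10938e-31        # electron mass, kg
c = 2.99792458e8       # speed of light, m/s
keV = 1.602176634e-16  # joules per keV

print(f"{'v/c':>6} {'E_k classical (keV)':>20} {'E_k relativistic (keV)':>23} {'gamma':>8}")
for beta in [0.1, 0.5, 0.9, 0.99, 0.999]:
    v = beta * c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    ek_classical = 0.5 * m * v**2                 # (1/2) m v^2
    ek_relativistic = (gamma - 1.0) * m * c**2    # (gamma - 1) m c^2
    print(f"{beta:>6} {ek_classical/keV:>20.1f} {ek_relativistic/keV:>23.1f} {gamma:>8.3f}")
```

At 0.1c the two formulas nearly agree, while near c the relativistic kinetic energy grows without bound, which is why no finite amount of energy can push a massive particle to the speed of light.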
Already in 1915, however, Arnold Sommerfeld was able to derive the fine structure of hydrogen-like spectra by using the relativistic expressions for momentum and energy (in the context of the Bohr–Sommerfeld theory). Subsequently, Karl Glitscher simply substituted the relativistic expressions for Abraham's, demonstrating that Abraham's theory is in conflict with experimental data and is therefore refuted, while relativity is in agreement with the data. == Precision measurements == In 1940, Rogers et al. performed the first electron deflection test sufficiently precise to definitely rule out competing models. As in the Bucherer–Neumann experiments, the velocity and the charge-to-mass ratio of beta particles at velocities up to 0.75c were measured. However, they made many improvements, including the employment of a Geiger counter. The accuracy of the experiment by which relativity was confirmed was within 1%. An even more precise electron deflection test was conducted by Meyer et al. (1963). They tested electrons traveling at velocities from 0.987 to 0.99c, which were deflected in a static homogeneous magnetic field by which p was measured, and a static cylindrical electric field by which p 2 / ( m γ ) {\displaystyle p^{2}/(m\gamma )} was measured. They confirmed relativity with an upper limit for deviations of ~0.00037. Measurements of the charge-to-mass ratio, and thus momentum, of protons have also been conducted. Grove and Fox (1953) measured 385-MeV protons moving at ~0.7c. Determination of the angular frequencies and of the magnetic field provided the charge-to-mass ratio. This, together with measuring the magnetic center, allowed them to confirm the relativistic expression for the charge-to-mass ratio with a precision of ~0.0006. However, Zrelov et al. (1958) criticized the scant information given by Grove and Fox, emphasizing the difficulty of such measurements due to the complex motion of the protons. Therefore, they conducted a more extensive measurement, in which protons of 660 MeV with a mean velocity of 0.8112c were employed. The proton's momentum was measured using a Litz wire, and the velocity was determined by evaluation of Cherenkov radiation. They confirmed relativity with an upper limit for deviations of ~0.0041. == Bertozzi experiment == Since the 1930s, relativity has been needed in the construction of particle accelerators, and the precision measurements mentioned above clearly confirmed the theory as well. But those tests demonstrate the relativistic expressions in an indirect way, since many other effects have to be considered in order to evaluate the deflection curve, velocity, and momentum. So an experiment specifically aimed at demonstrating the relativistic effects in a very direct way was conducted by William Bertozzi (1962, 1964). He employed the electron accelerator facility at MIT in order to initiate five electron runs, with electrons of kinetic energies between 0.5 and 15 MeV. These electrons were produced by a Van de Graaff generator and traveled a distance of 8.4 m, until they hit an aluminium disc. First, the time of flight of the electrons was measured in all five runs – the velocity data obtained were in close agreement with the relativistic expectation. However, at this stage the kinetic energy was only indirectly determined by the accelerating fields.
Therefore, the heat produced by some electrons hitting the aluminium disc was measured by calorimetry in order to obtain their kinetic energy directly – those results agreed with the expected energy within a 10% error margin.

== Undergraduate experiments ==
Various experiments have been performed which, due to their simplicity, are still used as undergraduate experiments. Mass, velocity, momentum, and energy of electrons have been measured in different ways in those experiments, all of them confirming relativity. They include experiments involving beta particles, Compton scattering, in which electrons exhibit highly relativistic properties, and positron annihilation.

== Particle accelerators ==
In modern particle accelerators at high energies, the predictions of special relativity are routinely confirmed, and are necessary for the design and theoretical evaluation of collision experiments, especially in the ultrarelativistic limit. For instance, time dilation must be taken into account to understand the dynamics of particle decay, and the relativistic velocity addition theorem explains the distribution of synchrotron radiation. Regarding the relativistic energy–momentum relations, a series of high-precision velocity and energy–momentum experiments have been conducted, in which the energies employed were necessarily much higher than in the experiments mentioned above.

=== Velocity ===
Time-of-flight measurements have been conducted at the SLAC National Accelerator Laboratory to measure differences between the velocities of electrons and light. For instance, Brown et al. (1973) found no difference in the time of flight of 11-GeV electrons and visible light, setting an upper limit on velocity differences of {\displaystyle \Delta v/c=(-1.3\pm 2.7)\times 10^{-6}}. Another SLAC experiment, conducted by Guiragossián et al. (1974), accelerated electrons to energies of 15 to 20.5 GeV. They used a radio frequency separator (RFS) to measure time-of-flight differences, and thus velocity differences, between those electrons and 15-GeV gamma rays over a path length of 1015 m. They found no difference, lowering the upper limit to {\displaystyle \Delta v/c=2\times 10^{-7}}.
Earlier, Alväger et al. (1964) at the CERN Proton Synchrotron had carried out a time-of-flight measurement to test the Newtonian momentum relation for light, which would hold in the so-called emission theory. In this experiment, gamma rays were produced in the decay of 6-GeV pions traveling at 0.99975c. If the Newtonian momentum {\displaystyle p=mv} were valid, those gamma rays should have traveled at superluminal speeds. However, they found no difference and gave an upper limit of {\displaystyle \Delta v/c=10^{-5}}.

=== Energy and calorimetry ===
When particles penetrate into particle detectors, they undergo electron–positron annihilation, Compton scattering, Cherenkov radiation, etc., so that a cascade of effects leads to the production of new particles (photons, electrons, neutrinos, etc.). The energy of such particle showers corresponds to the relativistic kinetic energy and rest energy of the initial particles. This energy can be measured by calorimeters in an electrical, optical, thermal, or acoustical way.
Thermal measurements to estimate the relativistic kinetic energy were already carried out by Bertozzi, as mentioned above. Additional measurements at SLAC followed, in which the heat produced by 20-GeV electrons was measured in 1982.
A beam dump of water-cooled aluminium was employed as the calorimeter. The results were in agreement with special relativity, although the accuracy was only 30%. However, the experimenters noted that calorimetric tests with 10-GeV electrons had already been carried out in 1969. There, copper was used as the beam dump, and an accuracy of 1% was achieved.
In modern calorimeters, called electromagnetic or hadronic depending on the interaction, the energy of the particle showers is often measured by the ionization they cause. Excitations can also arise in scintillators (see scintillation), whereby light is emitted and then measured by a scintillation counter. Cherenkov radiation is measured as well. In all of those methods, the measured energy is proportional to the initial particle energy.

=== Annihilation and pair production ===
Relativistic energy and momentum can also be measured by studying processes such as annihilation and pair production. For instance, the rest energy of electrons and positrons is 0.51 MeV each. When a photon interacts with an atomic nucleus, electron–positron pairs can be generated if the photon energy reaches the required threshold energy, which is the combined electron–positron rest energy of 1.02 MeV. If the photon energy is higher, the excess energy is converted into kinetic energy of the particles. The reverse process occurs in electron–positron annihilation at low energies, in which photons are created with the same total energy as the electron–positron pair. These are direct examples of {\displaystyle E_{0}=mc^{2}} (mass–energy equivalence).
There are also many examples of conversion of relativistic kinetic energy into rest energy. In 1974, the SLAC National Accelerator Laboratory accelerated electrons and positrons up to relativistic velocities, so that their relativistic energy {\displaystyle \gamma mc^{2}} (i.e. the sum of their rest energy and kinetic energy) was significantly increased, to about 1500 MeV each. When those particles collided, other particles such as the J/ψ meson, with a rest energy of about 3000 MeV, were produced. Much higher energies were employed at the Large Electron–Positron Collider in 1989, where electrons and positrons were accelerated up to 45 GeV each in order to produce W and Z bosons with rest energies between 80 and 91 GeV. Later, the energies were considerably increased, to 200 GeV, to generate pairs of W bosons. Such bosons were also measured using proton–antiproton annihilation. The rest energy of those particles amounts to approximately 0.938 GeV each. The Super Proton Synchrotron accelerated those particles up to relativistic velocities and energies of approximately 270 GeV each, so that the center-of-mass energy at the collision reached 540 GeV. Thereby, quarks and antiquarks gained the energy and momentum necessary to annihilate into W and Z bosons. Many other experiments involving the creation of a considerable number of different particles at relativistic velocities have been (and still are) conducted in hadron colliders such as the Tevatron (up to 1 TeV), the Relativistic Heavy Ion Collider (up to 200 GeV), and most recently the Large Hadron Collider (up to 7 TeV) in the course of searching for the Higgs boson.
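The energy bookkeeping described in this section can be illustrated with a short numerical sketch; it is illustrative only, the photon energy is an assumed example value, and the beam energies are the round numbers quoted above. In a symmetric collider the available center-of-mass energy is simply the sum of the two beam energies, which must cover the rest energy of the particles to be produced; likewise, photon energy above the pair-production threshold appears as kinetic energy of the created pair.

```python
E_ELECTRON = 0.511   # electron/positron rest energy in MeV (approximate value)

# Pair production: an assumed 2.0 MeV photon versus the ~1.02 MeV threshold
photon_energy = 2.0                                  # MeV, illustrative value
threshold = 2 * E_ELECTRON                           # combined rest energy of the pair
kinetic_energy_of_pair = photon_energy - threshold   # shared by the e+ and e-
print(f"kinetic energy of the created pair: {kinetic_energy_of_pair:.2f} MeV")

# Symmetric colliders: center-of-mass energy available to create new particles
colliders = [
    ("SLAC e+e- (J/psi production)", 1500.0, "MeV"),
    ("LEP e+e- (Z production)",        45.0, "GeV"),
    ("SPS p-pbar (W/Z production)",   270.0, "GeV"),
]
for name, beam_energy, unit in colliders:
    print(f"{name}: sqrt(s) = {2 * beam_energy:g} {unit}")
```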
== Nuclear reactions ==
The relation {\displaystyle E_{0}=mc^{2}} can be tested in nuclear reactions, since the percentage differences between the masses of the reactants and the products are large enough to measure; the change in total mass should account for the change in total kinetic energy. Einstein proposed such a test in the paper where he first stated the equivalence of mass and energy, mentioning the radioactive decay of radium as a possibility. The first test in a nuclear reaction, however, used the absorption of an incident proton by lithium-7, the resulting nucleus then breaking into two alpha particles. The change in mass corresponded to the change in kinetic energy to within 0.5%.
A particularly sensitive test was carried out in 2005 on the gamma decay of excited sulfur and silicon nuclei, in each case to the ground state. The masses of the excited and ground states were determined by measuring their revolution frequencies in an electromagnetic trap. The energies of the gamma rays were determined by measuring their wavelengths with gamma-ray diffraction, analogous to X-ray diffraction, and using the well-established relation between photon energy and wavelength. The results confirmed the predictions of relativity to a precision of 0.0000004.

== References ==

== External links ==
Physics FAQ: List of SR tests
Wikipedia/Tests_of_relativistic_energy_and_momentum
The Einstein Theory of Relativity (1923) is a silent animated short film directed by Dave Fleischer and released by Fleischer Studios.

== History ==
In August 1922, Scientific American published an article explaining its position that a silent film would be unsuccessful in presenting the theory of relativity to the general public, arguing that only as part of a broader educational package including lecture and text would such a film be successful. Scientific American then went on to review frames from an unnamed German film reported to be financially successful. Six months later, on February 8, 1923, the Fleischers released their relativity film, produced in collaboration with popular science journalist Garrett P. Serviss to accompany his book on the same topic. Two versions of the Fleischer film are reported to exist – a shorter two-reel (20-minute) edit intended for general theater audiences, and a longer five-reel (50-minute) version intended for educational use.
The Fleischers lifted material from the German predecessor, Die Grundlagen der Einsteinschen Relativitäts-Theorie, directed by Hanns-Walter Kornblum, for inclusion in their film. Even if actual footage was not recycled into The Einstein Theory of Relativity, a comparison of images from the Fleischer film with the frames and text reproduced in the Scientific American article suggests that original visual elements from the German film were.
This film, like much of the Fleischers' work, has fallen into the public domain. Unlike Fleischer Studios' Superman or Betty Boop cartoons, The Einstein Theory of Relativity has very few existing prints and is available in 16mm from only a few specialized film preservation organizations.

== References ==

== External links ==
Media related to The Einstein Theory of Relativity at Wikimedia Commons
The full text of The Einstein Theory of Relativity at Wikisource
The Einstein Theory of Relativity at IMDb
The Einstein Theory of Relativity DVD of the film, bundled with a guidebook by Garrett P. Serviss (and including another Fleischer documentary, Evolution), from Apogee Books, ISBN 1-894959-51-5.
Wikipedia/The_Einstein_Theory_of_Relativity