Scanning tunneling microscope
A scanning tunneling microscope (STM) is a type of scanning probe microscope used for imaging surfaces at the atomic level. Its development in 1981 earned its inventors, Gerd Binnig and Heinrich Rohrer, then at IBM Zürich, the Nobel Prize in Physics in 1986. STM senses the surface by using an extremely sharp conducting tip that can distinguish features smaller than 0.1 nm with a 0.01 nm (10 pm) depth resolution. This means that individual atoms can routinely be imaged and manipulated. Most scanning tunneling microscopes are built for use in ultra-high vacuum at temperatures approaching absolute zero, but variants exist for studies in air, water and other environments, and for temperatures over 1000 °C.
STM is based on the concept of quantum tunneling. When the tip is brought very near to the surface to be examined, a bias voltage applied between the two allows electrons to tunnel through the vacuum separating them. The resulting tunneling current is a function of the tip position, applied voltage, and the local density of states (LDOS) of the sample. Information is acquired by monitoring the current as the tip scans across the surface, and is usually displayed in image form.
A refinement of the technique known as scanning tunneling spectroscopy consists of keeping the tip in a constant position above the surface, varying the bias voltage and recording the resultant change in current. Using this technique, the local density of the electronic states can be reconstructed. This is sometimes performed in high magnetic fields and in presence of impurities to infer the properties and interactions of electrons in the studied material.
Scanning tunneling microscopy can be a challenging technique, as it requires extremely clean and stable surfaces, sharp tips, excellent vibration isolation, and sophisticated electronics. Nonetheless, many hobbyists build their own microscopes.
Procedure
The tip is brought close to the sample by a coarse positioning mechanism that is usually monitored visually. At close range, fine control of the tip position with respect to the sample surface is achieved by piezoelectric scanner tubes whose length can be altered by a control voltage. A bias voltage is applied between the sample and the tip, and the scanner is gradually elongated until the tip starts receiving the tunneling current. The tip–sample separation w is then kept somewhere in the 4–7 Å (0.4–0.7 nm) range, slightly above the height where the tip would experience repulsive interaction but still in the region where attractive interaction exists. The tunneling current, being in the sub-nanoampere range, is amplified as close to the scanner as possible. Once tunneling is established, the sample bias and tip position with respect to the sample are varied according to the requirements of the experiment.
As the tip is moved across the surface in a discrete x–y matrix, the changes in surface height and population of the electronic states cause changes in the tunneling current. Digital images of the surface are formed in one of two ways: in the constant-height mode changes of the tunneling current are mapped directly, while in the constant-current mode the voltage that controls the height (z) of the tip is recorded while the tunneling current is kept at a predetermined level.
In constant-current mode, feedback electronics adjust the height by applying a voltage to the piezoelectric height-control mechanism. If at some point the tunneling current is below the set level, the tip is moved towards the sample, and conversely. This mode is relatively slow, as the electronics need to check the tunneling current and adjust the height in a feedback loop at each measured point of the surface. When the surface is atomically flat, the voltage applied to the z-scanner mainly reflects variations in local charge density. But when an atomic step is encountered, or when the surface is buckled due to reconstruction, the height of the scanner also has to change because of the overall topography. The image formed of the z-scanner voltages that were needed to keep the tunneling current constant as the tip scanned the surface thus contains both topographical and electron density data. In some cases it may not be clear whether height changes came as a result of one or the other.
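To make the feedback idea concrete, the following is a minimal Python sketch of a constant-current loop over a one-dimensional line scan. Everything in it is assumed for illustration only: the exponential current–distance law with a fixed decay constant, the invented corrugation, and the loop gain. It is not instrument firmware, just the logic of retracting the tip when the current is too high and approaching when it is too low.

```python
import numpy as np

# Illustrative sketch (not instrument code): a constant-current feedback loop
# over a 1-D line scan. The tunneling current is modeled as
# I = I0 * exp(-2 * kappa * gap), as described in the text; all numbers are assumed.

kappa = 1.0e10          # decay constant, 1/m (~10 per nm, typical order)
I_setpoint = 1.0e-9     # demanded tunneling current, A
I0 = 1.0e-6             # current prefactor at zero gap, A (assumed)
gain = 0.05e-9          # integral feedback gain, metres per decade of current error

x = np.linspace(0, 20e-9, 400)                     # 20 nm line, 400 pixels
topo = 0.1e-9 * np.sin(2 * np.pi * x / 2e-9)       # made-up atomic corrugation

z = 0.5e-9              # initial tip height above the mean surface
z_trace = []            # recorded signal: the constant-current "image"

for h in topo:
    for _ in range(50):                            # feedback iterations per pixel
        gap = z - h                                # tip-sample separation
        current = I0 * np.exp(-2 * kappa * gap)
        error = np.log10(current / I_setpoint)     # error in decades
        z += gain * error                          # too much current -> retract
    z_trace.append(z)

# z_trace now mirrors the topography: it is the constant-current image signal.
```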
In constant-height mode, the z-scanner voltage is kept constant as the scanner swings back and forth across the surface, and the tunneling current, exponentially dependent on the distance, is mapped. This mode of operation is faster, but on rough surfaces, where there may be large adsorbed molecules present, or ridges and grooves, the tip will be in danger of crashing.
The raster scan of the tip is anything from a 128×128 to a 1024×1024 (or more) matrix, and for each point of the raster a single value is obtained. The images produced by STM are therefore grayscale, and color is only added in post-processing in order to visually emphasize important features.
In addition to scanning across the sample, information on the electronic structure at a given location in the sample can be obtained by sweeping the bias voltage (along with a small AC modulation to directly measure the derivative) and measuring current change at a specific location. This type of measurement is called scanning tunneling spectroscopy (STS) and typically results in a plot of the local density of states as a function of the electrons' energy within the sample. The advantage of STM over other measurements of the density of states lies in its ability to make extremely local measurements. This is how, for example, the density of states at an impurity site can be compared to the density of states around the impurity and elsewhere on the surface.
Instrumentation
The main components of a scanning tunneling microscope are the scanning tip, piezoelectrically controlled height (z axis) and lateral (x and y axes) scanner, and coarse sample-to-tip approach mechanism. The microscope is controlled by dedicated electronics and a computer. The system is supported on a vibration isolation system.
The tip is often made of tungsten or platinum–iridium wire, though gold is also used. Tungsten tips are usually made by electrochemical etching, and platinum–iridium tips by mechanical shearing. The resolution of an image is limited by the radius of curvature of the scanning tip. Sometimes, image artefacts occur if the tip has more than one apex at the end; most frequently double-tip imaging is observed, a situation in which two apices contribute equally to the tunneling. While several processes for obtaining sharp, usable tips are known, the ultimate test of quality of the tip is only possible when it is tunneling in the vacuum. Every so often the tips can be conditioned by applying high voltages when they are already in the tunneling range, or by making them pick up an atom or a molecule from the surface.
In most modern designs the scanner is a hollow tube of a radially polarized piezoelectric with metallized surfaces. The outer surface is divided into four long quadrants to serve as x and y motion electrodes with deflection voltages of two polarities applied on the opposing sides. The tube material is a lead zirconate titanate ceramic with a piezoelectric constant of about 5 nanometres per volt. The tip is mounted at the center of the tube. Because of some crosstalk between the electrodes and inherent nonlinearities, the motion is calibrated, and voltages needed for independent x, y and z motion applied according to calibration tables.
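A minimal sketch of how such calibration tables can be applied in software is shown below, assuming an invented 3×3 response matrix (piezo constants on the diagonal, small crosstalk terms off it). Real scanners are calibrated against known surface features; the numbers here are purely illustrative.

```python
import numpy as np

# Hypothetical scanner calibration: each column says how much the tip moves
# in x, y and z (in nm) per volt applied on that drive channel. The values,
# including the crosstalk terms, are invented for illustration.

response = np.array([
    [5.0, 0.1, 0.2],    # x motion from (x, y, z) drive voltages, nm/V
    [0.1, 5.0, 0.2],    # y motion, with slight crosstalk from x and z
    [0.3, 0.3, 4.0],    # z motion
])

def drive_voltages(dx_nm, dy_nm, dz_nm):
    """Voltages on the x, y and z channels for a desired tip displacement."""
    return np.linalg.solve(response, np.array([dx_nm, dy_nm, dz_nm]))

# Example: move the tip 10 nm in x while holding y and z fixed.
vx, vy, vz = drive_voltages(10.0, 0.0, 0.0)
print(f"x: {vx:.3f} V, y: {vy:.3f} V, z: {vz:.3f} V")
```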
Due to the extreme sensitivity of the tunneling current to the separation of the electrodes, proper vibration isolation or a rigid STM body is imperative for obtaining usable results. In the first STM by Binnig and Rohrer, magnetic levitation was used to keep the STM free from vibrations; now mechanical spring or gas spring systems are often employed. Additionally, mechanisms for vibration damping using eddy currents are sometimes implemented. Microscopes designed for long scans in scanning tunneling spectroscopy need extreme stability and are built in anechoic chambers—dedicated concrete rooms with acoustic and electromagnetic isolation that are themselves floated on vibration isolation devices inside the laboratory.
Maintaining the tip position with respect to the sample, scanning the sample and acquiring the data is computer-controlled. Dedicated software for scanning probe microscopies is used for image processing as well as performing quantitative measurements.
Some scanning tunneling microscopes are capable of recording images at high frame rates. Videos made of such images can show surface diffusion or track adsorption and reactions on the surface. In video-rate microscopes, frame rates of 80 Hz have been achieved with fully working feedback that adjusts the height of the tip.
Principle of operation
Quantum tunneling of electrons is the physical concept underlying STM and arises from quantum mechanics. Classically, a particle hitting an impenetrable barrier will not pass through. If the barrier is described by a potential acting along the z direction, in which an electron of mass me acquires the potential energy U(z), the electron's trajectory will be deterministic and such that the sum E of its kinetic and potential energies is at all times conserved:
E = p²/(2mₑ) + U(z).
The electron will have a defined, non-zero momentum p only in regions where the initial energy E is greater than U(z). In quantum physics, however, the electron can pass through classically forbidden regions. This is referred to as tunneling.
Rectangular barrier model
The simplest model of tunneling between the sample and the tip of a scanning tunneling microscope is that of a rectangular potential barrier. An electron of energy E is incident upon an energy barrier of height U, in the region of space of width w. An electron's behavior in the presence of a potential U(z), assuming a one-dimensional case, is described by wave functions ψ(z) that satisfy Schrödinger's equation
−(ħ²/2mₑ) ∂²ψ(z)/∂z² + U(z) ψ(z) = E ψ(z),
where ħ is the reduced Planck constant, z is the position, and me is the electron mass. In the zero-potential regions on two sides of the barrier, the wave function takes on the forms
ψ(z) = e^{ikz} + r e^{−ikz}, for z < 0,
ψ(z) = t e^{ikz}, for z > w,
where k = √(2mₑE)/ħ. Inside the barrier, where E < U, the wave function is a superposition of two terms, each decaying from one side of the barrier:
ψ(z) = ξ e^{−κz} + ζ e^{κ(z−w)}, for 0 < z < w,
where κ = √(2mₑ(U − E))/ħ.
The coefficients r and t provide a measure of how much of the incident electron's wave is reflected or transmitted through the barrier. Namely, of the whole impinging particle current only the fraction |t|² is transmitted, as can be seen from the probability current expression
j = (ħ/2mₑi) (ψ* ∂ψ/∂z − ψ ∂ψ*/∂z),
which evaluates to jt = (ħk/mₑ) |t|² for the transmitted wave. The transmission coefficient is obtained from the continuity condition on the three parts of the wave function and their derivatives at z = 0 and z = w (detailed derivation is in the article Rectangular potential barrier). This gives
|t|² = [1 + (k² + κ²)² sinh²(κw) / (4k²κ²)]⁻¹,
with k and κ as defined above. The expression can be further simplified, as follows:
|t|² = 4k²κ² / [4k²κ² + (k² + κ²)² sinh²(κw)].
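A quick numerical check of the exact expression against the strong-attenuation approximation used in the next paragraph, with representative (assumed) values of a 4.5 eV barrier, a 0.5 eV electron and a 0.5 nm gap:

```python
import numpy as np

# Numerical check of the rectangular-barrier transmission. The energies and
# gap below are representative choices, not measured values.
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J

E, U, w = 0.5 * eV, 4.5 * eV, 0.5e-9

k = np.sqrt(2 * m_e * E) / hbar             # wave vector outside the barrier
kappa = np.sqrt(2 * m_e * (U - E)) / hbar   # decay constant inside the barrier

T_exact = 1.0 / (1.0 + (k**2 + kappa**2)**2 * np.sinh(kappa * w)**2
                 / (4 * k**2 * kappa**2))
T_approx = 16 * k**2 * kappa**2 / (k**2 + kappa**2)**2 * np.exp(-2 * kappa * w)

print(f"kappa   = {kappa * 1e-9:.1f} 1/nm")
print(f"T exact  = {T_exact:.3e}")
print(f"T approx = {T_approx:.3e}")   # nearly identical when kappa*w >> 1
```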
In STM experiments, typical barrier height is of the order of the material's surface work function W, which for most metals has a value between 4 and 6 eV. The work function is the minimum energy needed to bring an electron from an occupied level, the highest of which is the Fermi level (for metals at T = 0 K), to vacuum level. The electrons can tunnel between two metals only from occupied states on one side into the unoccupied states of the other side of the barrier. Without bias, Fermi energies are flush, and there is no tunneling. Bias shifts electron energies in one of the electrodes higher, and those electrons that have no match at the same energy on the other side will tunnel. In experiments, bias voltages of a fraction of 1 V are used, so κ is of the order of 10 to 12 nm−1, while w is a few tenths of a nanometre. The barrier is strongly attenuating. The expression for the transmission probability reduces to
|t|² ≈ 16k²κ² / (k² + κ²)² · e^{−2κw}.
The tunneling current from a single level is therefore proportional to this factor,
It ∝ |t|² ∝ e^{−2κw},
where both wave vectors depend on the level's energy E:
k = √(2mₑE)/ħ and κ = √(2mₑ(U − E))/ħ.
Tunneling current is exponentially dependent on the separation of the sample and the tip, typically reducing by an order of magnitude when the separation is increased by 1 Å (0.1 nm). Because of this, even when tunneling occurs from a non-ideally sharp tip, the dominant contribution to the current is from its most protruding atom or orbital.
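A back-of-the-envelope check of that order-of-magnitude figure, assuming an effective barrier equal to a typical 4.5 eV work function (the specific value is an assumption for illustration):

```python
import numpy as np

# Check the "order of magnitude per angstrom" claim for an assumed 4.5 eV barrier.
hbar, m_e, eV = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19

kappa = np.sqrt(2 * m_e * 4.5 * eV) / hbar        # ~1.1e10 1/m
ratio = np.exp(-2 * kappa * 1e-10)                # current ratio for +1 angstrom

print(f"kappa = {kappa * 1e-9:.1f} 1/nm")
print(f"I(w + 1 angstrom) / I(w) = {ratio:.2f}")  # roughly 0.1, i.e. one decade
```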
Tunneling between two conductors
As a result of the restriction that the tunneling from an occupied energy level on one side of the barrier requires an empty level of the same energy on the other side of the barrier, tunneling occurs mainly with electrons near the Fermi level. The tunneling current can be related to the density of available or filled states in the sample. The current due to an applied voltage V (assume tunneling occurs from the sample to the tip) depends on two factors: 1) the number of electrons between the Fermi level EF and EF − eV in the sample, and 2) the number among them which have corresponding free states to tunnel into on the other side of the barrier at the tip. The higher the density of available states in the tunneling region the greater the tunneling current. By convention, a positive V means that electrons in the tip tunnel into empty states in the sample; for a negative bias, electrons tunnel out of occupied states in the sample into the tip.
For small biases and temperatures near absolute zero, the number of electrons in a given volume (the electron concentration) that are available for tunneling is the product of the density of the electronic states ρ(EF) and the energy interval between the two Fermi levels, eV. Half of these electrons will be travelling away from the barrier. The other half will represent the electric current impinging on the barrier, which is given by the product of the electron concentration, charge, and velocity v (Ii = nev),
Ii = ½ e²v ρ(EF) V.
The tunneling electric current will be a small fraction of the impinging current. The proportion is determined by the transmission probability T, so
It = ½ e²v ρ(EF) V T.
In the simplest model of a rectangular potential barrier the transmission probability coefficient T equals |t|2.
Bardeen's formalism
A model that is based on more realistic wave functions for the two electrodes was devised by John Bardeen in a study of the metal–insulator–metal junction. His model takes two separate orthonormal sets of wave functions for the two electrodes and examines their time evolution as the systems are put close together. Bardeen's novel method, ingenious in itself, solves a time-dependent perturbative problem in which the perturbation emerges from the interaction of the two subsystems rather than from an external potential, as in standard Rayleigh–Schrödinger perturbation theory.
Each of the wave functions for the electrons of the sample (S) and the tip (T) decays into the vacuum after hitting the surface potential barrier, which is roughly of the size of the surface work function. The wave functions are the solutions of two separate Schrödinger's equations for electrons in the potentials US and UT. When the time dependence of the states of known energies Eμ and Eν is factored out, the wave functions have the following general form:
ψμ(t) = ψμ e^{−iEμt/ħ} for the sample and ψν(t) = ψν e^{−iEνt/ħ} for the tip.
If the two systems are put closer together, but are still separated by a thin vacuum region, the potential acting on an electron in the combined system is UT + US. Here, each of the potentials is spatially limited to its own side of the barrier. Only because the tail of a wave function of one electrode is in the range of the potential of the other, there is a finite probability for any state to evolve over time into the states of the other electrode. The future of the sample's state μ can be written as a linear combination with time-dependent coefficients of ψμ and all ψν:
ψ(t) = ψμ e^{−iEμt/ħ} + Σν cν(t) ψν e^{−iEνt/ħ},
with the initial condition cν(0) = 0. When the new wave function is inserted into the Schrödinger's equation for the potential UT + US, the obtained equation is projected onto each separate ψν (that is, the equation is multiplied by ψν* and integrated over the whole volume) to single out the coefficients cν. All ψμ are taken to be nearly orthogonal to all ψν (their overlap is a small fraction of the total wave functions), and only first-order quantities are retained. Consequently, the time evolution of the coefficients is given by
dcν/dt = −(i/ħ) ∫ ψμ UT ψν* dV · e^{−i(Eμ−Eν)t/ħ}.
Because the potential UT is zero at the distance of a few atomic diameters away from the surface of the electrode, the integration over z can be done from a point z0 somewhere inside the barrier and into the volume of the tip (z > z0).
If the tunneling matrix element is defined as
Mμν = ∫(z>z0) ψμ UT ψν* dV,
the probability of the sample's state μ evolving in time t into the state of the tip ν is
|cν(t)|² = |Mμν|² · 4 sin²[(Eν − Eμ)t / 2ħ] / (Eν − Eμ)².
In a system with many electrons impinging on the barrier, this probability will give the proportion of those that successfully tunnel. If at a time t this fraction was |cν(t)|², at a later time t + dt the total fraction of |cν(t + dt)|² would have tunneled. The current of tunneling electrons at each instance is therefore proportional to |cν(t + dt)|² − |cν(t)|² divided by dt, which is the time derivative of |cν(t)|²,
d|cν(t)|²/dt = (2/ħ) |Mμν|² · sin[(Eν − Eμ)t/ħ] / (Eν − Eμ).
The time scale of the measurement in STM is many orders of magnitude larger than the typical femtosecond time scale of electron processes in materials, and t/ħ is large. The fraction part of the formula is a fast-oscillating function of (Eν − Eμ) that rapidly decays away from the central peak, where Eν = Eμ. In other words, the most probable tunneling process, by far, is the elastic one, in which the electron's energy is conserved. The fraction, as written above, is a representation of the delta function, so
d|cν(t)|²/dt = (2π/ħ) |Mμν|² δ(Eν − Eμ).
Solid-state systems are commonly described in terms of continuous rather than discrete energy levels. The δ(Eν − Eμ) term can then be thought of as supplying the density of states of the tip at energy Eμ, giving
Γμ→ν = (2π/ħ) |Mμν|² ρT(Eμ).
The number of energy levels in the sample between the energies ε and ε + dε is ρS(ε) dε. When occupied, these levels are spin-degenerate (except in a few special classes of materials) and contain charge 2e·ρS(ε) dε of either spin. With the sample biased to voltage V, tunneling can occur only between states whose occupancies, given for each electrode by the Fermi–Dirac distribution f, are not the same, that is, when either one or the other is occupied, but not both. That will be for all energies for which f(EF − eV + ε) − f(EF + ε) is not zero. For example, an electron will tunnel from energy level EF − eV in the sample into energy level EF in the tip (ε = 0), an electron at EF in the sample will find unoccupied states in the tip at EF + eV (ε = eV), and so will be for all energies in between. The tunneling current is therefore the sum of little contributions over all these energies of the product of three factors: 2e·ρS(EF − eV + ε) representing available electrons, f(EF − eV + ε) − f(EF + ε) for those that are allowed to tunnel, and the probability factor (2π/ħ)|Mμν|² ρT(EF + ε) for those that will actually tunnel:
I = (4πe/ħ) ∫−∞..+∞ [f(EF − eV + ε) − f(EF + ε)] ρS(EF − eV + ε) ρT(EF + ε) |Mμν|² dε.
Typical experiments are run at a liquid-helium temperature (around 4 K), at which the Fermi-level cut-off of the electron population is less than a millielectronvolt wide. The allowed energies are only those between the two step-like Fermi levels, and the integral becomes
I = (4πe/ħ) ∫0..eV ρS(EF − eV + ε) ρT(EF + ε) |Mμν|² dε.
When the bias is small, it is reasonable to assume that the electron wave functions and, consequently, the tunneling matrix element do not change significantly in the narrow range of energies. Then the tunneling current is simply the convolution of the densities of states of the sample surface and the tip:
I ∝ ∫0..eV ρS(EF − eV + ε) ρT(EF + ε) dε.
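The Python sketch below illustrates the convolution numerically with invented densities of states (an idealized flat tip and a sample with a single Gaussian feature 0.3 eV below its Fermi level). It only demonstrates the mechanics of the formula, not any real material, and follows the sign convention of the derivation above, in which electrons tunnel out of occupied sample states into the tip.

```python
import numpy as np

# Illustration of I(V) proportional to the integral of
# rho_S(E_F - eV + eps) * rho_T(E_F + eps) over 0 < eps < eV.
# Energies are in eV, the current is in arbitrary units, and both
# densities of states are invented for demonstration purposes.

E_F = 0.0

def rho_tip(E):
    return np.ones_like(E)                       # idealized flat tip DOS

def rho_sample(E):
    return 1.0 + 2.0 * np.exp(-((E - (E_F - 0.3)) / 0.05) ** 2)

def current(V, n=2000):
    eps = np.linspace(0.0, V, n)
    integrand = rho_sample(E_F - V + eps) * rho_tip(E_F + eps)
    return np.mean(integrand) * V                # simple numerical integral

biases = np.linspace(0.01, 0.6, 60)              # sample bias in volts
I = np.array([current(V) for V in biases])
dIdV = np.gradient(I, biases)                    # tracks rho_S(E_F - eV) here

print(f"dI/dV is largest near V = {biases[np.argmax(dIdV)]:.2f} V")
# The peak sits near 0.3 V, mirroring the sample DOS feature 0.3 eV below E_F
# (in this convention, occupied sample states are being probed).
```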
How the tunneling current depends on distance between the two electrodes is contained in the tunneling matrix element
Mμν = ∫(z>z0) ψμ UT ψν* dV.
This formula can be transformed so that no explicit dependence on the potential remains. First, the UT ψν* part is taken out from the Schrödinger equation for the tip, and the elastic tunneling condition Eν = Eμ is used, so that
Mμν = ∫(z>z0) ψμ (Eμ + (ħ²/2mₑ) ∇²) ψν* dV.
Now Eμψμ is present in the Schrödinger equation for the sample and equals the kinetic plus the potential operator acting on ψμ. However, the potential part containing US is on the tip side of the barrier nearly zero. What remains,
Mμν = (ħ²/2mₑ) ∫(z>z0) (ψμ ∇²ψν* − ψν* ∇²ψμ) dV,
can be integrated over z because the integrand in the parentheses equals
∂/∂z (ψμ ∂ψν*/∂z − ψν* ∂ψμ/∂z).
Bardeen's tunneling matrix element is an integral of the wave functions and their gradients over a surface separating the two planar electrodes:
Mμν = (ħ²/2mₑ) ∫(z=z0) (ψν* ∂ψμ/∂z − ψμ ∂ψν*/∂z) dx dy.
The exponential dependence of the tunneling current on the separation of the electrodes comes from the very wave functions that leak through the potential step at the surface and exhibit exponential decay into the classically forbidden region outside of the material.
The tunneling matrix elements show appreciable energy dependence, which is such that tunneling from the upper end of the eV interval is nearly an order of magnitude more likely than tunneling from the states at its bottom. When the sample is biased positively, its unoccupied levels are probed as if the density of states of the tip is concentrated at its Fermi level. Conversely, when the sample is biased negatively, its occupied electronic states are probed, but the spectrum of the electronic states of the tip dominates. In this case it is important that the density of states of the tip is as flat as possible.
The results identical to Bardeen's can be obtained by considering adiabatic approach of the two electrodes and using the standard time-dependent perturbation theory. This leads to Fermi's golden rule for the transition probability in the form given above.
Bardeen's model is for tunneling between two planar electrodes and does not explain scanning tunneling microscope's lateral resolution. Tersoff and Hamann used Bardeen's theory and modeled the tip as a structureless geometric point. This helped them disentangle the properties of the tip—which are hard to model—from the properties of the sample surface. The main result was that the tunneling current is proportional to the local density of states of the sample at the Fermi level taken at the position of the center of curvature of a spherically symmetric tip (s-wave tip model). With such a simplification, their model proved valuable for interpreting images of surface features bigger than a nanometre, even though it predicted atomic-scale corrugations of less than a picometre. These are well below the microscope's detection limit and below the values actually observed in experiments.
In sub-nanometre-resolution experiments, the convolution of the tip and sample surface states will always be important, to the extent of the apparent inversion of the atomic corrugations that may be observed within the same scan. Such effects can only be explained by modeling of the surface and tip electronic states and the ways the two electrodes interact from first principles.
Gallery of STM images
Early invention
An earlier invention similar to Binnig and Rohrer's, the Topografiner of R. Young, J. Ward, and F. Scire from the NIST, relied on field emission. However, Young is credited by the Nobel Committee as the person who realized that it should be possible to achieve better resolution by using the tunnel effect.
Other related techniques
Many other microscopy techniques have been developed based upon STM. These include photon scanning microscopy (PSTM), which uses an optical tip to tunnel photons; scanning tunneling potentiometry (STP), which measures electric potential across a surface; spin-polarized scanning tunneling microscopy (SPSTM), which uses a ferromagnetic tip to tunnel spin-polarized electrons into a magnetic sample; multi-tip scanning tunneling microscopy, which enables electrical measurements to be performed at the nanoscale; and atomic force microscopy (AFM), in which the force caused by interaction between the tip and sample is measured.
STM can be used to manipulate atoms and change the topography of the sample. This is attractive for several reasons. Firstly the STM has an atomically precise positioning system, which enables very accurate atomic-scale manipulation. Furthermore, after the surface is modified by the tip, the same instrument can be used to image the resulting structures. IBM researchers famously developed a way to manipulate xenon atoms adsorbed on a nickel surface. This technique has been used to create electron corrals with a small number of adsorbed atoms and observe Friedel oscillations in the electron density on the surface of the substrate. Aside from modifying the actual sample surface, one can also use the STM to tunnel electrons into a layer of electron-beam photoresist on the sample, in order to do lithography. This has the advantage of offering more control of the exposure than traditional electron-beam lithography. Another practical application of STM is atomic deposition of metals (gold, silver, tungsten, etc.) with any desired (pre-programmed) pattern, which can be used as contacts to nanodevices or as nanodevices themselves.
Stem cell
In multicellular organisms, stem cells are undifferentiated or partially differentiated cells that can change into various types of cells and proliferate indefinitely to produce more of the same stem cell. They are the earliest type of cell in a cell lineage. They are found in both embryonic and adult organisms, but they have slightly different properties in each. They are usually distinguished from progenitor cells, which cannot divide indefinitely, and precursor or blast cells, which are usually committed to differentiating into one cell type.
In mammals, roughly 50 to 150 cells make up the inner cell mass during the blastocyst stage of embryonic development, around days 5–14. These have stem-cell capability. In vivo, they eventually differentiate into all of the body's cell types (making them pluripotent). This process starts with the differentiation into the three germ layers – the ectoderm, mesoderm and endoderm – at the gastrulation stage. However, when they are isolated and cultured in vitro, they can be kept in the stem-cell stage and are known as embryonic stem cells (ESCs).
Adult stem cells are found in a few select locations in the body, known as niches, such as those in the bone marrow or gonads. They exist to replenish rapidly lost cell types and are multipotent or unipotent, meaning they only differentiate into a few cell types or one type of cell. In mammals, they include, among others, hematopoietic stem cells, which replenish blood and immune cells, basal cells, which maintain the skin epithelium, and mesenchymal stem cells, which maintain bone, cartilage, muscle and fat cells. Adult stem cells are a small minority of cells; they are vastly outnumbered by the progenitor cells and terminally differentiated cells that they differentiate into.
Research into stem cells grew out of findings by Canadian biologists Ernest McCulloch, James Till and Andrew J. Becker at the University of Toronto and the Ontario Cancer Institute in the 1960s. To date, the only established medical therapy using stem cells is hematopoietic stem cell transplantation, first performed in 1958 by French oncologist Georges Mathé. Since 1998, however, it has been possible to culture and differentiate human embryonic stem cells (in stem-cell lines). The process of isolating these cells has been controversial, because it typically results in the destruction of the embryo. Sources for isolating ESCs have been restricted in some European countries and Canada, but others such as the UK and China have promoted the research. Somatic cell nuclear transfer is a cloning method that can be used to create a cloned embryo for the use of its embryonic stem cells in stem cell therapy. In 2006, a Japanese team led by Shinya Yamanaka discovered a method to convert mature body cells back into stem cells. These were termed induced pluripotent stem cells (iPSCs).
History
The term stem cell was coined by Theodor Boveri and Valentin Haecker in late 19th century. Pioneering works in theory of blood stem cell were conducted in the beginning of 20th century by Artur Pappenheim, Alexander A. Maximow, Franz Ernst Christian Neumann.
The key properties of a stem cell were first defined by Ernest McCulloch and James Till at the University of Toronto and the Ontario Cancer Institute in the early 1960s. They discovered the blood-forming stem cell, the hematopoietic stem cell (HSC), through their pioneering work in mice. McCulloch and Till began a series of experiments in which bone marrow cells were injected into irradiated mice. They observed lumps in the spleens of the mice that were linearly proportional to the number of bone marrow cells injected. They hypothesized that each lump (colony) was a clone arising from a single marrow cell (stem cell). In subsequent work, McCulloch and Till, joined by graduate student Andrew John Becker and senior scientist Louis Siminovitch, confirmed that each lump did in fact arise from a single cell. Their results were published in Nature in 1963. In that same year, Siminovitch was a lead investigator for studies that found colony-forming cells were capable of self-renewal, which is a key defining property of stem cells that Till and McCulloch had theorized.
The first therapy using stem cells was a bone marrow transplant performed by French oncologist Georges Mathé in 1958 on five workers at the Vinča Nuclear Institute in Yugoslavia who had been affected by a criticality accident. The workers all survived.
In 1981, embryonic stem (ES) cells were first isolated and successfully cultured using mouse blastocysts by British biologists Martin Evans and Matthew Kaufman. This allowed the formation of murine genetic models, a system in which the genes of mice are deleted or altered in order to study their function in pathology. In 1991, a process that allowed the human stem cell to be isolated was patented by Ann Tsukamoto. By 1998, human embryonic stem cells were first isolated by American biologist James Thomson, which made it possible to have new transplantation methods or various cell types for testing new treatments. In 2006, Shinya Yamanaka's team in Kyoto, Japan converted fibroblasts into pluripotent stem cells by modifying the expression of only four genes. The feat represents the origin of induced pluripotent stem cells, known as iPS cells.
In 2011, a female maned wolf, run over by a truck, underwent stem cell treatment, this being the first recorded case of the use of stem cells to heal injuries in a wild animal.
Properties
The classical definition of a stem cell requires that it possesses two properties:
Self-renewal: the ability to go through numerous cycles of cell growth and cell division, known as cell proliferation, while maintaining the undifferentiated state.
Potency: the capacity to differentiate into specialized cell types. In the strictest sense, this requires stem cells to be either totipotent or pluripotent—to be able to give rise to any mature cell type, although multipotent or unipotent progenitor cells are sometimes referred to as stem cells. Apart from this, stem cell function is thought to be regulated in a feedback mechanism.
Self-renewal
Two mechanisms ensure that a stem cell population is maintained (does not shrink in size):
1. Asymmetric cell division: a stem cell divides into one mother cell, which is identical to the original stem cell, and another daughter cell, which is differentiated.
2. Stochastic differentiation: when one stem cell grows and divides into two differentiated daughter cells, another stem cell undergoes mitosis and produces two stem cells identical to the original.
When a stem cell self-renews, it divides and does not disrupt the undifferentiated state. This self-renewal demands control of the cell cycle as well as upkeep of multipotency or pluripotency, which all depends on the stem cell.
Stem cells use telomerase, a protein that restores telomeres, to protect their DNA and extend their cell division limit (the Hayflick limit).
Potency meaning
Potency specifies the differentiation potential (the potential to differentiate into different cell types) of the stem cell.
Totipotent (also known as omnipotent) stem cells can differentiate into embryonic and extraembryonic cell types. Such cells can construct a complete, viable organism. These cells are produced from the fusion of an egg and sperm cell. Cells produced by the first few divisions of the fertilized egg are also totipotent.
Pluripotent stem cells are the descendants of totipotent cells and can differentiate into nearly all cells, i.e. cells derived from any of the three germ layers.
Multipotent stem cells can differentiate into a number of cell types, but only those of a closely related family of cells.
Oligopotent stem cells can differentiate into only a few cell types, such as lymphoid or myeloid stem cells.
Unipotent cells can produce only one cell type, their own, but have the property of self-renewal, which distinguishes them from non-stem cells.
Identification
In practice, stem cells are identified by whether they can regenerate tissue. For example, the defining test for bone marrow or hematopoietic stem cells (HSCs) is the ability to transplant the cells and save an individual without HSCs. This demonstrates that the cells can produce new blood cells over a long term. It should also be possible to isolate stem cells from the transplanted individual, which can themselves be transplanted into another individual without HSCs, demonstrating that the stem cell was able to self-renew.
Properties of stem cells can be illustrated in vitro, using methods such as clonogenic assays, in which single cells are assessed for their ability to differentiate and self-renew. Stem cells can also be isolated by their possession of a distinctive set of cell surface markers. However, in vitro culture conditions can alter the behavior of cells, making it unclear whether the cells will behave in a similar manner in vivo. There is considerable debate as to whether some proposed adult cell populations are truly stem cells.
Embryonic
Embryonic stem cells (ESCs) are the cells of the inner cell mass of a blastocyst, formed prior to implantation in the uterus. In human embryonic development the blastocyst stage is reached 4–5 days after fertilization, at which time it consists of 50–150 cells. ESCs are pluripotent and give rise during development to all derivatives of the three germ layers: ectoderm, endoderm and mesoderm. In other words, they can develop into each of the more than 200 cell types of the adult body when given sufficient and necessary stimulation for a specific cell type. They do not contribute to the extraembryonic membranes or to the placenta.
During embryonic development the cells of the inner cell mass continuously divide and become more specialized. For example, a portion of the ectoderm in the dorsal part of the embryo specializes as 'neurectoderm', which will become the future central nervous system (CNS). Later in development, neurulation causes the neurectoderm to form the neural tube. At the neural tube stage, the anterior portion undergoes encephalization to generate or 'pattern' the basic form of the brain. At this stage of development, the principal cell type of the CNS is considered a neural stem cell.
The neural stem cells self-renew and at some point transition into radial glial progenitor cells (RGPs). Early-formed RGPs self-renew by symmetrical division to form a reservoir group of progenitor cells. These cells transition to a neurogenic state and start to divide asymmetrically to produce a large diversity of many different neuron types, each with unique gene expression, morphological, and functional characteristics. The process of generating neurons from radial glial cells is called neurogenesis. The radial glial cell has a distinctive bipolar morphology with highly elongated processes spanning the thickness of the neural tube wall. It shares some glial characteristics, most notably the expression of glial fibrillary acidic protein (GFAP). The radial glial cell is the primary neural stem cell of the developing vertebrate CNS, and its cell body resides in the ventricular zone, adjacent to the developing ventricular system. Neural stem cells are committed to the neural lineages (neurons, astrocytes, and oligodendrocytes), and thus their potency is restricted.
Nearly all research to date has made use of mouse embryonic stem cells (mES) or human embryonic stem cells (hES) derived from the early inner cell mass. Both have the essential stem cell characteristics, yet they require very different environments in order to maintain an undifferentiated state. Mouse ES cells are grown on a layer of gelatin as an extracellular matrix (for support) and require the presence of leukemia inhibitory factor (LIF) in serum media. A drug cocktail containing inhibitors to GSK3B and the MAPK/ERK pathway, called 2i, has also been shown to maintain pluripotency in stem cell culture. Human ESCs are grown on a feeder layer of mouse embryonic fibroblasts and require the presence of basic fibroblast growth factor (bFGF or FGF-2). Without optimal culture conditions or genetic manipulation, embryonic stem cells will rapidly differentiate.
A human embryonic stem cell is also defined by the expression of several transcription factors and cell surface proteins. The transcription factors Oct-4, Nanog, and Sox2 form the core regulatory network that ensures the suppression of genes that lead to differentiation and the maintenance of pluripotency. The cell surface antigens most commonly used to identify hES cells are the glycolipids stage specific embryonic antigen 3 and 4, and the keratan sulfate antigens Tra-1-60 and Tra-1-81. The molecular definition of a stem cell includes many more proteins and continues to be a topic of research.
By using human embryonic stem cells to produce specialized cells like nerve cells or heart cells in the lab, scientists can gain access to adult human cells without taking tissue from patients. They can then study these specialized adult cells in detail to try to discern complications of diseases, or to study cell reactions to proposed new drugs.
Because of their combined abilities of unlimited expansion and pluripotency, embryonic stem cells remain a theoretically potential source for regenerative medicine and tissue replacement after injury or disease. However, there are currently no approved treatments using ES cells. The first human trial was approved by the US Food and Drug Administration in January 2009. However, the human trial was not initiated until October 13, 2010 in Atlanta for spinal cord injury research. On November 14, 2011 the company conducting the trial (Geron Corporation) announced that it will discontinue further development of its stem cell programs. Differentiating ES cells into usable cells while avoiding transplant rejection are just a few of the hurdles that embryonic stem cell researchers still face. Embryonic stem cells, being pluripotent, require specific signals for correct differentiation – if injected directly into another body, ES cells will differentiate into many different types of cells, causing a teratoma. Ethical considerations regarding the use of unborn human tissue are another reason for the lack of approved treatments using embryonic stem cells. Many nations currently have moratoria or limitations on either human ES cell research or the production of new human ES cell lines.
Mesenchymal stem cells
Mesenchymal stem cells (MSCs), or mesenchymal stromal cells, also known as medicinal signaling cells, are multipotent cells that can be found in adult tissues, for example in the muscle, liver, bone marrow and adipose tissue. Mesenchymal stem cells usually function as structural support in the various organs mentioned above and control the movement of substances. MSCs can differentiate into numerous cell categories, such as adipocytes, osteocytes, and chondrocytes, derived from the mesodermal layer; the mesoderm gives rise to the body's skeletal elements, such as cartilage and bone. The term "meso" means middle and originates from the Greek, signifying that mesenchymal cells are able to range and travel in early embryonic growth between the ectodermal and endodermal layers. This mechanism helps with space-filling and is thus key for repairing wounds in adult organisms that involve mesenchymal cells in the dermis (skin), bone, or muscle.
Mesenchymal stem cells are known to be essential for regenerative medicine, and they are broadly studied in clinical trials. They are easily isolated and obtained in high yield and have high plasticity, which enables them to modulate inflammation and to encourage cell growth, cell differentiation, and tissue restoration through immunomodulation and immunosuppression. MSCs are derived from the bone marrow, which requires an aggressive procedure when it comes to isolating cells of sufficient quantity and quality, and the yield varies with the age of the donor. When comparing the rates of MSCs in bone marrow aspirates and bone marrow stroma, the aspirates tend to have lower rates of MSCs than the stroma. MSCs are known to be heterogeneous, and they express a high level of pluripotent markers when compared to other types of stem cells, such as embryonic stem cells. Injection of MSCs leads to wound healing primarily through stimulation of angiogenesis.
Cell cycle control
Embryonic stem cells (ESCs) have the ability to divide indefinitely while keeping their pluripotency, which is made possible through specialized mechanisms of cell cycle control. Compared to proliferating somatic cells, ESCs have unique cell cycle characteristics—such as rapid cell division caused by a shortened G1 phase, an absent G0 phase, and modifications in cell cycle checkpoints—which leaves the cells mostly in S phase at any given time. ESCs' rapid division is demonstrated by their short doubling time, which ranges from 8 to 10 hours, whereas somatic cells have a doubling time of approximately 20 hours or longer. As cells differentiate, these properties change: G1 and G2 phases lengthen, leading to longer cell division cycles. This suggests that a specific cell cycle structure may contribute to the establishment of pluripotency.
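The difference those doubling times make compounds quickly; a short calculation (assuming simple exponential growth, purely for illustration) shows the expansion over 48 hours:

```python
# Quick arithmetic behind the doubling-time comparison quoted above
# (8-10 h for ESCs, ~20 h for somatic cells), assuming exponential growth.
for label, t_double in [("ESC, 8 h", 8), ("ESC, 10 h", 10), ("somatic, 20 h", 20)]:
    factor = 2 ** (48 / t_double)
    print(f"{label:>14}: x{factor:.0f} in 48 h")
# ESCs expand roughly 30- to 60-fold in two days, somatic cells only about 5-fold.
```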
Particularly because G1 phase is the phase in which cells have increased sensitivity to differentiation, shortened G1 is one of the key characteristics of ESCs and plays an important role in maintaining undifferentiated phenotype. Although the exact molecular mechanism remains only partially understood, several studies have shed light on how ESCs progress through G1—and potentially other phases—so rapidly.
The cell cycle is regulated by complex network of cyclins, cyclin-dependent kinases (Cdk), cyclin-dependent kinase inhibitors (Cdkn), pocket proteins of the retinoblastoma (Rb) family, and other accessory factors. Foundational insight into the distinctive regulation of ESC cell cycle was gained by studies on mouse ESCs (mESCs). mESCs showed a cell cycle with highly abbreviated G1 phase, which enabled cells to rapidly alternate between M phase and S phase. In a somatic cell cycle, oscillatory activity of Cyclin-Cdk complexes is observed in sequential action, which controls crucial regulators of the cell cycle to induce unidirectional transitions between phases: Cyclin D and Cdk4/6 are active in the G1 phase, while Cyclin E and Cdk2 are active during the late G1 phase and S phase; and Cyclin A and Cdk2 are active in the S phase and G2, while Cyclin B and Cdk1 are active in G2 and M phase. However, in mESCs, this typically ordered and oscillatory activity of Cyclin-Cdk complexes is absent. Rather, the Cyclin E/Cdk2 complex is constitutively active throughout the cycle, keeping retinoblastoma protein (pRb) hyperphosphorylated and thus inactive. This allows for direct transition from M phase to the late G1 phase, leading to absence of D-type cyclins and therefore a shortened G1 phase. Cdk2 activity is crucial for both cell cycle regulation and cell-fate decisions in mESCs; downregulation of Cdk2 activity prolongs G1 phase progression, establishes a somatic cell-like cell cycle, and induces expression of differentiation markers.
In human ESCs (hESCs), the duration of G1 is dramatically shortened. This has been attributed to high mRNA levels of G1-related Cyclin D2 and Cdk4 genes and low levels of cell cycle regulatory proteins that inhibit cell cycle progression at G1, such as p21CipP1, p27Kip1, and p57Kip2. Furthermore, regulators of Cdk4 and Cdk6 activity, such as members of the Ink family of inhibitors (p15, p16, p18, and p19), are expressed at low levels or not at all. Thus, similar to mESCs, hESCs show high Cdk activity, with Cdk2 exhibiting the highest kinase activity. Also similar to mESCs, hESCs demonstrate the importance of Cdk2 in G1 phase regulation by showing that the G1 to S transition is delayed when Cdk2 activity is inhibited and that G1 arrest occurs when Cdk2 is knocked down. However, unlike mESCs, hESCs have a functional G1 phase. hESCs show that the activities of Cyclin E/Cdk2 and Cyclin A/Cdk2 complexes are cell cycle-dependent and the Rb checkpoint in G1 is functional.
ESCs are also characterized by G1 checkpoint non-functionality, even though the G1 checkpoint is crucial for maintaining genomic stability. In response to DNA damage, ESCs do not stop in G1 to repair DNA damages but instead, depend on S and G2/M checkpoints or undergo apoptosis. The absence of G1 checkpoint in ESCs allows for the removal of cells with damaged DNA, hence avoiding potential mutations from inaccurate DNA repair. Consistent with this idea, ESCs are hypersensitive to DNA damage to minimize mutations passed onto the next generation.
Fetal
The primitive stem cells located in the organs of fetuses are referred to as fetal stem cells.
There are two types of fetal stem cells:
Fetal proper stem cells come from the tissue of the fetus proper and are generally obtained after an abortion. These stem cells are not immortal but have a high level of division and are multipotent.
Extraembryonic fetal stem cells come from extraembryonic membranes, and are generally not distinguished from adult stem cells. These stem cells are acquired after birth, they are not immortal but have a high level of cell division, and are pluripotent.
Adult
Adult stem cells, also called somatic (from Greek σωματικóς, "of the body") stem cells, are stem cells which maintain and repair the tissue in which they are found.
There are three known accessible sources of autologous adult stem cells in humans:
Bone marrow, which requires extraction by harvesting, usually from pelvic bones via surgery.
Adipose tissue (fat cells), which requires extraction by liposuction.
Blood, which requires extraction through apheresis, wherein blood is drawn from the donor (similar to a blood donation), and passed through a machine that extracts the stem cells and returns other portions of the blood to the donor.
Stem cells can also be taken from umbilical cord blood just after birth. Of all stem cell types, autologous harvesting involves the least risk. By definition, autologous cells are obtained from one's own body, just as one may bank their own blood for elective surgical procedures.
Pluripotent adult stem cells are rare and generally small in number, but they can be found in umbilical cord blood and other tissues. Bone marrow is a rich source of adult stem cells, which have been used in treating several conditions including liver cirrhosis, chronic limb ischemia and endstage heart failure. The quantity of bone marrow stem cells declines with age and is greater in males than females during reproductive years. Much adult stem cell research to date has aimed to characterize their potency and self-renewal capabilities. DNA damage accumulates with age in both stem cells and the cells that comprise the stem cell environment. This accumulation is considered to be responsible, at least in part, for increasing stem cell dysfunction with aging (see DNA damage theory of aging).
Most adult stem cells are lineage-restricted (multipotent) and are generally referred to by their tissue origin (mesenchymal stem cell, adipose-derived stem cell, endothelial stem cell, dental pulp stem cell, etc.). Muse cells (multi-lineage differentiating stress enduring cells) are a recently discovered pluripotent stem cell type found in multiple adult tissues, including adipose, dermal fibroblasts, and bone marrow. While rare, muse cells are identifiable by their expression of SSEA-3, a marker for undifferentiated stem cells, and general mesenchymal stem cell markers such as CD90 and CD105. When subjected to single cell suspension culture, the cells will generate clusters that are similar to embryoid bodies in morphology as well as gene expression, including canonical pluripotency markers Oct4, Sox2, and Nanog.
Adult stem cell treatments have been successfully used for many years to treat leukemia and related bone/blood cancers through bone marrow transplants. Adult stem cells are also used in veterinary medicine to treat tendon and ligament injuries in horses.
The use of adult stem cells in research and therapy is not as controversial as the use of embryonic stem cells, because the production of adult stem cells does not require the destruction of an embryo. Additionally, in instances where adult stem cells are obtained from the intended recipient (an autograft), the risk of rejection is essentially non-existent. Consequently, more US government funding is being provided for adult stem cell research.
With the increasing demand of human adult stem cells for both research and clinical purposes (typically 1–5 million cells per kg of body weight are required per treatment) it becomes of utmost importance to bridge the gap between the need to expand the cells in vitro and the capability of harnessing the factors underlying replicative senescence. Adult stem cells are known to have a limited lifespan in vitro and to enter replicative senescence almost undetectably upon starting in vitro culturing.
Hematopoietic stem cells
Hematopoietic stem cells (HSCs) are vulnerable to DNA damage and mutations that increase with age. This vulnerability may explain the increased risk of slow growing blood cancers (myeloid malignancies) in the elderly. Several factors appear to influence HSC aging including responses to the production of reactive oxygen species that may cause DNA damage and genetic mutations as well as altered epigenetic profiling.
Amniotic
Also called perinatal stem cells, these multipotent stem cells are found in amniotic fluid and umbilical cord blood. These stem cells are very active, expand extensively without feeders and are not tumorigenic. Amniotic stem cells are multipotent and can differentiate into cells of adipogenic, osteogenic, myogenic, endothelial, hepatic and also neuronal lines.
Amniotic stem cells are a topic of active research.
Use of stem cells from amniotic fluid overcomes the ethical objections to using human embryos as a source of cells. Roman Catholic teaching forbids the use of embryonic stem cells in experimentation; accordingly, the Vatican newspaper "Osservatore Romano" called amniotic stem cells "the future of medicine".
It is possible to collect amniotic stem cells for donors or for autologous use: the first US amniotic stem cells bank was opened in 2009 in Medford, MA, by Biocell Center Corporation and collaborates with various hospitals and universities all over the world.
Induced pluripotent
Adult stem cells have limitations with their potency; unlike embryonic stem cells (ESCs), they are not able to differentiate into cells from all three germ layers. As such, they are deemed multipotent.
However, reprogramming allows for the creation of pluripotent cells, induced pluripotent stem cells (iPSCs), from adult cells. These are not adult stem cells, but somatic cells (e.g. epithelial cells) reprogrammed to give rise to cells with pluripotent capabilities. Using genetic reprogramming with protein transcription factors, pluripotent stem cells with ESC-like capabilities have been derived. The first demonstration of induced pluripotent stem cells was conducted by Shinya Yamanaka and his colleagues at Kyoto University. They used the transcription factors Oct3/4, Sox2, c-Myc, and Klf4 to reprogram mouse fibroblast cells into pluripotent cells. Subsequent work used these factors to induce pluripotency in human fibroblast cells. Junying Yu, James Thomson, and their colleagues at the University of Wisconsin–Madison used a different set of factors, Oct4, Sox2, Nanog and Lin28, and carried out their experiments using cells from human foreskin. However, they were able to replicate Yamanaka's finding that inducing pluripotency in human cells was possible.
Induced pluripotent stem cells differ from embryonic stem cells. They share many similar properties, such as pluripotency and differentiation potential, the expression of pluripotency genes, epigenetic patterns, embryoid body and teratoma formation, and viable chimera formation, but there are many differences within these properties. The chromatin of iPSCs appears to be more "closed" or methylated than that of ESCs. Similarly, the gene expression pattern differs between ESCs and iPSCs, and even between iPSCs sourced from different origins. There are thus questions about the "completeness" of reprogramming and the somatic memory of induced pluripotent stem cells. Despite this, inducing somatic cells to be pluripotent appears to be viable.
As a result of the success of these experiments, Ian Wilmut, who helped create the first cloned animal Dolly the Sheep, has announced that he will abandon somatic cell nuclear transfer as an avenue of research.
The ability to induce pluripotency benefits developments in tissue engineering. By providing a suitable scaffold and microenvironment, iPSC can be differentiated into cells of therapeutic application, and for in vitro models to study toxins and pathogenesis.
Induced pluripotent stem cells provide several therapeutic advantages. Like ESCs, they are pluripotent. They thus have great differentiation potential; theoretically, they could produce any cell within the human body (if reprogramming to pluripotency was "complete"). Moreover, unlike ESCs, they potentially could allow doctors to create a pluripotent stem cell line for each individual patient. Frozen blood samples can be used as a valuable source of induced pluripotent stem cells. Patient specific stem cells allow for the screening for side effects before drug treatment, as well as the reduced risk of transplantation rejection. Despite their current limited use therapeutically, iPSCs hold great potential for future use in medical treatment and research.
Cell cycle control
The key factors controlling the cell cycle also regulate pluripotency. Thus, manipulation of relevant genes can maintain pluripotency and reprogram somatic cells to an induced pluripotent state. However, reprogramming of somatic cells is often low in efficiency and considered stochastic.
With the idea that a more rapid cell cycle is a key component of pluripotency, reprogramming efficiency can be improved. Methods for improving pluripotency through manipulation of cell cycle regulators include: overexpression of Cyclin D/Cdk4, phosphorylation of Sox2 at S39 and S253, overexpression of Cyclin A and Cyclin E, knockdown of Rb, and knockdown of members of the Cip/Kip family or the Ink family. Furthermore, reprogramming efficiency is correlated with the number of cell divisions that occur during the stochastic phase, which is suggested by the growing inefficiency of reprogramming of older or slowly dividing cells.
Lineage
Lineage tracing is an important procedure for analyzing developing embryos, since cell lineages show the relationship between cells at each division. This helps in analyzing stem cell lineages along the way, which in turn helps in recognizing stem cell effectiveness, lifespan, and other factors. With the technique of cell lineage tracing, mutant genes can be analyzed in stem cell clones, which can help in working out genetic pathways. These pathways can regulate how the stem cell performs.
To ensure self-renewal, stem cells undergo two types of cell division (see Stem cell division and differentiation diagram). Symmetric division gives rise to two identical daughter cells both endowed with stem cell properties. Asymmetric division, on the other hand, produces only one stem cell and a progenitor cell with limited self-renewal potential. Progenitors can go through several rounds of cell division before terminally differentiating into a mature cell. It is possible that the molecular distinction between symmetric and asymmetric divisions lies in differential segregation of cell membrane proteins (such as receptors) between the daughter cells.
An alternative theory is that stem cells remain undifferentiated due to environmental cues in their particular niche. Stem cells differentiate when they leave that niche or no longer receive those signals. Studies in Drosophila germarium have identified the signals decapentaplegic and adherens junctions that prevent germarium stem cells from differentiating.
In the United States, Executive Order 13505 established that federal money can be used for research in which approved human embryonic stem-cell (hESC) lines are used, but it cannot be used to derive new lines. The National Institutes of Health (NIH) Guidelines on Human Stem Cell Research, effective July 7, 2009, implemented the Executive Order 13505 by establishing criteria which hESC lines must meet to be approved for funding. The NIH Human Embryonic Stem Cell Registry can be accessed online and has updated information on cell lines eligible for NIH funding. There are 486 approved lines as of January 2022.
Therapies
Stem cell therapy is the use of stem cells to treat or prevent a disease or condition. Bone marrow transplantation is a form of stem cell therapy that has been used for many years because it has proven effective in clinical trials. Stem cell implantation may help strengthen the left ventricle of the heart, as well as preserve heart tissue in patients who have suffered heart attacks in the past.
For over 90 years, hematopoietic stem cell transplantation (HSCT) has been used to treat people with conditions such as leukaemia and lymphoma; it remains the only widely practiced and established form of stem-cell therapy. This usually takes the form of a bone-marrow transplantation, but the cells can also be derived from umbilical cord blood. Research is underway to develop various sources for stem cells as well as to apply stem-cell treatments for neurodegenerative diseases and conditions such as diabetes and heart disease.
Advantages
Stem cell treatments may lower symptoms of the disease or condition being treated, which may allow patients to reduce their drug intake. Stem cell treatment may also add to society's understanding of stem cells and inform future treatments. The physician's creed is to do no harm, and stem cells may make that easier to honor: surgical procedures are by their nature invasive, since tissue must be damaged to reach a successful outcome, and they carry risks of infection, of failure requiring further surgery, and of complications from anesthesia. Using stem cells may avoid these dangers. Furthermore, stem cells can be harvested from the patient's body and redeployed where they are needed; because they come from the patient's own body, this is referred to as an autologous treatment. Autologous treatments are considered the safest because the probability of rejection of donor material is essentially zero.
Disadvantages
Stem cell treatments may require immunosuppression because of a requirement for radiation before the transplant to remove the person's previous cells, or because the patient's immune system may target the stem cells. One approach to avoid the second possibility is to use stem cells from the same patient who is being treated.
Pluripotency in certain stem cells can also make it difficult to obtain a specific cell type, because not all cells in a population differentiate uniformly. Undifferentiated cells can create tissues other than the desired type.
Some stem cells form tumors after transplantation; pluripotency is linked to tumor formation, especially in embryonic stem cells, fetal proper stem cells, and induced pluripotent stem cells. Fetal proper stem cells form tumors despite being only multipotent.
Ethical concerns are also raised about the practice of using or researching embryonic stem cells. Harvesting cells from the blastocyst results in the death of the blastocyst. The concern is whether or not the blastocyst should be considered as a human life. The debate on this issue is mainly a philosophical one, not a scientific one.
Stem cell tourism
Stem cell tourism is the part of the medical tourism industry in which patients travel to obtain stem cell procedures.
The United States has had an explosion of "stem cell clinics". Stem cell procedures are highly profitable for clinics. The advertising sounds authoritative but the efficacy and safety of the procedures is unproven. Patients sometimes experience complications, such as spinal tumors and death. The high expense can also lead to financial problems. According to researchers, there is a need to educate the public, patients, and doctors about this issue.
According to the International Society for Stem Cell Research, the largest academic organization that advocates for stem cell research, stem cell therapies are under development and cannot yet be said to be proven. Doctors should inform patients that clinical trials continue to investigate whether these therapies are safe and effective but that unethical clinics present them as proven.
Research
Some of the fundamental patents covering human embryonic stem cells are owned by the Wisconsin Alumni Research Foundation (WARF) – they are patents 5,843,780, 6,200,806, and 7,029,913 invented by James A. Thomson. WARF does not enforce these patents against academic scientists, but does enforce them against companies.
In 2006, a request for the US Patent and Trademark Office (USPTO) to re-examine the three patents was filed by the Public Patent Foundation on behalf of its client, the non-profit patent-watchdog group Consumer Watchdog (formerly the Foundation for Taxpayer and Consumer Rights). In the re-examination process, which involves several rounds of discussion between the USPTO and the parties, the USPTO initially agreed with Consumer Watchdog and rejected all the claims in all three patents, however in response, WARF amended the claims of all three patents to make them more narrow, and in 2008 the USPTO found the amended claims in all three patents to be patentable. The decision on one of the patents (7,029,913) was appealable, while the decisions on the other two were not. Consumer Watchdog appealed the granting of the '913 patent to the USPTO's Board of Patent Appeals and Interferences (BPAI) which granted the appeal, and in 2010 the BPAI decided that the amended claims of the '913 patent were not patentable. However, WARF was able to re-open prosecution of the case and did so, amending the claims of the '913 patent again to make them more narrow, and in January 2013 the amended claims were allowed.
In July 2013, Consumer Watchdog announced that it would appeal the decision to allow the claims of the '913 patent to the US Court of Appeals for the Federal Circuit (CAFC), the federal appeals court that hears patent cases. At a hearing in December 2013, the CAFC raised the question of whether Consumer Watchdog had legal standing to appeal; the case could not proceed until that issue was resolved.
Conditions
Diseases and conditions where stem cell treatment is being investigated include:
Diabetes
Androgenic Alopecia and hair loss
Rheumatoid arthritis
Parkinson's disease
Alzheimer's disease
Respiratory disease
Osteoarthritis
Stroke and traumatic brain injury repair
Learning disability due to congenital disorder
Spinal cord injury repair
Heart infarction
Anti-cancer treatments
Baldness reversal
Replace missing teeth
Repair hearing
Restore vision and repair damage to the cornea
Amyotrophic lateral sclerosis
Crohn's disease
Wound healing
Male infertility due to absence of spermatogonial stem cells. In recent studies, scientists have found a way to address this problem by reprogramming a cell and turning it into a spermatozoon. Other studies have demonstrated restoration of spermatogenesis by introducing human iPSCs into mouse testes. This could mean the end of azoospermia.
Female infertility: oocytes made from embryonic stem cells. Scientists have found ovarian stem cells, a rare type of cell (0.014%) found in the ovary. They could be used as a treatment not only for infertility, but also for premature ovarian insufficiency (POI). New research published on ScienceDirect suggests that ovarian follicles could be triggered to grow in the ovarian environment by using stem cells present in bone marrow. One such study infused human bone marrow stem cells into immune-deficient mice to improve fertilization. Another study, conducted in mice with ovarian function damaged by chemotherapy, found that in vivo therapy with bone marrow stem cells can heal the damaged ovaries. Both of these studies are proofs of concept and need to be tested further, but they have the potential to improve fertility for individuals with POI caused by chemotherapy treatment.
Critical Limb Ischemia
Production
Research is underway to develop various sources for stem cells.
Organoids
Research is attempting to generate organoids using stem cells, which would allow for further understanding of human development, organogenesis, and modeling of human diseases. Engineered ‘synthetic organizer’ (SO) cells can instruct stem cells to grow into specific tissues and organs. The approach used native and synthetic cell adhesion molecules (CAMs), proteins that help make cells sticky. The organizer cells self-assembled around mouse ESCs. These organizer cells were engineered to produce morphogens (signaling molecules) that direct cellular development based on their concentration. Delivered morphogens disperse, leaving higher concentrations closer to the source and lower concentrations further away. These gradients signal cells' ultimate roles, such as nerve, skin, or connective tissue. The engineered organizer cells were also fitted with a chemical switch that enabled the researchers to turn the delivery of cellular instructions on and off, as well as a ‘suicide switch’ for eliminating the cells when needed. SOs carry spatial and biochemical information, allowing considerable discretion in organoid formation.
Risks
Hepatotoxicity and drug-induced liver injury account for a substantial number of failures of new drugs in development and of market withdrawals, highlighting the need for screening assays, such as stem cell-derived hepatocyte-like cells, that are capable of detecting toxicity early in the drug development process.
Dormancy
In August 2021, researchers in the Princess Margaret Cancer Centre at the University Health Network published their discovery of a dormancy mechanism in key stem cells which could help develop cancer treatments in the future.
Schizophrenia
Schizophrenia is a mental disorder characterized variously by hallucinations (typically, hearing voices), delusions, disorganized thinking and behavior, and flat or inappropriate affect. Symptoms develop gradually, typically begin during young adulthood, and in many cases are never fully resolved. There is no objective diagnostic test; diagnosis is based on observed behavior, a psychiatric history that includes the person's reported experiences, and reports of others familiar with the person. For a diagnosis of schizophrenia, the described symptoms need to have been present for at least six months (according to the DSM-5) or one month (according to the ICD-11). Many people with schizophrenia have other mental disorders, especially mood, anxiety, and substance use disorders, as well as obsessive–compulsive disorder (OCD).
About 0.3% to 0.7% of people are diagnosed with schizophrenia during their lifetime. In 2017, there were an estimated 1.1 million new cases and in 2022 a total of 24 million cases globally. Males are more often affected and on average have an earlier onset than females. The causes of schizophrenia may include genetic and environmental factors. Genetic factors include a variety of common and rare genetic variants. Possible environmental factors include being raised in a city, childhood adversity, cannabis use during adolescence, infections, the age of a person's mother or father, and poor nutrition during pregnancy.
About half of those diagnosed with schizophrenia will have a significant improvement over the long term with no further relapses, and a small proportion of these will recover completely. The other half will have a lifelong impairment. In severe cases, people may be admitted to hospitals. Social problems such as long-term unemployment, poverty, homelessness, exploitation, and victimization are commonly correlated with schizophrenia. Compared to the general population, people with schizophrenia have a higher suicide rate (about 5% overall) and more physical health problems, leading to an average decrease in life expectancy by 20 to 28 years. In 2015, an estimated 17,000 deaths were linked to schizophrenia.
The mainstay of treatment is antipsychotic medication, including olanzapine and risperidone, along with counseling, job training, and social rehabilitation. Up to a third of people do not respond to initial antipsychotics, in which case clozapine is offered. In a network comparative meta-analysis of 15 antipsychotic drugs, clozapine was significantly more effective than all other drugs, although clozapine's heavily multimodal action may cause more significant side effects. In situations where doctors judge that there is a risk of harm to self or others, they may impose short involuntary hospitalization. Long-term hospitalization is used on a small number of people with severe schizophrenia. In some countries where supportive services are limited or unavailable, long-term hospital stays are more common.
Signs and symptoms
Schizophrenia is a mental disorder characterized by significant alterations in perception, thoughts, mood, and behavior. Symptoms are described in terms of positive, negative, and cognitive symptoms. The positive symptoms of schizophrenia are the same for any psychosis and are sometimes referred to as psychotic symptoms. These may be present in any of the different psychoses and are often transient, making early diagnosis of schizophrenia problematic. Psychosis noted for the first time in a person who is later diagnosed with schizophrenia is referred to as a first-episode psychosis (FEP).
Positive symptoms
Positive symptoms are those symptoms that are not normally experienced, but are present in people during a psychotic episode in schizophrenia, including delusions, hallucinations, and disorganized thoughts, speech and behavior or inappropriate affect, typically regarded as manifestations of psychosis. Hallucinations occur at some point in the lifetimes of 80% of those with schizophrenia and most commonly involve the sense of hearing (most often hearing voices), but can sometimes involve any of the other senses such as taste, sight, smell, and touch. The frequency of hallucinations involving multiple senses is double the rate of those involving only one sense. They are also typically related to the content of the delusional theme. Delusions are bizarre or persecutory in nature. Distortions of self-experience such as feeling that others can hear one's thoughts or that thoughts are being inserted into one's mind, sometimes termed passivity phenomena, are also common. Positive symptoms generally respond well to medication and become reduced over the course of the illness, perhaps linked to the age-related decline in dopamine activity.
Negative symptoms
Negative symptoms are deficits of normal emotional responses, or of other thought processes. The five recognized domains of negative symptoms are: blunted affect – showing flat expressions (monotone) or little emotion; alogia – a poverty of speech; anhedonia – an inability to feel pleasure; asociality – the lack of desire to form relationships, and avolition – a lack of motivation and apathy. Avolition and anhedonia are seen as motivational deficits resulting from impaired reward processing. Reward is the main driver of motivation and this is mostly mediated by dopamine. It has been suggested that negative symptoms are multidimensional and they have been categorised into two subdomains of apathy or lack of motivation, and diminished expression. Apathy includes avolition, anhedonia, and social withdrawal; diminished expression includes blunt affect and alogia. Sometimes diminished expression is treated as both verbal and non-verbal.
Apathy accounts for around 50% of the most frequently found negative symptoms and affects functional outcome and subsequent quality of life. Apathy is related to disrupted cognitive processing affecting memory and planning, including goal-directed behaviour. The two subdomains have suggested a need for separate treatment approaches. A lack of distress is another noted negative symptom. A distinction is often made between those negative symptoms that are inherent to schizophrenia, termed primary, and those that result from positive symptoms, from the side effects of antipsychotics, from substance use disorder, and from social deprivation, termed secondary negative symptoms. Negative symptoms are less responsive to medication and the most difficult to treat; however, if properly assessed, secondary negative symptoms are amenable to treatment. There is some evidence that the negative symptoms of schizophrenia are amenable to psychostimulant medication, although such drugs carry varying degrees of risk of provoking positive psychotic symptoms.
Scales for specifically assessing the presence of negative symptoms, and for measuring their severity and changes, have been introduced since earlier scales such as the PANSS, which deal with all types of symptoms. These scales are the Clinical Assessment Interview for Negative Symptoms (CAINS) and the Brief Negative Symptom Scale (BNSS), also known as second-generation scales. In 2020, ten years after its introduction, a cross-cultural study of the use of the BNSS found valid and reliable psychometric evidence for its five-domain structure. The BNSS can assess both the presence and severity of negative symptoms of the five recognized domains, as well as an additional item of reduced normal distress. It has been used to measure changes in negative symptoms in trials of psychosocial and pharmacological interventions.
Cognitive symptoms
An estimated 70% of those with schizophrenia have cognitive deficits, and these are most pronounced in early-onset and late-onset illness. The deficits are often evident long before the onset of illness in the prodromal stage, and may be present in childhood or early adolescence. They are a core feature but are not considered core symptoms, as positive and negative symptoms are. However, their presence and degree of dysfunction is taken as a better indicator of functionality than the presentation of core symptoms. Cognitive deficits worsen at first-episode psychosis but then return to baseline, and remain fairly stable over the course of the illness.
The deficits in cognition are seen to drive the negative psychosocial outcome in schizophrenia, and are claimed to equate to a possible reduction in IQ from the norm of 100 to 70–85. Cognitive deficits may be of neurocognition (nonsocial) or of social cognition. Neurocognition is the ability to receive and remember information, and includes verbal fluency, memory, reasoning, problem solving, speed of processing, and auditory and visual perception. Verbal memory and attention are seen to be the most affected. Verbal memory impairment is associated with a decreased level of semantic processing (relating meaning to words). Another memory impairment is that of episodic memory. An impairment in visual perception that is consistently found in schizophrenia is that of visual backward masking. Visual processing impairments include an inability to perceive complex visual illusions. Social cognition is concerned with the mental operations needed to interpret, and understand the self and others in the social world. This is also an associated impairment, and facial emotion perception is often found to be difficult. Facial perception is critical for ordinary social interaction. Cognitive impairments do not usually respond to antipsychotics, and there are a number of interventions that are used to try to improve them; cognitive remediation therapy is of particular help.
Neurological soft signs of clumsiness and loss of fine motor movement are often found in schizophrenia, which may resolve with effective treatment of FEP.
Onset
Onset typically occurs between the late teens and early 30s, with the peak incidence occurring in males in the early to mid-twenties, and in females in the late twenties. Onset before the age of 17 is known as early-onset, and before the age of 13, as can sometimes occur, is known as childhood schizophrenia or very early-onset. Onset can occur between the ages of 40 and 60, known as late-onset schizophrenia. Onset over the age of 60, which may be difficult to differentiate as schizophrenia, is known as very-late-onset schizophrenia-like psychosis. In late-onset schizophrenia a higher proportion of those affected are female; they have less severe symptoms and need lower doses of antipsychotics. The tendency for earlier onset in males is later balanced by a post-menopausal increase in incidence among females. Estrogen produced pre-menopause has a dampening effect on dopamine receptors, but its protection can be overridden by a genetic overload. There has been a dramatic increase in the number of older adults with schizophrenia.
Onset may happen suddenly or may occur after the slow and gradual development of a number of signs and symptoms, a period known as the prodromal stage. Up to 75% of those with schizophrenia go through a prodromal stage. The negative and cognitive symptoms in the prodromal stage can precede FEP (first-episode psychosis) by many months and up to five years. The period between FEP and treatment is known as the duration of untreated psychosis (DUP), which is seen to be a factor in functional outcome. The prodromal stage is the high-risk stage for the development of psychosis. Since the progression to first-episode psychosis is not inevitable, the alternative term at-risk mental state is often preferred. Cognitive dysfunction at an early age impacts a young person's usual cognitive development. Recognition and early intervention at the prodromal stage would minimize the associated disruption to educational and social development and has been the focus of many studies.
Risk factors
Schizophrenia is described as a neurodevelopmental disorder with no precise boundary, or single cause, and is thought to develop from gene–environment interactions involving vulnerability factors. The interactions of these risk factors are complex, as numerous and diverse insults from conception to adulthood can be involved. A genetic predisposition on its own, without interacting environmental factors, will not give rise to the development of schizophrenia. The genetic component means that prenatal brain development is disturbed, and environmental influence affects the postnatal development of the brain. Evidence suggests that genetically susceptible children are more likely to be vulnerable to the effects of environmental risk factors.
Genetic
Estimates of the heritability of schizophrenia are between 70% and 80%, which implies that 70% to 80% of the individual differences in risk of schizophrenia are associated with genetics. These estimates vary because of the difficulty in separating genetic and environmental influences, and their accuracy has been queried. The greatest risk factor for developing schizophrenia is having a first-degree relative with the disease (risk is 6.5%); more than 40% of identical twins of those with schizophrenia are also affected. If one parent is affected the risk is about 13% and if both are affected the risk is nearly 50%. However, the DSM-5 indicates that most people with schizophrenia have no family history of psychosis. Results of candidate gene studies of schizophrenia have generally failed to find consistent associations, and the genetic loci identified by genome-wide association studies explain only a small fraction of the variation in the disease.
Many genes are known to be involved in schizophrenia, each with small effects and unknown transmission and expression. The summation of these effect sizes into a polygenic risk score can explain at least 7% of the variability in liability for schizophrenia. Around 5% of cases of schizophrenia are understood to be at least partially attributable to rare copy number variations (CNVs); these structural variations are associated with known genomic disorders involving deletions at 22q11.2 (DiGeorge syndrome) and 17q12 (17q12 microdeletion syndrome), duplications at 16p11.2 (most frequently found) and deletions at 15q11.2 (Burnside–Butler syndrome). Some of these CNVs increase the risk of developing schizophrenia by as much as 20-fold, and are frequently comorbid with autism and intellectual disabilities.
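As a rough illustration of how such a score combines many small effects (a minimal sketch of the standard additive model, with notation introduced here for illustration rather than taken from the studies cited), an individual's polygenic risk score sums their risk-allele counts weighted by the effect sizes estimated in genome-wide association studies:

$\mathrm{PRS}_j = \sum_{i=1}^{M} \hat{\beta}_i \, x_{ij}$

where $x_{ij} \in \{0, 1, 2\}$ is the number of risk alleles carried by individual $j$ at variant $i$, and $\hat{\beta}_i$ is the estimated per-allele effect (log odds ratio) for that variant. The score explains only a modest share of liability because each $\hat{\beta}_i$ is small and many contributing variants remain unidentified.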
The genes CRHR1 and CRHBP are associated with the severity of suicidal behavior. These genes code for stress response proteins needed in the control of the HPA axis, and their interaction can affect this axis. Response to stress can cause lasting changes in the function of the HPA axis possibly disrupting the negative feedback mechanism, homeostasis, and the regulation of emotion leading to altered behaviors.
The question of how schizophrenia could be primarily genetically influenced, given that people with schizophrenia have lower fertility rates, is a paradox. It is expected that genetic variants that increase the risk of schizophrenia would be selected against, due to their negative effects on reproductive fitness. A number of potential explanations have been proposed, including that alleles associated with schizophrenia risk confer a fitness advantage in unaffected individuals. While some evidence has not supported this idea, others propose that a large number of alleles, each contributing a small amount, can persist in the population.
A meta-analysis found that oxidative DNA damage was significantly increased in schizophrenia.
Environmental
Environmental factors, each associated with a slight risk of developing schizophrenia in later life, include oxygen deprivation, infection, prenatal maternal stress, and malnutrition in the mother during prenatal development. A risk is also associated with maternal obesity, through increased oxidative stress and dysregulation of the dopamine and serotonin pathways. Both maternal stress and infection have been demonstrated to alter fetal neurodevelopment through an increase of pro-inflammatory cytokines. There is a slightly increased risk associated with being born in the winter or spring, possibly due to vitamin D deficiency or a prenatal viral infection. Other infections during pregnancy or around the time of birth that have been linked to an increased risk include infections by Toxoplasma gondii and Chlamydia. The increased risk is about five to eight percent. Viral infections of the brain during childhood are also linked to a risk of schizophrenia during adulthood. Cat exposure is also associated with an increased risk of broadly defined schizophrenia-related disorders, with an odds ratio of 2.4.
Adverse childhood experiences (ACEs), severe forms of which are classed as childhood trauma, range from being bullied or abused, to the death of a parent. Many adverse childhood experiences can cause toxic stress and increase the risk of psychosis. Chronic trauma, including ACEs, can promote lasting inflammatory dysregulation throughout the nervous system. It is suggested that early stress may contribute to the development of schizophrenia through these alterations in the immune system. Schizophrenia was the last diagnosis to benefit from the link made between ACEs and adult mental health outcomes.
Living in an urban environment during childhood or as an adult has consistently been found to increase the risk of schizophrenia by a factor of two, even after taking into account drug use, ethnic group, and size of social group. A possible link between the urban environment and pollution has been suggested to be the cause of the elevated risk of schizophrenia. Other risk factors include social isolation, immigration related to social adversity and racial discrimination, family dysfunction, unemployment, and poor housing conditions. Having a father older than 40 years, or parents younger than 20 years are also associated with schizophrenia.
Substance use
About half of those with schizophrenia use recreational drugs including alcohol, tobacco, and cannabis excessively. Use of stimulants such as amphetamine and cocaine can lead to a temporary stimulant psychosis, which presents very similarly to schizophrenia. Rarely, alcohol use can also result in a similar alcohol-related psychosis. Drugs may also be used as coping mechanisms by people who have schizophrenia, to deal with depression, anxiety, boredom, and loneliness. The use of cannabis and tobacco are not associated with the development of cognitive deficits, and sometimes a reverse relationship is found where their use improves these symptoms. However, substance use disorders are associated with an increased risk of suicide, and a poor response to treatment.
Cannabis use may be a contributory factor in the development of schizophrenia, potentially increasing the risk of the disease in those who are already at risk. The increased risk may require the presence of certain genes within an individual. Cannabis use is associated with a doubling of the rate.
Causes
The causes of schizophrenia are still unknown. Several models have been put forward to explain the link between altered brain function and schizophrenia. The prevailing model of schizophrenia is that of a neurodevelopmental disorder, and the underlying changes that occur before symptoms become evident are seen as arising from the interaction between genes and the environment. Extensive studies support this model. Maternal infections, malnutrition and complications during pregnancy and childbirth are known risk factors for the development of schizophrenia, which usually emerges between the ages of 18 and 25, a period that overlaps with certain stages of neurodevelopment. Gene-environment interactions lead to deficits in the neural circuitry that affect sensory and cognitive functions.
The common dopamine and glutamate models proposed are not mutually exclusive; each is seen to have a role in the neurobiology of schizophrenia. The most common model put forward was the dopamine hypothesis of schizophrenia, which attributes psychosis to the mind's faulty interpretation of the misfiring of dopaminergic neurons. This has been directly related to the symptoms of delusions and hallucinations. Abnormal dopamine signaling has been implicated in schizophrenia based on the usefulness of medications that affect the dopamine receptor and the observation that dopamine levels are increased during acute psychosis. A decrease in D1 receptors in the dorsolateral prefrontal cortex may also be responsible for deficits in working memory.
The glutamate hypothesis of schizophrenia links alterations in glutamatergic neurotransmission to the neural oscillations that affect connections between the thalamus and the cortex. Studies have shown reduced expression of a glutamate receptor, the NMDA receptor, and that glutamate-blocking drugs such as phencyclidine and ketamine can mimic the symptoms and cognitive problems associated with schizophrenia. Post-mortem studies consistently find that a subset of interneurons fail to express GAD67 (GAD1), in addition to abnormalities in brain morphometry. The subsets of interneurons that are abnormal in schizophrenia are responsible for the synchronizing of neural ensembles needed during working memory tasks. These produce the neural oscillations known as gamma waves, which have a frequency of between 30 and 80 hertz. Both working memory tasks and gamma waves are impaired in schizophrenia, which may reflect abnormal interneuron functionality. An important process that may be disrupted in neurodevelopment is astrogenesis – the formation of astrocytes. Astrocytes are crucial in contributing to the formation and maintenance of neural circuits, and it is believed that disruption of this role can result in a number of neurodevelopmental disorders including schizophrenia. Evidence suggests that reduced numbers of astrocytes in deeper cortical layers are associated with a diminished expression of EAAT2, a glutamate transporter in astrocytes, supporting the glutamate hypothesis.
Deficits in executive functions, such as planning, inhibition, and working memory, are pervasive in schizophrenia. Although these functions are separable, their dysfunction in schizophrenia may reflect an underlying deficit in the ability to represent goal related information in working memory, and to use this to direct cognition and behavior. These impairments have been linked to a number of neuroimaging and neuropathological abnormalities. For example, functional neuroimaging studies report evidence of reduced neural processing efficiency, whereby the dorsolateral prefrontal cortex is activated to a greater degree to achieve a certain level of performance relative to controls on working memory tasks. These abnormalities may be linked to the consistent post-mortem finding of reduced neuropil, evidenced by increased pyramidal cell density and reduced dendritic spine density. These cellular and functional abnormalities may also be reflected in structural neuroimaging studies that find reduced grey matter volume in association with deficits in working memory tasks.
Positive symptoms have been linked to cortical thinning in the superior temporal gyrus. The severity of negative symptoms has been linked to reduced thickness in the left medial orbitofrontal cortex. Anhedonia, traditionally defined as a reduced capacity to experience pleasure, is frequently reported in schizophrenia. However, a large body of evidence suggests that hedonic responses are intact in schizophrenia, and that what is reported to be anhedonia is a reflection of dysfunction in other processes related to reward. Overall, a failure of reward prediction is thought to lead to impairment in the generation of cognition and behavior required to obtain rewards, despite normal hedonic responses.
Another theory links abnormal brain lateralization to the development of left-handedness, which is significantly more common in those with schizophrenia. This abnormal development of hemispheric asymmetry is noted in schizophrenia. Studies have concluded that the link is a true and verifiable effect that may reflect a genetic link between lateralization and schizophrenia.
Bayesian models of brain functioning have been used to link abnormalities in cellular functioning to symptoms. Both hallucinations and delusions have been suggested to reflect improper encoding of prior expectations, thereby causing expectations to excessively influence sensory perception and the formation of beliefs. In accepted models of circuits that mediate predictive coding, reduced NMDA receptor activation could, in theory, result in the positive symptoms of delusions and hallucinations.
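In schematic terms (an illustrative sketch of the general Bayesian framework, not a model specified by the studies discussed here), perception is treated as inferring a hidden cause from noisy sensory input via Bayes' rule,

$P(\text{cause} \mid \text{sensation}) \propto P(\text{sensation} \mid \text{cause}) \, P(\text{cause}),$

and, for Gaussian beliefs, the updated estimate shifts the prior mean by a prediction error weighted by relative precision, $\hat{\mu} = \mu_{\text{prior}} + \frac{\pi_{\text{sensory}}}{\pi_{\text{sensory}} + \pi_{\text{prior}}}\,(s - \mu_{\text{prior}})$, where $s$ is the sensory observation. On this account, abnormal encoding of the precision terms lets prior expectations dominate the sensory evidence, which is one way of formalizing the proposal described above.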
Diagnosis
Criteria
Schizophrenia is diagnosed based on criteria in either the Diagnostic and Statistical Manual of Mental Disorders (DSM) published by the American Psychiatric Association or the International Statistical Classification of Diseases and Related Health Problems (ICD) published by the World Health Organization (WHO). These criteria use the self-reported experiences of the person and reported abnormalities in behavior, followed by a psychiatric assessment. The mental status examination is an important part of the assessment. An established tool for assessing the severity of positive and negative symptoms is the Positive and Negative Syndrome Scale (PANSS). This has been seen to have shortcomings relating to negative symptoms, and other scales – the Clinical Assessment Interview for Negative Symptoms (CAINS), and the Brief Negative Symptoms Scale (BNSS) have been introduced. The DSM-5, published in 2013, gives a Scale to Assess the Severity of Symptom Dimensions outlining eight dimensions of symptoms.
DSM-5 states that to be diagnosed with schizophrenia, two diagnostic criteria have to be met over the period of one month, with a significant impact on social or occupational functioning for at least six months. One of the symptoms needs to be either delusions, hallucinations, or disorganized speech. A second symptom could be one of the negative symptoms, or severely disorganized or catatonic behaviour. A different diagnosis of schizophreniform disorder can be made before the six months needed for the diagnosis of schizophrenia.
In Australia, the guideline for diagnosis is for six months or more with symptoms severe enough to affect ordinary functioning. In the UK diagnosis is based on having the symptoms for most of the time for one month, with symptoms that significantly affect the ability to work, study, or carry on ordinary daily living, and with other similar conditions ruled out.
The ICD criteria are typically used in European countries; the DSM criteria are used predominantly in the United States and Canada, and are prevailing in research studies. In practice, agreement between the two systems is high. The current proposal for the ICD-11 criteria for schizophrenia recommends adding self-disorder as a symptom.
A major unresolved difference between the two diagnostic systems is that of the requirement in DSM of an impaired functional outcome. WHO for ICD argues that not all people with schizophrenia have functional deficits and so these are not specific for the diagnosis.
Neuroimaging techniques
Functional magnetic resonance imaging (fMRI) has become a tool in understanding brain activity and connectivity differences in individuals with schizophrenia. Through resting-state fMRI, researchers have observed altered connectivity patterns within several key brain networks, such as the default mode network (DMN), salience network (SN), and central executive network (CEN). Alterations may underlie cognitive and emotional symptoms in schizophrenia, such as disorganized thinking, impaired attention, and emotional dysregulation.
Comorbidities
Many people with schizophrenia may have one or more other mental disorders, such as anxiety disorders, obsessive–compulsive disorder, or substance use disorder. These are separate disorders that require treatment. When comorbid with schizophrenia, substance use disorder and antisocial personality disorder both increase the risk for violence. Comorbid substance use disorder also increases the risk of suicide.
Sleep disorders often co-occur with schizophrenia, and may be an early sign of relapse. Sleep disorders are linked with positive symptoms such as disorganized thinking and can adversely affect cortical plasticity and cognition. The consolidation of memories is disrupted in sleep disorders. They are associated with severity of illness, a poor prognosis, and poor quality of life. Sleep onset and maintenance insomnia is a common symptom, regardless of whether treatment has been received or not. Genetic variations have been found associated with these conditions involving the circadian rhythm, dopamine and histamine metabolism, and signal transduction.
Schizophrenia is also associated with a number of somatic comorbidities including diabetes mellitus type 2, autoimmune diseases, and cardiovascular diseases. The association of these with schizophrenia may be partially due to medications (e.g. dyslipidemia from antipsychotics), environmental factors (e.g. complications from an increased rate of cigarette smoking), or associated with the disorder itself (e.g. diabetes mellitus type 2 and some cardiovascular diseases are thought to be genetically linked). These somatic comorbidities contribute to reduced life expectancy among persons with the disorder.
Differential diagnosis
To make a diagnosis of schizophrenia other possible causes of psychosis need to be excluded. Psychotic symptoms lasting less than a month may be diagnosed as brief psychotic disorder, or as schizophreniform disorder. Psychosis is noted in Other specified schizophrenia spectrum and other psychotic disorders as a DSM-5 category. Schizoaffective disorder is diagnosed if symptoms of mood disorder are substantially present alongside psychotic symptoms. Psychosis that results from a general medical condition or substance is termed secondary psychosis.
Psychotic symptoms may be present in several other conditions, including bipolar disorder, borderline personality disorder, substance intoxication, substance-induced psychosis, and a number of drug withdrawal syndromes. Non-bizarre delusions are also present in delusional disorder, and social withdrawal in social anxiety disorder, avoidant personality disorder and schizotypal personality disorder. Schizotypal personality disorder has symptoms that are similar but less severe than those of schizophrenia. Schizophrenia occurs along with obsessive–compulsive disorder (OCD) considerably more often than could be explained by chance, although it can be difficult to distinguish obsessions that occur in OCD from the delusions of schizophrenia. There can be considerable overlap with the symptoms of post-traumatic stress disorder.
A more general medical and neurological examination may be needed to rule out medical illnesses which may rarely produce psychotic schizophrenia-like symptoms, such as metabolic disturbance, systemic infection, syphilis, HIV-associated neurocognitive disorder, epilepsy, limbic encephalitis, and brain lesions. Stroke, multiple sclerosis, hyperthyroidism, hypothyroidism, and dementias such as Alzheimer's disease, Huntington's disease, frontotemporal dementia, and the Lewy body dementias may also be associated with schizophrenia-like psychotic symptoms. It may be necessary to rule out a delirium, which can be distinguished by visual hallucinations, acute onset and fluctuating level of consciousness, and indicates an underlying medical illness. Investigations are not generally repeated for relapse unless there is a specific medical indication or possible adverse effects from antipsychotic medication. In children hallucinations must be separated from typical childhood fantasies. It is difficult to distinguish childhood schizophrenia from autism.
Prevention
Prevention of schizophrenia is difficult as there are no reliable markers for the later development of the disorder.
Early intervention programs diagnose and treat patients in the prodromal phase of the illness. There is some evidence that these programs reduce symptoms. Patients tend to prefer early treatment programs to ordinary treatment and are less likely to disengage from them. As of 2020, it is unclear whether the benefits of early treatment persist once the treatment is terminated.
Cognitive behavioral therapy may reduce the risk of psychosis in those at high risk after a year and is recommended in this group, by the National Institute for Health and Care Excellence (NICE). Another preventive measure is to avoid drugs that have been associated with development of the disorder, including cannabis, cocaine, and amphetamines.
Antipsychotics are prescribed following a first-episode psychosis, and following remission a preventive maintenance use is continued to avoid relapse. However, it is recognized that some people do recover following a single episode and will not need long-term antipsychotics, but there is no way of identifying this group.
Management
The primary treatment of schizophrenia is the use of antipsychotic medications, often in combination with psychosocial interventions and social supports. Community support services including drop-in centers, visits by members of a community mental health team, supported employment, and support groups are common. The time between the onset of psychotic symptoms to being given treatment – the duration of untreated psychosis (DUP) – is associated with a poorer outcome in both the short term and the long term.
Voluntary or involuntary admission to hospital may be imposed by doctors and courts who deem a person to be having a severe episode. In the UK, large mental hospitals termed asylums began to be closed down in the 1950s with the advent of antipsychotics, and with an awareness of the negative impact of long-term hospital stays on recovery. This process was known as deinstitutionalization, and community and supportive services were developed to support this change. Many other countries followed suit, with the US starting in the 1960s. There still remains a smaller group of people who do not improve enough to be discharged. In some countries that lack the necessary supportive and social services, long-term hospital stays are more usual.
Medication
The first-line treatment for schizophrenia is an antipsychotic. The first-generation antipsychotics, now called typical antipsychotics, like haloperidol, are dopamine antagonists that block D2 receptors, and affect the neurotransmission of dopamine. Those brought out later, the second-generation antipsychotics known as atypical antipsychotics, including olanzapine and risperidone, can also have an effect on another neurotransmitter, serotonin. Antipsychotics can reduce the symptoms of anxiety within hours of their use, but, for other symptoms, they may take several days or weeks to reach their full effect. They have little effect on negative and cognitive symptoms, which may be helped by additional psychotherapies and medications. There is no single antipsychotic suitable for first-line treatment for everyone, as responses and tolerances vary between people. Stopping medication may be considered after a single psychotic episode where there has been a full recovery with no symptoms for twelve months. Repeated relapses worsen the long-term outlook and the risk of relapse following a second episode is high, and long-term treatment is usually recommended.
About half of those with schizophrenia will respond favourably to antipsychotics and have a good return of functioning. However, positive symptoms persist in up to a third of people. If two six-week trials of different antipsychotics also prove ineffective, the person is classed as having treatment-resistant schizophrenia (TRS), and clozapine will be offered. Clozapine is of benefit to around half of this group, although it has the potentially serious side effect of agranulocytosis (lowered white blood cell count) in less than 4% of people.
About 30 to 50 percent of people with schizophrenia do not accept that they have an illness or comply with their recommended treatment. For those who are unwilling or unable to take medication regularly, long-acting injections of antipsychotics may be used, which reduce the risk of relapse to a greater degree than oral medications. When used in combination with psychosocial interventions, they may improve long-term adherence to treatment.
The fixed-dose combination medication xanomeline/trospium chloride (Cobenfy) was approved for medical use in the United States in September 2024. It is the first cholinergic agonist approved by the US Food and Drug Administration (FDA) to treat schizophrenia.
Negative and cognitive symptoms are an unmet clinical need in antipsychotic-based treatment approaches. Psychostimulant drugs have been found effective in the treatment of negative symptoms, but are rarely prescribed because of concerns about exacerbation of positive symptoms. It is possible that low-dose psychedelic therapies could be of benefit in schizophrenia through their prosocial and procognitive effects, although there is a serious risk that high-dose psychedelic therapies could worsen positive symptoms.
Adverse effects
Extrapyramidal symptoms, including akathisia, are associated with all commercially available antipsychotics to varying degrees. There is little evidence that second-generation antipsychotics produce fewer extrapyramidal symptoms than typical antipsychotics. Tardive dyskinesia can occur due to long-term use of antipsychotics, developing after months or years of use. The antipsychotic clozapine is also associated with thromboembolism (including pulmonary embolism), myocarditis, and cardiomyopathy.
Psychosocial interventions
A number of psychosocial interventions that include several types of psychotherapy may be useful in the treatment of schizophrenia, such as family therapy, group therapy, cognitive remediation therapy (CRT), cognitive behavioral therapy (CBT), and metacognitive training. Skills training, help with substance use, and weight management – often needed as a side effect of antipsychotics – are also offered. In the US, interventions for first-episode psychosis have been brought together in an overall approach known as coordinated specialty care (CSC), which also includes support for education. In the UK, care across all phases takes a similar approach that covers many of the recommended treatment guidelines. The aim is to reduce the number of relapses and stays in hospital.
Other support services for education, employment, and housing are usually offered. For people with severe schizophrenia who are discharged from a stay in hospital, these services are often brought together in an integrated approach to offer support in the community away from the hospital setting. In addition to medicine management, housing, and finances, assistance is given for more routine matters such as help with shopping and using public transport. This approach is known as assertive community treatment (ACT) and has been shown to achieve positive results in symptoms, social functioning, and quality of life. Another, more intense approach is known as intensive case management (ICM). ICM is a stage further than ACT and emphasises high-intensity support in smaller caseloads (fewer than twenty). This approach aims to provide long-term care in the community. Studies show that ICM improves many of the relevant outcomes, including social functioning.
Some studies have shown little evidence for the effectiveness of CBT in either reducing symptoms or preventing relapse. However, other studies have found that CBT does improve overall psychotic symptoms (when used with medication), and it has been recommended in Canada, although it has been seen to have no effect on social function, relapse, or quality of life. In the UK it is recommended as an add-on therapy in the treatment of schizophrenia. Arts therapies are seen to improve negative symptoms in some people, and are recommended by NICE in the UK. This approach has been criticised as under-researched, and arts therapies are not recommended in Australian guidelines, for example. Peer support, in which people with personal experience of schizophrenia provide help to each other, is of unclear benefit.
Other
Exercise, including aerobic exercise, has been shown to improve positive and negative symptoms, cognition, working memory, and quality of life. Exercise has also been shown to increase the volume of the hippocampus in those with schizophrenia; a decrease in hippocampal volume is one of the factors linked to the development of the disease. However, there remains the problem of increasing motivation for, and maintaining participation in, physical activity. Supervised sessions are recommended. In the UK, healthy eating advice is offered alongside exercise programs.
An inadequate diet is often found in schizophrenia, and associated vitamin deficiencies, including those of folate and vitamin D, are linked to the risk factors for the development of schizophrenia and for early death, including heart disease. Those with schizophrenia possibly have the worst diet of all the mental disorders. Levels of folate and vitamin D have been noted to be significantly lower in first-episode psychosis. The use of supplemental folate is recommended. A zinc deficiency has also been noted. Vitamin B12 is also often deficient, and this is linked to worse symptoms. Supplementation with B vitamins has been shown to significantly improve symptoms and to reverse some of the cognitive deficits. It has also been suggested that the noted dysfunction in gut microbiota might benefit from the use of probiotics.
Prognosis
Schizophrenia has great human and economic costs. It decreases life expectancy by between 10 and 28 years. This is primarily because of its association with heart disease, diabetes, obesity, poor diet, a sedentary lifestyle, and smoking, with an increased rate of suicide playing a lesser role. Side effects of antipsychotics may also increase the risk.
Almost 40% of those with schizophrenia die from complications of cardiovascular disease, which is increasingly seen to be associated with the disorder. An underlying factor in sudden cardiac death may be Brugada syndrome (BrS); BrS mutations that overlap with those linked with schizophrenia are the calcium channel mutations. BrS may also be drug-induced by certain antipsychotics and antidepressants. Primary polydipsia, or excessive fluid intake, is relatively common in people with chronic schizophrenia. This may lead to hyponatremia, which can be life-threatening. Antipsychotics can lead to a dry mouth, but there are several other factors that may contribute to the condition; it may reduce life expectancy by 13 percent. Barriers to improving the mortality rate in schizophrenia are poverty, overlooking the symptoms of other illnesses, stress, stigma, and medication side effects.
Schizophrenia is a major cause of disability. In 2016, it was classed as the 12th most disabling condition. Approximately 75% of people with schizophrenia have ongoing disability with relapses. Some people do recover completely and others function well in society. Most people with schizophrenia live independently with community support. About 85% are unemployed. In people with a first episode of psychosis in schizophrenia a good long-term outcome occurs in 31%, an intermediate outcome in 42% and a poor outcome in 31%. Males are affected more often than females, and have a worse outcome. Studies showing that outcomes for schizophrenia appear better in the developing than the developed world have been questioned. Social problems, such as long-term unemployment, poverty, homelessness, exploitation, stigmatization and victimization are common consequences, and lead to social exclusion.
There is a higher than average suicide rate associated with schizophrenia estimated at 5% to 6%, most often occurring in the period following onset or first hospital admission. Several times more (20 to 40%) attempt suicide at least once. There are a variety of risk factors, including male sex, depression, a high IQ, heavy smoking, and substance use. Repeated relapse is linked to an increased risk of suicidal behavior. The use of clozapine can reduce the risk of suicide, and of aggression.
A strong association between schizophrenia and tobacco smoking has been shown in worldwide studies. Smoking is especially high in those diagnosed with schizophrenia, with estimates ranging from 80 to 90% being regular smokers, as compared to 20% of the general population. Those who smoke tend to smoke heavily, and additionally smoke cigarettes with high nicotine content. Some propose that this is in an effort to improve symptoms. Among people with schizophrenia use of cannabis is also common.
Schizophrenia leads to an increased risk of dementia.
Violence
Most people with schizophrenia are not aggressive, and are more likely to be victims of violence rather than perpetrators. People with schizophrenia are commonly exploited and victimized by violent crime as part of a broader dynamic of social exclusion. People diagnosed with schizophrenia are also subject to forced drug injections, seclusion, and restraint at high rates.
The risk of violence by people with schizophrenia is small. There are minor subgroups where the risk is high. This risk is usually associated with a comorbid disorder such as a substance use disorder – in particular alcohol – or with antisocial personality disorder. Substance use disorder is strongly linked, and other risk factors are linked to deficits in cognition and social cognition, including facial perception and insight, that are in part included in theory of mind impairments. Poor cognitive functioning, decision-making, and facial perception may contribute to making a wrong judgement of a situation that could result in an inappropriate response such as violence. These associated risk factors are also present in antisocial personality disorder, which when present as a comorbid disorder greatly increases the risk of violence.
Epidemiology
In 2017, the Global Burden of Disease Study estimated there were 1.1 million new cases; in 2022 the World Health Organization (WHO) reported a total of 24 million cases globally. Schizophrenia affects around 0.3–0.7% of people at some point in their life. In areas of conflict this figure can rise to between 4.0 and 6.5%. It occurs 1.4 times more frequently in males than females and typically appears earlier in men.
Worldwide, schizophrenia is the most common psychotic disorder. The frequency of schizophrenia varies across the world, within countries, and at the local and neighborhood level; this variation in prevalence between studies over time, across geographical locations, and by gender is as high as fivefold.
Schizophrenia causes approximately one percent of worldwide disability adjusted life years and resulted in 17,000 deaths in 2015.
In 2000, WHO found the percentage of people affected and the number of new cases that develop each year is roughly similar around the world, with age-standardized prevalence per 100,000 ranging from 343 in Africa to 544 in Japan and Oceania for men, and from 378 in Africa to 527 in Southeastern Europe for women.
History
Conceptual development
Accounts of a schizophrenia-like syndrome are rare in records before the 19th century; the earliest case reports were in 1797 and 1809. The term dementia praecox ("premature dementia") was used by German psychiatrist Heinrich Schüle in 1886 and then in 1891 by Arnold Pick in a case report of hebephrenia. In 1893 Emil Kraepelin used the term in making a distinction, known as the Kraepelinian dichotomy, between the two psychoses: dementia praecox and manic depression (now called bipolar disorder). When it became evident that the disorder was not a degenerative dementia, it was renamed schizophrenia by Eugen Bleuler in 1908.
The word schizophrenia ("splitting of the mind") is Modern Latin, derived from the Greek schizein ("to split") and phrēn ("mind"). Its use was intended to describe the separation of function between personality, thinking, memory, and perception.
In the early 20th century, the psychiatrist Kurt Schneider categorized the psychotic symptoms of schizophrenia into two groups: hallucinations and delusions. The hallucinations listed were specific to the auditory modality, and the delusions included thought disorders. These were seen as important symptoms, termed first-rank symptoms. The most common first-rank symptom was found to belong to thought disorders. In 2013 the first-rank symptoms were excluded from the DSM-5 criteria; while they may not be useful in diagnosing schizophrenia, they can assist in differential diagnosis.
Subtypes of schizophrenia—classified as paranoid, disorganized, catatonic, undifferentiated, and residual—were difficult to distinguish and are no longer recognized as separate conditions by DSM-5 (2013) or ICD-11.
Breadth of diagnosis
Before the 1960s, nonviolent petty criminals and women were sometimes diagnosed with schizophrenia, categorizing the latter as ill for not performing their duties as wives and mothers. In the mid- to late 1960s, black men were categorized as "hostile and aggressive" and diagnosed as schizophrenic at much higher rates, their civil rights and Black Power activism labeled as delusions.
In the early 1970s in the United States, the diagnostic model for schizophrenia was broad and clinically based using DSM II. Schizophrenia was diagnosed far more in the United States than in Europe, where the ICD-9 criteria were followed. The US model was criticised for failing to demarcate clearly those people with a mental illness from those without. In 1980 DSM III was published and showed a shift in focus from the clinically based biopsychosocial model to a reason-based medical model. DSM IV brought an increased focus on an evidence-based medical model.
Historical treatment
In the 1930s a number of shock procedures which induced seizures (convulsions) or comas were used to treat schizophrenia. Insulin shock involved injecting large doses of insulin to induce comas, which in turn produced hypoglycemia and convulsions. The use of electricity to induce seizures was in use as electroconvulsive therapy (ECT) by 1938.
Carried out from the 1930s until the 1970s in the United States and until the 1980s in France, psychosurgery, including such modalities as the lobotomy, is recognized as a human rights abuse. In the mid-1950s, chlorpromazine, the first typical antipsychotic, was introduced, followed in the 1970s by clozapine, the first atypical antipsychotic.
Political abuse
From the 1960s until 1989, psychiatrists in the USSR and Eastern Bloc diagnosed thousands of people with sluggish schizophrenia, without signs of psychosis, based on "the assumption that symptoms would later appear". Now discredited, the diagnosis provided a convenient way to confine political dissidents.
Society and culture
In the United States, the annual cost of schizophrenia – including direct costs (outpatient, inpatient, drugs, and long-term care) and non-healthcare costs (law enforcement, reduced workplace productivity, and unemployment) – was estimated at $62.7 billion for the year 2002. In the UK the cost in 2016 was put at £11.8 billion per year with a third of that figure directly attributable to the cost of hospital, social care and treatment.
Stigma
In 2002, the Japanese term for schizophrenia was changed to reduce stigma and confusion with "multiple personalities". The new name, also interpreted as "integration disorder", was inspired by the biopsychosocial model. A similar change was made in South Korea in 2012, where the condition was renamed attunement disorder.
Stigma may also hinder research and treatment, and has historically worsened some people's prospects of recovery.
Cultural depictions
Media coverage, especially in movies, reinforces the public perception of an association between schizophrenia and violence. A majority of movies have historically depicted characters with schizophrenia as criminal, dangerous, violent, unpredictable, and homicidal, and have presented delusions and hallucinations as their main symptoms while ignoring other common symptoms, furthering stereotypes of schizophrenia, including the idea of a split personality.
The book A Beautiful Mind chronicled the life of John Forbes Nash who had been diagnosed with schizophrenia and won the Nobel Memorial Prize in Economic Sciences. The book was made into a film with the same name; an earlier documentary film was A Brilliant Madness.
In the UK, guidelines for reporting conditions and award campaigns have shown a reduction in negative reporting since 2013.
In 1964 a case study of three males diagnosed with schizophrenia who each had the delusional belief that they were Jesus Christ was published as The Three Christs of Ypsilanti; a film with the title Three Christs was released in 2020.
Research directions
A 2015 Cochrane review found unclear evidence of benefit from brain stimulation techniques to treat the positive symptoms of schizophrenia, in particular auditory verbal hallucinations (AVHs). Most studies focus on transcranial direct-current stimulation (tDCS) and repetitive transcranial magnetic stimulation (rTMS). Techniques based on focused ultrasound for deep brain stimulation could provide insight for the treatment of AVHs.
The study of potential biomarkers that would help in diagnosis and treatment of schizophrenia is an active area of research as of 2020. Possible biomarkers include markers of inflammation, neuroimaging, brain-derived neurotrophic factor (BDNF), and speech analysis. Some markers, such as C-reactive protein, are useful in detecting levels of inflammation implicated in some psychiatric disorders, but they are not disorder-specific. Other inflammatory cytokines are elevated in first-episode psychosis and acute relapse and normalize after treatment with antipsychotics; these may be considered state markers. Deficits in sleep spindles in schizophrenia may serve as a marker of an impaired thalamocortical circuit, and a mechanism for memory impairment. MicroRNAs are highly influential in early neuronal development, and their disruption is implicated in several CNS disorders; circulating microRNAs (cimiRNAs) are found in body fluids such as blood and cerebrospinal fluid, and changes in their levels appear to relate to changes in microRNA levels in specific regions of brain tissue. These studies suggest that cimiRNAs have the potential to be early and accurate biomarkers in a number of disorders including schizophrenia.
Ongoing fMRI research aims to identify biomarkers within these brain networks, potentially aiding in earlier diagnosis and better tracking of treatment responses in schizophrenia.
Systematics
Systematics is the study of the diversification of living forms, both past and present, and the relationships among living things through time. Relationships are visualized as evolutionary trees (synonyms: phylogenetic trees, phylogenies). Phylogenies have two components: branching order (showing group relationships, graphically represented in cladograms) and branch length (showing amount of evolution). Phylogenetic trees of species and higher taxa are used to study the evolution of traits (e.g., anatomical or molecular characteristics) and the distribution of organisms (biogeography). Systematics, in other words, is used to understand the evolutionary history of life on Earth.
The word systematics is derived from the Latin word of Ancient Greek origin systema, which means systematic arrangement of organisms. Carl Linnaeus used 'Systema Naturae' as the title of his book.
Branches and applications
In the study of biological systematics, researchers use the different branches to further understand the relationships between differing organisms. These branches are used to determine the applications and uses for modern day systematics.
Biological systematics classifies species by using three specific branches. Numerical systematics, or biometry, uses biological statistics to identify and classify animals. Biochemical systematics classifies and identifies animals based on the analysis of the material that makes up the living part of a cell—such as the nucleus, organelles, and cytoplasm. Experimental systematics identifies and classifies animals based on the evolutionary units that comprise a species, as well as their importance in evolution itself. Factors such as mutations, genetic divergence, and hybridization all are considered evolutionary units.
With the specific branches, researchers are able to determine the applications and uses for modern-day systematics. These applications include:
Studying the diversity of organisms and the differentiation between extinct and living creatures. Biologists study the well-understood relationships by making many different diagrams and "trees" (cladograms, phylogenetic trees, phylogenies, etc.).
Including the scientific names of organisms, species descriptions and overviews, taxonomic orders, and classifications of evolutionary and organism histories.
Explaining the biodiversity of the planet and its organisms. The systematic study is that of conservation.
Manipulating and controlling the natural world. This includes the practice of 'biological control', the intentional introduction of natural predators and disease.
Definition and relation with taxonomy
John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics".
In 1970 Michener et al. defined "systematic biology" and "taxonomy" (terms that are often confused and used interchangeably) in relationship to one another as follows:
Systematic biology (hereafter called simply systematics) is the field that (a) provides scientific names for organisms, (b) describes them, (c) preserves collections of them, (d) provides classifications for the organisms, keys for their identification, and data on their distributions, (e) investigates their evolutionary histories, and (f) considers their environmental adaptations. This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content. Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above.
The term "taxonomy" was coined by Augustin Pyramus de Candolle while the term "systematic" was coined by Carl Linnaeus the father of taxonomy.
Taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, phylogenetics: At various times in history, all these words have had overlapping, related meanings. However, in modern usage, they can all be considered synonyms of each other.
For example, Webster's 9th New Collegiate Dictionary of 1987 treats "classification", "taxonomy", and "systematics" as synonyms. According to this work, the terms originated in 1790, c. 1828, and in 1888 respectively. Some claim systematics alone deals specifically with relationships through time, and that it can be synonymous with phylogenetics, broadly dealing with the inferred hierarchy of organisms. This means it would be a subset of taxonomy as it is sometimes regarded, but the inverse is claimed by others.
Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. However, taxonomy, and in particular alpha taxonomy, is more specifically the identification, description, and naming (i.e. nomenclature) of organisms,
while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms. All of these biological disciplines can deal with both extinct and extant organisms.
Systematics uses taxonomy as a primary tool in understanding, as nothing about an organism's relationships with other living things can be understood without it first being properly studied and described in sufficient detail to identify and classify it correctly. Scientific classifications are aids in recording and reporting information to other scientists and to laymen. The systematist, a scientist who specializes in systematics, must, therefore, be able to use existing classification systems, or at least know them well enough to skilfully justify not using them.
Phenetics was an attempt to determine the relationships of organisms through a measure of overall similarity, making no distinction between plesiomorphies (shared ancestral traits) and apomorphies (derived traits). From the late 20th century onwards, it was superseded by cladistics, which rejects plesiomorphies in attempting to resolve the phylogeny of Earth's various organisms through time. Systematists today generally make extensive use of molecular biology and of computer programs to study organisms.
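To make the contrast concrete, the following is a minimal sketch of a phenetic comparison: it computes an overall-similarity (distance) matrix from trait measurements without distinguishing ancestral from derived traits. The taxa, traits, and numeric values are hypothetical, invented purely for illustration.

    # Minimal sketch of a phenetic (overall-similarity) comparison.
    # The taxa and trait values below are hypothetical, not real data.
    from itertools import combinations

    traits = {
        "taxon_A": [5.1, 3.5, 1.4],  # three arbitrary morphological measurements
        "taxon_B": [4.9, 3.0, 1.4],
        "taxon_C": [6.3, 3.3, 6.0],
    }

    def overall_distance(u, v):
        """Euclidean distance: treats all traits alike, ancestral or derived."""
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    distances = {
        (x, y): round(overall_distance(traits[x], traits[y]), 2)
        for x, y in combinations(traits, 2)
    }

    # A phenetic method would group the most similar pair first.
    print(distances)
    print("Most similar pair:", min(distances, key=distances.get))

A cladistic analysis, by contrast, would score discrete character states and retain only shared derived states as evidence for grouping taxa.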
Taxonomic characters
Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny) between taxa are inferred. A minimal sketch of how such characters can be encoded for analysis follows the list. Kinds of taxonomic characters include:
Morphological characters
General external morphology
Special structures (e.g. genitalia)
Internal morphology (anatomy)
Embryology
Karyology and other cytological factors
Physiological characters
Metabolic factors
Body secretions
Genic sterility factors
Molecular characters
Immunological distance
Electrophoretic differences
Amino acid sequences of proteins
DNA hybridization
DNA and RNA sequences
Restriction endonuclease analyses
Other molecular differences
Behavioral characters
Courtship and other ethological isolating mechanisms
Other behavior patterns
Ecological characters
Habit and habitats
Food
Seasonal variations
Parasites and hosts
Geographic characters
General biogeographic distribution patterns
Sympatric-allopatric relationship of populations
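As noted before the list, here is one minimal, hypothetical way to encode characters like those above as a character-by-taxon matrix; the taxa, characters, and state codes are invented for illustration and are not drawn from any real dataset.

    # Hypothetical character-by-taxon matrix: rows are taxa, columns are
    # taxonomic characters (morphological, behavioral, ecological, ...),
    # and each cell holds a coded character state.
    characters = ["wing_venation", "courtship_song", "habitat"]

    matrix = {
        "species_1": {"wing_venation": 0, "courtship_song": 1, "habitat": "forest"},
        "species_2": {"wing_venation": 1, "courtship_song": 1, "habitat": "forest"},
        "species_3": {"wing_venation": 1, "courtship_song": 0, "habitat": "grassland"},
    }

    def shared_states(a, b):
        """Count characters for which two taxa share the same coded state."""
        return sum(matrix[a][c] == matrix[b][c] for c in characters)

    print(shared_states("species_1", "species_2"))  # 2
    print(shared_states("species_1", "species_3"))  # 0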
Sleep
Sleep is a state of reduced mental and physical activity in which consciousness is altered and certain sensory activity is inhibited. During sleep, there is a marked decrease in muscle activity and interactions with the surrounding environment. While sleep differs from wakefulness in terms of the ability to react to stimuli, it still involves active brain patterns, making it more reactive than a coma or disorders of consciousness.
Sleep occurs in repeating periods, during which the body alternates between two distinct modes: REM and non-REM sleep. Although REM stands for "rapid eye movement", this mode of sleep has many other aspects, including virtual paralysis of the body. Dreams are a succession of images, ideas, emotions, and sensations that usually occur involuntarily in the mind during certain stages of sleep.
During sleep, most of the body's systems are in an anabolic state, helping to restore the immune, nervous, skeletal, and muscular systems; these are vital processes that maintain mood, memory, and cognitive function, and play a large role in the function of the endocrine and immune systems. The internal circadian clock promotes sleep daily at night, when it is dark. The diverse purposes and mechanisms of sleep are the subject of substantial ongoing research. Sleep is a highly conserved behavior across animal evolution, likely going back hundreds of millions of years, and originating as a means for the brain to cleanse itself of waste products. In a major breakthrough, researchers have found that this cleansing may be a core purpose of sleep.
Humans may suffer from various sleep disorders, including dyssomnias, such as insomnia, hypersomnia, narcolepsy, and sleep apnea; parasomnias, such as sleepwalking and rapid eye movement sleep behavior disorder; bruxism; and circadian rhythm sleep disorders. The use of artificial light has substantially altered humanity's sleep patterns. Common sources of artificial light include outdoor lighting and the screens of electronic devices such as smartphones and televisions, which emit large amounts of blue light, a form of light typically associated with daytime. This disrupts the release of the hormone melatonin needed to regulate the sleep cycle.
Physiology
The most pronounced physiological changes in sleep occur in the brain. The brain uses significantly less energy during sleep than it does when awake, especially during non-REM sleep. In areas with reduced activity, the brain restores its supply of adenosine triphosphate (ATP), the molecule used for short-term storage and transport of energy. In quiet waking, the brain is responsible for 20% of the body's energy use, thus this reduction has a noticeable effect on overall energy consumption.
Sleep increases the sensory threshold. In other words, sleeping persons perceive fewer stimuli, but can generally still respond to loud noises and other salient sensory events.
During slow-wave sleep, humans secrete bursts of growth hormone. All sleep, even during the day, is associated with the secretion of prolactin.
Key physiological methods for monitoring and measuring changes during sleep include electroencephalography (EEG) of brain waves, electrooculography (EOG) of eye movements, and electromyography (EMG) of skeletal muscle activity. Simultaneous collection of these measurements is called polysomnography, and can be performed in a specialized sleep laboratory. Sleep researchers also use simplified electrocardiography (EKG) for cardiac activity and actigraphy for motor movements.
Brain waves in sleep
The electrical activity seen on an EEG represents brain waves. The amplitude of EEG waves at a particular frequency corresponds to various points in the sleep–wake cycle, such as being asleep, being awake, or falling asleep. Alpha, beta, theta, gamma, and delta waves are all seen in the different stages of sleep, and each waveform has a characteristic frequency and amplitude. Alpha waves are seen when a person is in a resting state but still fully conscious; their eyes may be closed and the body resting, relatively still, and starting to slow down. Beta waves replace alpha waves when a person is attentive, for example when completing a task or concentrating on something; they have a higher frequency and lower amplitude than alpha waves and occur when a person is fully alert. Gamma waves are seen when a person is highly focused on a task or using all of their concentration. Theta waves occur while a person is still awake and continue through the transition into stage 1 and stage 2 of sleep. Delta waves are seen in stages 3 and 4, when a person is in their deepest sleep.
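As a compact summary of the waveforms just described, the sketch below maps each band to the state discussed above together with a commonly cited approximate frequency range; the numeric ranges are typical textbook values assumed here, not figures taken from this article.

    # Approximate EEG frequency bands and the states described above.
    # The Hz ranges are commonly cited textbook values (an assumption here).
    eeg_bands = {
        "delta": {"range_hz": (0.5, 4),  "state": "deepest sleep (stages 3-4)"},
        "theta": {"range_hz": (4, 8),    "state": "awake but drowsy; stages 1-2"},
        "alpha": {"range_hz": (8, 12),   "state": "resting, eyes closed, still conscious"},
        "beta":  {"range_hz": (12, 30),  "state": "attentive, fully alert"},
        "gamma": {"range_hz": (30, 100), "state": "highly focused concentration"},
    }

    def classify(frequency_hz):
        """Return the band whose approximate range contains the frequency."""
        for band, info in eeg_bands.items():
            low, high = info["range_hz"]
            if low <= frequency_hz < high:
                return band
        return None

    print(classify(10))  # 'alpha'
    print(classify(2))   # 'delta'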
Non-REM and REM sleep
Sleep is divided into two broad types: non-rapid eye movement (non-REM or NREM) sleep and rapid eye movement (REM) sleep. Non-REM and REM sleep are so different that physiologists identify them as distinct behavioral states. Non-REM sleep occurs first and after a transitional period is called slow-wave sleep or deep sleep. During this phase, body temperature and heart rate fall, and the brain uses less energy. REM sleep, also known as paradoxical sleep, represents a smaller portion of total sleep time. It is the main occasion for dreams (or nightmares), and is associated with desynchronized and fast brain waves, eye movements, loss of muscle tone, and suspension of homeostasis.
The sleep cycle of alternate NREM and REM sleep takes an average of 90 minutes, occurring 4–6 times in a good night's sleep. The American Academy of Sleep Medicine (AASM) divides NREM into three stages: N1, N2, and N3, the last of which is also called delta sleep or slow-wave sleep. The whole period normally proceeds in the order: N1 → N2 → N3 → N2 → REM. REM sleep occurs as a person returns to stage 2 or 1 from a deep sleep. There is a greater amount of deep sleep (stage N3) earlier in the night, while the proportion of REM sleep increases in the two cycles just before natural awakening.
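A minimal sketch of the cycle structure described above: the stage ordering (N1 → N2 → N3 → N2 → REM), the roughly 90-minute cycle, and the 4–6 cycles per night come from the text, while the individual stage durations are arbitrary illustrative values (and the model ignores the shift toward more REM late in the night).

    # Illustrative night of sleep built from the stage order in the text.
    # Per-stage minutes are invented so that one cycle sums to ~90 minutes;
    # real proportions change across the night (more N3 early, more REM late).
    cycle_order = ["N1", "N2", "N3", "N2", "REM"]
    stage_minutes = {"N1": 5, "N2": 20, "N3": 30, "REM": 15}

    def night(cycles=5):
        """Yield (stage, minutes) pairs for the given number of sleep cycles."""
        for _ in range(cycles):
            for stage in cycle_order:
                yield stage, stage_minutes[stage]

    total_minutes = sum(minutes for _, minutes in night(cycles=5))
    print(total_minutes / 60, "hours")  # 5 cycles of ~90 min = 7.5 hours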
Awakening
Awakening can mean the end of sleep, or simply a moment to survey the environment and readjust body position before falling back asleep. Sleepers typically awaken soon after the end of a REM phase or sometimes in the middle of REM. Internal circadian indicators, along with a successful reduction of homeostatic sleep need, typically bring about awakening and the end of the sleep cycle. Awakening involves heightened electrical activation in the brain, beginning with the thalamus and spreading throughout the cortex.
On a typical night of sleep, little time is spent in the waking state. In various sleep studies conducted using electroencephalography, females have been found to be awake for 0–1% of their nightly sleep, and males for 0–2%. In adults, wakefulness increases, especially in later cycles. One study found 3% awake time in the first ninety-minute sleep cycle, 8% in the second, 10% in the third, 12% in the fourth, and 13–14% in the fifth. Most of this awake time occurred shortly after REM sleep.
Today, many humans wake up with an alarm clock; however, people can also reliably wake themselves up at a specific time with no need for an alarm. Many sleep quite differently on workdays versus days off, a pattern which can lead to chronic circadian desynchronization. Many people regularly look at television and other screens before going to bed, a factor which may exacerbate disruption of the circadian cycle. Scientific studies on sleep have shown that sleep stage at awakening is an important factor in amplifying sleep inertia.
Determinants of alertness after waking up include quantity/quality of the sleep, physical activity the day prior, a carbohydrate-rich breakfast, and a low blood glucose response to it.
Timing
Sleep timing is controlled by the circadian clock (Process C), sleep-wake homeostasis (Process S), and to some extent by the individual will.
Circadian clock
Sleep timing depends greatly on hormonal signals from the circadian clock, or Process C, a complex neurochemical system which uses signals from an organism's environment to recreate an internal day–night rhythm. Process C counteracts the homeostatic drive for sleep during the day (in diurnal animals) and augments it at night. The suprachiasmatic nucleus (SCN), a brain area directly above the optic chiasm, is presently considered the most important nexus for this process; however, secondary clock systems have been found throughout the body.
An organism whose circadian clock exhibits a regular rhythm corresponding to outside signals is said to be entrained; an entrained rhythm persists even if the outside signals suddenly disappear. If an entrained human is isolated in a bunker with constant light or darkness, he or she will continue to experience rhythmic increases and decreases of body temperature and melatonin, on a period that slightly exceeds 24 hours. Scientists refer to such conditions as free-running of the circadian rhythm. Under natural conditions, light signals regularly adjust this period downward, so that it corresponds better with the exact 24 hours of an Earth day.
The circadian clock exerts constant influence on the body, affecting sinusoidal oscillation of body temperature between roughly 36.2 °C and 37.2 °C. The suprachiasmatic nucleus itself shows conspicuous oscillation activity, which intensifies during subjective day (i.e., the part of the rhythm corresponding with daytime, whether accurately or not) and drops to almost nothing during subjective night. The circadian pacemaker in the suprachiasmatic nucleus has a direct neural connection to the pineal gland, which releases the hormone melatonin at night. Cortisol levels typically rise throughout the night, peak in the awakening hours, and diminish during the day. Circadian prolactin secretion begins in the late afternoon, especially in women, and is subsequently augmented by sleep-induced secretion, to peak in the middle of the night. Circadian rhythm exerts some influence on the nighttime secretion of growth hormone.
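The sinusoidal body-temperature oscillation mentioned above can be sketched as a simple cosine between roughly 36.2 °C and 37.2 °C; placing the daily minimum at 6 a.m. follows the entrained-adult example given later in this section, and the smooth cosine shape itself is an idealization.

    import math

    # Idealized circadian body-temperature curve oscillating between
    # about 36.2 °C and 37.2 °C (values from the text), minimum at 6 a.m.
    T_MIN, T_MAX = 36.2, 37.2
    MEAN = (T_MIN + T_MAX) / 2       # 36.7 °C
    AMPLITUDE = (T_MAX - T_MIN) / 2  # 0.5 °C
    MINIMUM_HOUR = 6                 # assumed phase of the daily minimum

    def body_temperature(hour_of_day):
        """Approximate core body temperature (°C) at a given clock hour."""
        phase = 2 * math.pi * (hour_of_day - MINIMUM_HOUR) / 24
        return MEAN - AMPLITUDE * math.cos(phase)

    for hour in (6, 12, 18, 0):
        print(hour, round(body_temperature(hour), 2))
    # 6 -> 36.2 (minimum), 18 -> 37.2 (maximum)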
The circadian rhythm influences the ideal timing of a restorative sleep episode. Sleepiness increases during the night. REM sleep occurs more during body temperature minimum within the circadian cycle, whereas slow-wave sleep can occur more independently of circadian time.
The internal circadian clock is profoundly influenced by changes in light, since these are its main clues about what time it is. Exposure to even small amounts of light during the night can suppress melatonin secretion, and increase body temperature and wakefulness. Short pulses of light, at the right moment in the circadian cycle, can significantly 'reset' the internal clock. Blue light, in particular, exerts the strongest effect, leading to concerns that use of a screen before bed may interfere with sleep.
Modern humans often find themselves desynchronized from their internal circadian clock, due to the requirements of work (especially night shifts), long-distance travel, and the influence of universal indoor lighting. Even if they have sleep debt, or feel sleepy, people can have difficulty staying asleep at the peak of their circadian cycle. Conversely, they can have difficulty waking up in the trough of the cycle. A healthy young adult entrained to the sun will (during most of the year) fall asleep a few hours after sunset, experience body temperature minimum at 6 a.m., and wake up a few hours after sunrise.
Process S
Generally speaking, the longer an organism is awake, the more it feels a need to sleep ("sleep debt"). This driver of sleep is referred to as Process S. The balance between sleeping and waking is regulated by a process called homeostasis. Induced or perceived lack of sleep is called sleep deprivation.
Process S is driven by the depletion of glycogen and accumulation of adenosine in the forebrain that disinhibits the ventrolateral preoptic nucleus, allowing for inhibition of the ascending reticular activating system.
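As a minimal numerical sketch of the homeostatic drive described above, sleep pressure can be modeled as rising toward a ceiling while awake and decaying while asleep. The exponential form and the time constants below are illustrative assumptions in the spirit of the classic two-process model, not values given in the text.

    import math

    # Illustrative homeostatic sleep pressure ("Process S"): it rises toward an
    # upper asymptote during wakefulness and decays toward a lower asymptote
    # during sleep. Asymptotes and time constants are assumptions.
    UPPER, LOWER = 1.0, 0.0
    TAU_WAKE, TAU_SLEEP = 18.0, 4.0  # hours (illustrative)

    def after_wake(s0, hours_awake):
        """Sleep pressure after a period of wakefulness starting from s0."""
        return UPPER - (UPPER - s0) * math.exp(-hours_awake / TAU_WAKE)

    def after_sleep(s0, hours_asleep):
        """Sleep pressure after a period of sleep starting from s0."""
        return LOWER + (s0 - LOWER) * math.exp(-hours_asleep / TAU_SLEEP)

    s = 0.2                 # pressure at waking
    s = after_wake(s, 16)   # a 16-hour day awake
    print(round(s, 2))      # pressure has built up (~0.67)
    s = after_sleep(s, 8)   # an 8-hour night of sleep
    print(round(s, 2))      # pressure largely dissipated (~0.09)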
Sleep deprivation tends to cause slower brain waves in the frontal cortex, shortened attention span, higher anxiety, impaired memory, and a grouchy mood. Conversely, a well-rested organism tends to have improved memory and mood. Neurophysiological and functional imaging studies have demonstrated that frontal regions of the brain are particularly responsive to homeostatic sleep pressure.
There is disagreement on how much sleep debt is possible to accumulate, and whether sleep debt is accumulated against an individual's average sleep or some other benchmark. It is also unclear whether the prevalence of sleep debt among adults has changed appreciably in the industrialized world in recent decades. Sleep debt does show some evidence of being cumulative. Subjectively, however, humans seem to reach maximum sleepiness 30 hours after waking. It is likely that in Western societies, children are sleeping less than they previously have.
One neurochemical indicator of sleep debt is adenosine, a neurotransmitter that inhibits many of the bodily processes associated with wakefulness. Adenosine levels increase in the cortex and basal forebrain during prolonged wakefulness, and decrease during the sleep-recovery period, potentially acting as a homeostatic regulator of sleep. Coffee, tea, and other sources of caffeine temporarily block the effect of adenosine, prolong sleep latency, and reduce total sleep time and quality.
Social timing
Humans are also influenced by aspects of social time, such as the hours when other people are awake, the hours when work is required, the time on clocks, etc. Time zones, standard times used to unify the timing for people in the same area, correspond only approximately to the natural rising and setting of the sun. An extreme example of the approximate nature of time zones is China, a country which used to span five time zones and now officially uses only one (UTC+8).
Distribution
In polyphasic sleep, an organism sleeps several times in a 24-hour cycle, whereas in monophasic sleep this occurs all at once. Under experimental conditions, humans tend to alternate more frequently between sleep and wakefulness (i.e., exhibit more polyphasic sleep) if they have nothing better to do. Given a 14-hour period of darkness in experimental conditions, humans tended towards bimodal sleep, with two sleep periods concentrated at the beginning and at the end of the dark time. Bimodal sleep in humans was more common before the Industrial Revolution.
Different characteristic sleep patterns, such as the familiarly so-called "early bird" and "night owl", are called chronotypes. Genetics and sex have some influence on chronotype, but so do habits. Chronotype is also liable to change over the course of a person's lifetime. Seven-year-olds are better disposed to wake up early in the morning than are fifteen-year-olds. Chronotypes far outside the normal range are called circadian rhythm sleep disorders.
Naps
Naps are short periods of sleep that one might take during the daytime, often in order to get the necessary amount of rest. Napping is often associated with childhood, but around one-third of American adults partake in it daily. The optimal nap duration is around 10–20 minutes, as research indicates it takes at least 30 minutes to enter slow-wave sleep, the deepest period of sleep. Napping for longer and entering the slow-wave cycles can make it difficult to wake from the nap and leave one feeling unrested. This period of drowsiness is called sleep inertia.
The siesta habit has recently been associated with a 37% lower coronary mortality, possibly due to reduced cardiovascular stress mediated by daytime sleep. Short naps at mid-day and mild evening exercise were found to be effective for improved sleep, cognitive tasks, and mental health in elderly people.
Genetics
Monozygotic (identical) but not dizygotic (fraternal) twins tend to have similar sleep habits. Neurotransmitters, molecules whose production can be traced to specific genes, are one genetic influence on sleep that can be analyzed. The circadian clock has its own set of genes. Genes which may influence sleep include ABCC9, DEC2, Dopamine receptor D2 and variants near PAX 8 and VRK2. While the latter have been found in a GWAS study that primarily detects correlations (but not necessarily causation), other genes have been shown to have a more direct effect. For instance, mice lacking dihydropyrimidine dehydrogenase (Dpyd) had 78.4 min less sleep during the lights-off period than wild-type mice. Dpyd encodes the rate-limiting enzyme in the metabolic pathway that catabolizes uracil and thymidine to β-alanine, an inhibitory neurotransmitter. This also supports the role of β-alanine as a neurotransmitter that promotes sleep in mice.
Genes for short sleep duration
The genes DEC2, ADRB1, NPSR1 and GRM1 are implicated in enabling short sleep.
Quality
The quality of sleep may be evaluated from an objective and a subjective point of view. Objective sleep quality refers to how difficult it is for a person to fall asleep and remain in a sleeping state, and how many times they wake up during a single night. Poor sleep quality disrupts the cycle of transition between the different stages of sleep. Subjective sleep quality in turn refers to a sense of being rested and regenerated after awaking from sleep. A study by A. Harvey et al. (2002) found that insomniacs were more demanding in their evaluations of sleep quality than individuals who had no sleep problems.
Homeostatic sleep propensity (the need for sleep as a function of the amount of time elapsed since the last adequate sleep episode) must be balanced against the circadian element for satisfactory sleep. Along with corresponding messages from the circadian clock, this tells the body it needs to sleep. The timing is correct when the following two circadian markers occur after the middle of the sleep episode and before awakening: maximum concentration of the hormone melatonin, and minimum core body temperature.
Ideal duration
Human sleep-needs vary by age and amongst individuals; sleep is considered to be adequate when there is no daytime sleepiness or dysfunction. Moreover, self-reported sleep duration is only moderately correlated with actual sleep time as measured by actigraphy, and those affected with sleep state misperception may typically report having slept only four hours despite having slept a full eight hours.
Researchers have found that sleeping 6–7 hours each night correlates with longevity and cardiac health in humans, though many underlying factors may be involved in the causality behind this relationship.
Sleep difficulties are furthermore associated with psychiatric disorders such as depression, alcoholism, and bipolar disorder. Up to 90 percent of adults with depression are found to have sleep difficulties. Dysregulation detected by EEG includes disturbances in sleep continuity, decreased delta sleep and altered REM patterns with regard to latency, distribution across the night and density of eye movements.
Sleep duration can also vary according to season. Up to 90% of people report longer sleep duration in winter, which may lead to more pronounced seasonal affective disorder.
Children
By the time infants reach the age of two, their brain size has reached 90 percent of an adult-sized brain; a majority of this brain growth has occurred during the period of life with the highest rate of sleep. The hours that children spend asleep influence their ability to perform on cognitive tasks. Children who sleep through the night and have few night waking episodes have higher cognitive attainments and easier temperaments than other children.
Sleep also influences language development. To test this, researchers taught infants a faux language and observed their recollection of the rules for that language. Infants who slept within four hours of learning the language could remember the language rules better, while infants who stayed awake longer did not recall those rules as well. There is also a relationship between infants' vocabulary and sleeping: infants who sleep longer at night at 12 months have better vocabularies at 26 months.
Children can greatly benefit from a structured bedtime routine. This can look different among families, but will generally consist of a set of rituals such as reading a bedtime story, a bath, and brushing teeth, and may also include a show of affection from the parent to the child, such as a hug or kiss before bed. A bedtime routine will also include a consistent time that the child is expected to be in bed ready for sleep. Having a reliable bedtime routine can help improve a child's quality of sleep as well as prepare them to make and keep healthy sleep hygiene habits in the future.
Recommended duration
Children need many hours of sleep per day in order to develop and function properly: up to 18 hours for newborn babies, with a declining rate as a child ages. Early in 2015, after a two-year study, the National Sleep Foundation in the US announced newly revised recommendations for sleep duration by age.
Functions
Restoration
The sleeping brain has been shown to remove metabolic end products at a faster rate than during an awake state, by increasing the flow of cerebrospinal fluid during sleep. The mechanism for this removal appears to be the glymphatic system, a system that does for the brain what the lymphatic system does for the body. Further research has shown that the glymphatic system is driven by pulses of hormones that in turn create surges in blood flow that cause the cerebrospinal fluid to flow, carrying away metabolites.
Sleep may facilitate the synthesis of molecules that help repair and protect the brain from metabolic end products generated during waking. Anabolic hormones, such as growth hormones, are secreted preferentially during sleep. The brain concentration of glycogen increases during sleep, and is depleted through metabolism during wakefulness.
The human organism physically restores itself during sleep, occurring mostly during slow-wave sleep during which body temperature, heart rate, and brain oxygen consumption decrease. In both the brain and body, the reduced rate of metabolism enables countervailing restorative processes. While the body benefits from sleep, the brain actually requires sleep for restoration, whereas these processes can take place during quiescent waking in the rest of the body. The essential function of sleep may be its restorative effect on the brain: "Sleep is of the brain, by the brain and for the brain." Furthermore, this includes almost any brain, no matter how small: sleep is observed to be a necessary behavior across most of the animal kingdom, including some of the least cognitively advanced animals, implying that sleep is essential to the most fundamental brain processes, i.e. neuronal firing. This shows that sleep is vital even when there is no need for other functions of sleep, such as memory consolidation or dreaming.
Memory processing
It has been widely accepted that sleep supports the formation of long-term memory and generally improves recall of previous learning and experiences. However, its benefit seems to depend on the phase of sleep and the type of memory. For example, studies applying declarative and procedural memory-recall tasks across early and late nocturnal sleep, as well as wakefulness control conditions, have shown that declarative memory improves more during early sleep (dominated by SWS), while procedural memory improves more during late sleep (dominated by REM sleep).
With regard to declarative memory, the functional role of SWS has been associated with hippocampal replays of previously encoded neural patterns that seem to facilitate long-term memory consolidation. This assumption is based on the active system consolidation hypothesis, which states that repeated reactivations of newly encoded information in the hippocampus during slow oscillations in NREM sleep mediate the stabilization and gradual integration of declarative memory with pre-existing knowledge networks on the cortical level. It assumes the hippocampus might hold information only temporarily and in a fast-learning rate, whereas the neocortex is related to long-term storage and a slow-learning rate. This dialogue between the hippocampus and neocortex occurs in parallel with hippocampal sharp-wave ripples and thalamo-cortical spindles, synchrony that drives the formation of the spindle-ripple event which seems to be a prerequisite for the formation of long-term memories.
Reactivation of memory also occurs during wakefulness, where its function is associated with updating the reactivated memory with newly encoded information, whereas reactivations during SWS are presented as crucial for memory stabilization. Based on targeted memory reactivation (TMR) experiments that use associated memory cues to trigger memory traces during sleep, several studies have reaffirmed the importance of nocturnal reactivations for the formation of persistent memories in neocortical networks, as well as highlighting the possibility of increasing people's memory performance on declarative recall.
Furthermore, nocturnal reactivation seems to share the same neural oscillatory patterns as reactivation during wakefulness, processes which might be coordinated by theta activity. During wakefulness, theta oscillations have often been related to successful performance in memory tasks, and cued memory reactivations during sleep have shown that theta activity is significantly stronger during subsequent recognition of cued stimuli compared to uncued ones, possibly indicating a strengthening of memory traces and lexical integration by cuing during sleep. However, the beneficial effect of TMR for memory consolidation seems to occur only if the cued memories can be related to prior knowledge.
Dreaming
During sleep, especially REM sleep, humans tend to experience dreams. These are elusive and mostly unpredictable first-person experiences which seem logical and realistic to the dreamer while they are in progress, despite their frequently bizarre, irrational, and/or surreal qualities that become apparent when assessed after waking. Dreams often seamlessly incorporate concepts, situations, people, and objects within a person's mind that would not normally go together. They can include apparent sensations of all types, especially vision and movement.
Dreams tend to rapidly fade from memory after waking. Some people choose to keep a dream journal, which they believe helps them build dream recall and facilitate the ability to experience lucid dreams.
A lucid dream is a type of dream in which the dreamer becomes aware that they are dreaming while dreaming. In a preliminary study, dreamers were able to consciously communicate with experimenters via eye movements or facial muscle signals, and were able to comprehend complex questions and use working memory.
People have proposed many hypotheses about the functions of dreaming. Sigmund Freud postulated that dreams are the symbolic expression of frustrated desires that have been relegated to the unconscious mind, and he used dream interpretation in the form of psychoanalysis in attempting to uncover these desires.
Counterintuitively, penile erections during sleep are not more frequent during sexual dreams than during other dreams. The parasympathetic nervous system experiences increased activity during REM sleep which may cause erection of the penis or clitoris. In males, 80% to 95% of REM sleep is normally accompanied by partial to full penile erection, while only about 12% of men's dreams contain sexual content.
Disorders
Insomnia
Insomnia is a general term for difficulty falling asleep and/or staying asleep. Insomnia is the most common sleep problem, with many adults reporting occasional insomnia, and 10–15% reporting a chronic condition. Insomnia can have many different causes, including psychological stress, a poor sleep environment, an inconsistent sleep schedule, or excessive mental or physical stimulation in the hours before bedtime. Insomnia is often treated through behavioral changes like keeping a regular sleep schedule, avoiding stimulating or stressful activities before bedtime, and cutting down on stimulants such as caffeine. The sleep environment may be improved by installing heavy drapes to shut out all sunlight, and keeping computers, televisions, and work materials out of the sleeping area.
A 2010 review of published scientific research suggested that exercise generally improves sleep for most people, and helps sleep disorders such as insomnia. The optimum time to exercise may be 4 to 8 hours before bedtime, though exercise at any time of day is beneficial, with the exception of heavy exercise taken shortly before bedtime, which may disturb sleep. However, there is insufficient evidence to draw detailed conclusions about the relationship between exercise and sleep. Nonbenzodiazepine sleeping medications such as Ambien, Imovane, and Lunesta (also known as "Z-drugs"), while initially believed to be better and safer than earlier generations of sedatives, including benzodiazepines, are now known to be almost entirely the same as benzodiazepines in terms of their pharmacodynamics, differing only at the molecular level in their chemical structure, and therefore exhibit similar benefits, side-effects, and risks. White noise appears to be a promising treatment for insomnia.
Sleep health
Sleep duration and quality
Sleep duration measures the length of sleep, whereas sleep quality includes factors such as speed in falling asleep and whether sleep is unbroken.
Low quality sleep has been linked with health conditions like cardiovascular disease, obesity, and mental illness. While poor sleep is common among those with cardiovascular disease, some research indicates that poor sleep can be a contributing cause. Short sleep duration of less than seven hours is correlated with coronary heart disease and increased risk of death from coronary heart disease. Sleep duration greater than nine hours is also correlated with coronary heart disease, as well as stroke and cardiovascular events.
In both children and adults, short sleep duration is associated with an increased risk of obesity, with various studies reporting an increased risk of 45–55%. Other aspects of sleep health have been associated with obesity, including daytime napping, sleep timing, the variability of sleep timing, and low sleep efficiency. However, sleep duration is the most-studied for its impact on obesity.
Sleep problems have been frequently viewed as a symptom of mental illness rather than a causative factor. However, a growing body of evidence suggests that they are both a cause and a symptom of mental illness. Insomnia is a significant predictor of major depressive disorder; a meta-analysis of 170,000 people showed that insomnia at the beginning of a study period indicated a more than twofold increased risk for major depressive disorder. Some studies have also indicated a correlation between insomnia and anxiety, post-traumatic stress disorder, and suicide. Sleep disorders can increase the risk of psychosis and worsen the severity of psychotic episodes.
Sleep research also displays differences in race and class. Short sleep and poor sleep are observed more frequently in ethnic minorities than in whites in the USA. African-Americans report experiencing short durations of sleep five times more often than whites, possibly as a result of social and environmental factors. A study done in the USA suggested that higher rates of sleep apnea (and poorer responses to treatment) are suffered by children in disadvantaged neighborhoods (which, in context, includes a disproportionate effect on children of African-American descent).
Sleep hygiene
Sleep health can be improved by implementing good sleep hygiene habits, which support physical and mental health by providing the body with the rejuvenation that only restful sleep can provide. Ways to improve sleep health include going to sleep at consistent times every night, avoiding electronic devices such as televisions in the bedroom, getting adequate exercise during the day, and avoiding caffeine in the hours before going to sleep. Another way to improve sleep hygiene is to create a peaceful and relaxing sleep environment: sleeping in a dark and clean room, with aids such as a white noise machine, can help facilitate restful sleep. However, noise other than white noise may not be good for sleep.
Drugs and diet
Drugs which induce sleep, known as hypnotics, include benzodiazepines (although these interfere with REM); nonbenzodiazepine hypnotics such as eszopiclone (Lunesta), zaleplon (Sonata), and zolpidem (Ambien); antihistamines such as diphenhydramine (Benadryl) and doxylamine; alcohol (ethanol), which exerts an excitatory rebound effect later in the night and interferes with REM; barbiturates, which have the same problem; melatonin (a component of the circadian clock); and cannabis (which may also interfere with REM). Some opioids (including morphine, codeine, heroin, and oxycodone) also induce sleep, and can disrupt sleep architecture and sleep stage distribution. The endogenously produced drug gamma-hydroxybutyrate (GHB) is capable of producing high-quality sleep that is indistinguishable from natural sleep architecture in humans.
Stimulants, which inhibit sleep, include caffeine, an adenosine antagonist; amphetamine, methamphetamine, MDMA, empathogen-entactogens, and related drugs; cocaine, which can alter the circadian rhythm, and methylphenidate, which acts similarly; and eugeroic drugs like modafinil and armodafinil with poorly understood mechanisms. Consuming high amounts of caffeine can result in interrupted sleep patterns and sometimes sleep deprivation; the resulting drowsiness can then lead to higher caffeine consumption in order to stay awake the next day. This vicious cycle can lead to decreased cognitive function and an overall feeling of fatigue.
Some drugs may alter sleep architecture without inhibiting or inducing sleep. Drugs that amplify or inhibit endocrine and immune system secretions associated with certain sleep stages have been shown to alter sleep architecture. The growth hormone releasing hormone receptor agonist MK-677 has been shown to increase REM in older adults as well as stage IV sleep in younger adults by approximately 50%.
Diet
Dietary and nutritional choices may affect sleep duration and quality. One 2016 review indicated that a high-carbohydrate diet promoted a shorter onset to sleep and a longer duration of sleep than a high-fat diet. A 2012 investigation indicated that mixed micronutrients and macronutrients are needed to promote quality sleep. A varied diet containing fresh fruits and vegetables, low saturated fat, and whole grains may be optimal for individuals seeking to improve sleep quality. Epidemiological studies indicate fewer insomnia symptoms and better sleep quality with a Mediterranean diet. Two studies have indicated a benefit of tart cherry juice for insomnia, or for increasing sleep efficiency as well as total sleep time. High-quality clinical trials on long-term dietary practices are needed to better define the influence of diet on sleep quality.
In culture
Anthropology
Research suggests that sleep patterns vary significantly across cultures. The most striking differences are observed between societies that have plentiful sources of artificial light and ones that do not. The primary difference appears to be that pre-light cultures have more broken-up sleep patterns. For example, people without artificial light might go to sleep far sooner after the sun sets, but then wake up several times throughout the night, punctuating their sleep with periods of wakefulness, perhaps lasting several hours. During pre-industrial Europe, biphasic (bimodal) sleeping was considered the norm. Sleep onset was determined not by a set bedtime, but by whether there were things to do.
The boundaries between sleeping and waking are blurred in these societies. Some observers believe that nighttime sleep in these societies is most often split into two main periods, the first characterized primarily by deep sleep and the second by REM sleep.
Some societies display a fragmented sleep pattern in which people sleep at all times of the day and night for shorter periods. In many nomadic or hunter-gatherer societies, people sleep on and off throughout the day or night depending on what is happening. Plentiful artificial light has been available in the industrialized West since at least the mid-19th century, and sleep patterns have changed significantly everywhere that lighting has been introduced. In general, people sleep in a more concentrated burst through the night, going to sleep much later, although this is not always the case.
Historian A. Roger Ekirch thinks that the traditional pattern of "segmented sleep," as it is called, began to disappear among the urban upper class in Europe in the late 17th century and the change spread over the next 200 years; by the 1920s "the idea of a first and second sleep had receded entirely from our social consciousness." Ekirch attributes the change to increases in "street lighting, domestic lighting and a surge in coffee houses," which slowly made nighttime a legitimate time for activity, decreasing the time available for rest. Today in most societies people sleep during the night, but in very hot climates they may sleep during the day. During Ramadan, many Muslims sleep during the day rather than at night.
In some societies, people sleep with at least one other person (sometimes many) or with animals. In other cultures, people rarely sleep with anyone except for an intimate partner. In almost all societies, sleeping partners are strongly regulated by social standards. For example, a person might only sleep with the immediate family, the extended family, a spouse or romantic partner, children, children of a certain age, children of a specific gender, peers of a certain gender, friends, peers of equal social rank, or with no one at all. Sleep may be an actively social time, depending on the sleep groupings, with no constraints on noise or activity.
People sleep in a variety of locations. Some sleep directly on the ground; others on a skin or blanket; others sleep on platforms or beds. Some sleep with blankets, some with pillows, some with simple headrests, some with no head support. These choices are shaped by a variety of factors, such as climate, protection from predators, housing type, technology, personal preference, and the incidence of pests.
In mythology and literature
Sleep has been seen in culture as similar to death since antiquity; in Greek mythology, Hypnos (the god of sleep) and Thanatos (the god of death) were both said to be the children of Nyx (the goddess of night). John Donne, Samuel Taylor Coleridge, Percy Bysshe Shelley, John Keats and other poets have all written poems about the relationship between sleep and death. Shelley describes them as "both so passing, strange and wonderful!" Keats similarly poses the question: "Can death be sleep, when life is but a dream". Many people consider dying in one's sleep is the most peaceful way to die. Phrases such as "big sleep" and "rest in peace" are often used in reference to death, possibly in an effort to lessen its finality. Sleep and dreaming have sometimes been seen as providing the potential for visionary experiences. In medieval Irish tradition, in order to become a filí, the poet was required to undergo a ritual called the imbas forosnai, in which they would enter a mantic, trancelike sleep.
Many cultural stories have been told about people falling asleep for extended periods of time. The earliest of these stories is the ancient Greek legend of Epimenides of Knossos. According to the biographer Diogenes Laërtius, Epimenides was a shepherd on the Greek island of Crete. One day, one of his sheep went missing and he went out to look for it, but became tired and fell asleep in a cave under Mount Ida. When he awoke, he continued searching for the sheep, but could not find it, so he returned to his old farm, only to discover that it was now under new ownership. He went to his hometown, but discovered that nobody there knew him. Finally, he met his younger brother, who was now an old man, and learned that he had been asleep in the cave for fifty-seven years.
A far more famous instance of a "long sleep" today is the Christian legend of the Seven Sleepers of Ephesus, in which seven Christians flee into a cave during pagan times in order to escape persecution, but fall asleep and wake up 360 years later to discover, to their astonishment, that the Roman Empire is now predominantly Christian. The American author Washington Irving's short story "Rip Van Winkle", first published in 1819 in his collection of short stories The Sketch Book of Geoffrey Crayon, Gent., is about a man in colonial America named Rip Van Winkle who falls asleep on one of the Catskill Mountains and wakes up twenty years later after the American Revolution. The story is now considered one of the greatest classics of American literature.
In studies on consciousness and philosophy
As an altered state of consciousness, dreamless deep sleep has been used as a way to investigate animal and human consciousness and qualia. Insights into how the living, sleeping brain differs from its wakeful state, and into the transition between the two, may have implications for potential explanations of human subjective experience, the so-called hard problem of consciousness, a question often delegated to the realm of philosophy, including neurophilosophy (or, in some cases, to religion and similar approaches).
In art
Of the thematic representations of sleep in art, physician and sleep researcher Meir Kryger wrote, "[Artists] have intense fascination with mythology, dreams, religious themes, the parallel between sleep and death, reward, abandonment of conscious control, healing, a depiction of innocence and serenity, and the erotic."
| Biology and health sciences | Biology | null |
27838 | https://en.wikipedia.org/wiki/Sequence | Sequence | In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order matters. Like a set, it contains members (also called elements, or terms). The number of elements (possibly infinite) is called the length of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and unlike a set, the order does matter. Formally, a sequence can be defined as a function from natural numbers (the positions of elements in the sequence) to the elements at each position. The notion of a sequence can be generalized to an indexed family, defined as a function from an arbitrary index set.
For example, (M, A, R, Y) is a sequence of letters with the letter "M" first and "Y" last. This sequence differs from (A, R, M, Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite, such as the sequence of all even positive integers (2, 4, 6, ...).
The position of an element in a sequence is its rank or index; it is the natural number for which the element is the image. The first element has index 0 or 1, depending on the context or a specific convention. In mathematical analysis, a sequence is often denoted by letters in the form of $a_n$, $b_n$ and $c_n$, where the subscript $n$ refers to the $n$th element of the sequence; for example, the $n$th element of the Fibonacci sequence $F$ is generally denoted as $F_n$.
In computing and computer science, finite sequences are usually called strings, words or lists, with the specific technical term chosen depending on the type of object the sequence enumerates and the different ways to represent the sequence in computer memory. Infinite sequences are called streams.
The empty sequence ( ) is included in most notions of sequence. It may be excluded depending on the context.
Examples and notation
A sequence can be thought of as a list of elements with a particular order. Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations and analysis. Sequences are also of interest in their own right, and can be studied as patterns or puzzles, such as in the study of prime numbers.
There are a number of ways to denote a sequence, some of which are more useful for specific types of sequences. One way to specify a sequence is to list all its elements. For example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation is used for infinite sequences as well. For instance, the infinite sequence of positive odd integers is written as (1, 3, 5, 7, ...). Because notating sequences with ellipsis leads to ambiguity, listing is most useful for customary infinite sequences which can be easily recognized from their first few elements. Other ways of denoting a sequence are discussed after the examples.
Examples
The prime numbers are the natural numbers greater than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, 17, ...). The prime numbers are widely used in mathematics, particularly in number theory where many results related to them exist.
The Fibonacci numbers comprise the integer sequence in which each element is the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1 so that the sequence is (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...).
Other examples of sequences include those made up of rational numbers, real numbers and complex numbers. The sequence (.9, .99, .999, .9999, ...), for instance, approaches the number 1. In fact, every real number can be written as the limit of a sequence of rational numbers (e.g. via its decimal expansion, also see completeness of the real numbers). As another example, $\pi$ is the limit of the sequence (3, 3.1, 3.14, 3.141, 3.1415, ...), which is increasing. A related sequence is the sequence of decimal digits of $\pi$, that is, (3, 1, 4, 1, 5, 9, ...). Unlike the preceding sequence, this sequence does not have any pattern that is easily discernible by inspection.
Other examples are sequences of functions, whose elements are functions instead of numbers.
The On-Line Encyclopedia of Integer Sequences comprises a large list of examples of integer sequences.
Indexing
Other notations can be useful for sequences whose pattern cannot be easily guessed or for sequences that do not have a pattern such as the digits of $\pi$. One such notation is to write down a general formula for computing the $n$th term as a function of $n$, enclose it in parentheses, and include a subscript indicating the set of values that $n$ can take. For example, in this notation the sequence of even numbers could be written as $(2n)_{n \in \mathbb{N}}$. The sequence of squares could be written as $(n^2)_{n \in \mathbb{N}}$. The variable $n$ is called an index, and the set of values that it can take is called the index set.
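As an illustration, a minimal Python sketch of this idea, treating each sequence as a function from the index $n$ to its $n$th element (the function names are only for this example):

```python
# A sequence as a function from the index n to its nth element.
def even(n: int) -> int:
    """nth term of the sequence (2n)."""
    return 2 * n

def square(n: int) -> int:
    """nth term of the sequence (n^2)."""
    return n * n

# First few terms, indexing from 1.
print([even(n) for n in range(1, 6)])    # [2, 4, 6, 8, 10]
print([square(n) for n in range(1, 6)])  # [1, 4, 9, 16, 25]
```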
It is often useful to combine this notation with the technique of treating the elements of a sequence as individual variables. This yields expressions like $(a_n)_{n \in \mathbb{N}}$, which denotes a sequence whose $n$th element is given by the variable $a_n$. For example, $a_1$ denotes the first element of the sequence, $a_2$ the second, and so on.
One can consider multiple sequences at the same time by using different variables; e.g. $(b_n)_{n \in \mathbb{N}}$ could be a different sequence than $(a_n)_{n \in \mathbb{N}}$. One can even consider a sequence of sequences: $((a_{m,n})_{n \in \mathbb{N}})_{m \in \mathbb{N}}$ denotes a sequence whose $m$th term is the sequence $(a_{m,n})_{n \in \mathbb{N}}$.
An alternative to writing the domain of a sequence in the subscript is to indicate the range of values that the index can take by listing its highest and lowest legal values. For example, the notation $(k^2)_{k=1}^{10}$ denotes the ten-term sequence of squares $(1, 4, 9, \ldots, 100)$. The limits $\infty$ and $-\infty$ are allowed, but they do not represent valid values for the index, only the supremum or infimum of such values, respectively. For example, the sequence $(a_n)_{n=1}^{\infty}$ is the same as the sequence $(a_n)_{n \in \mathbb{N}}$, and does not contain an additional term "at infinity". The sequence $(a_n)_{n=-\infty}^{\infty}$ is a bi-infinite sequence, and can also be written as $(\ldots, a_{-1}, a_0, a_1, a_2, \ldots)$.
In cases where the set of indexing numbers is understood, the subscripts and superscripts are often left off. That is, one simply writes $(a_k)$ for an arbitrary sequence. Often, the index $k$ is understood to run from 1 to $\infty$. However, sequences are frequently indexed starting from zero, as in $(a_k)_{k=0}^{\infty} = (a_0, a_1, a_2, \ldots)$.
In some cases, the elements of the sequence are related naturally to a sequence of integers whose pattern can be easily inferred. In these cases, the index set may be implied by a listing of the first few abstract elements. For instance, the sequence of squares of odd numbers could be denoted in any of the following ways.
Moreover, the subscripts and superscripts could have been left off in the third, fourth, and fifth notations, if the indexing set was understood to be the natural numbers. In the second and third bullets, there is a well-defined sequence $(a_k)_{k=1}^{\infty}$, but it is not the same as the sequence denoted by the expression.
Defining a sequence by recursion
Sequences whose elements are related to the previous elements in a straightforward way are often defined using recursion. This is in contrast to the definition of sequences of elements as functions of their positions.
To define a sequence by recursion, one needs a rule, called a recurrence relation, to construct each element in terms of the ones before it. In addition, enough initial elements must be provided so that all subsequent elements of the sequence can be computed by successive applications of the recurrence relation.
The Fibonacci sequence is a simple classical example, defined by the recurrence relation
$$F_n = F_{n-1} + F_{n-2},$$
with initial terms $F_0 = 0$ and $F_1 = 1$. From this, a simple computation shows that the first ten terms of this sequence are 0, 1, 1, 2, 3, 5, 8, 13, 21, and 34.
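As an illustration, a minimal Python sketch of computing terms directly from this recurrence:

```python
def fibonacci(count: int) -> list[int]:
    """Return the first `count` terms of the sequence defined by
    F(n) = F(n-1) + F(n-2), with initial terms F(0) = 0 and F(1) = 1."""
    terms = [0, 1]
    while len(terms) < count:
        terms.append(terms[-1] + terms[-2])
    return terms[:count]

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```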
A complicated example of a sequence defined by a recurrence relation is Recamán's sequence, defined by the recurrence relation
$$a_n = \begin{cases} a_{n-1} - n, & \text{if } a_{n-1} - n \text{ is positive and not already in the sequence,} \\ a_{n-1} + n, & \text{otherwise,} \end{cases}$$
with initial term $a_0 = 0$.
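A short Python sketch of this rule (illustrative; it follows the statement of the recurrence given above):

```python
def recaman(count: int) -> list[int]:
    """First `count` terms of Recamán's sequence: subtract n when the result
    is positive and not yet in the sequence, otherwise add n; a(0) = 0."""
    terms = [0]
    seen = {0}
    for n in range(1, count):
        candidate = terms[-1] - n
        if candidate <= 0 or candidate in seen:
            candidate = terms[-1] + n
        terms.append(candidate)
        seen.add(candidate)
    return terms

print(recaman(10))  # [0, 1, 3, 6, 2, 7, 13, 20, 12, 21]
```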
A linear recurrence with constant coefficients is a recurrence relation of the form
$$a_n = c_0 + c_1 a_{n-1} + \cdots + c_k a_{n-k},$$
where $c_0, \ldots, c_k$ are constants. There is a general method for expressing the general term $a_n$ of such a sequence as a function of $n$; see Linear recurrence. In the case of the Fibonacci sequence, one has $c_0 = 0$ and $c_1 = c_2 = 1$, and the resulting function of $n$ is given by Binet's formula.
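As a small numerical check (an illustrative sketch), Binet's closed form $F_n = (\varphi^n - \psi^n)/\sqrt{5}$, with $\varphi = (1+\sqrt{5})/2$ and $\psi = (1-\sqrt{5})/2$, can be compared against the recurrence in Python:

```python
import math

def fib_binet(n: int) -> int:
    """Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5), rounded to an integer."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi ** n - psi ** n) / sqrt5)

def fib_recurrence(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_binet(n) == fib_recurrence(n) for n in range(20))
```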
A holonomic sequence is a sequence defined by a recurrence relation of the form
$$a_{n+k} = c_{k-1}(n)\, a_{n+k-1} + \cdots + c_0(n)\, a_n,$$
where $c_0(n), \ldots, c_{k-1}(n)$ are polynomials in $n$. For most holonomic sequences, there is no explicit formula for expressing $a_n$ as a function of $n$. Nevertheless, holonomic sequences play an important role in various areas of mathematics. For example, many special functions have a Taylor series whose sequence of coefficients is holonomic. The use of the recurrence relation allows a fast computation of values of such special functions.
Not all sequences can be specified by a recurrence relation. An example is the sequence of prime numbers in their natural order (2, 3, 5, 7, 11, 13, 17, ...).
Formal definition and basic properties
There are many different notions of sequences in mathematics, some of which (e.g., exact sequence) are not covered by the definitions and notations introduced below.
Definition
In this article, a sequence is formally defined as a function whose domain is an interval of integers. This definition covers several different uses of the word "sequence", including one-sided infinite sequences, bi-infinite sequences, and finite sequences (see below for definitions of these kinds of sequences). However, many authors use a narrower definition by requiring the domain of a sequence to be the set of natural numbers. This narrower definition has the disadvantage that it rules out finite sequences and bi-infinite sequences, both of which are usually called sequences in standard mathematical practice. Another disadvantage is that, if one removes the first terms of a sequence, one needs to reindex the remaining terms to fit this definition. In some contexts, to shorten exposition, the codomain of the sequence is fixed by context, for example by requiring it to be the set R of real numbers, the set C of complex numbers, or a topological space.
Although sequences are a type of function, they are usually distinguished notationally from functions in that the input is written as a subscript rather than in parentheses, that is, $a_n$ rather than $a(n)$. There are terminological differences as well: the value of a sequence at the lowest input (often 1) is called the "first element" of the sequence, the value at the second smallest input (often 2) is called the "second element", etc. Also, while a function abstracted from its input is usually denoted by a single letter, e.g. f, a sequence abstracted from its input is usually written by a notation such as $(a_n)_{n \in A}$, or just as $(a_n)$. Here $A$ is the domain, or index set, of the sequence.
Sequences and their limits (see below) are important concepts for studying topological spaces. An important generalization of sequences is the concept of nets. A net is a function from a (possibly uncountable) directed set to a topological space. The notational conventions for sequences normally apply to nets as well.
Finite and infinite
The length of a sequence is defined as the number of terms in the sequence.
A sequence of a finite length n is also called an n-tuple. Finite sequences include the empty sequence ( ) that has no elements.
Normally, the term infinite sequence refers to a sequence that is infinite in one direction, and finite in the other—the sequence has a first element, but no final element. Such a sequence is called a singly infinite sequence or a one-sided infinite sequence when disambiguation is necessary. In contrast, a sequence that is infinite in both directions—i.e. that has neither a first nor a final element—is called a bi-infinite sequence, two-way infinite sequence, or doubly infinite sequence. A function from the set Z of all integers into a set, such as for instance the sequence of all even integers ( ..., −4, −2, 0, 2, 4, 6, 8, ... ), is bi-infinite. This sequence could be denoted $(2n)_{n \in \mathbb{Z}}$.
Increasing and decreasing
A sequence is said to be monotonically increasing if each term is greater than or equal to the one before it. For example, the sequence $(a_n)_{n \in \mathbb{N}}$ is monotonically increasing if and only if $a_{n+1} \geq a_n$ for all $n \in \mathbb{N}$. If each consecutive term is strictly greater than (>) the previous term then the sequence is called strictly monotonically increasing. A sequence is monotonically decreasing if each consecutive term is less than or equal to the previous one, and is strictly monotonically decreasing if each is strictly less than the previous. If a sequence is either increasing or decreasing it is called a monotone sequence. This is a special case of the more general notion of a monotonic function.
The terms nondecreasing and nonincreasing are often used in place of increasing and decreasing in order to avoid any possible confusion with strictly increasing and strictly decreasing, respectively.
Bounded
If the sequence of real numbers $(a_n)$ is such that all the terms are less than some real number $M$, then the sequence is said to be bounded from above. In other words, this means that there exists $M$ such that for all $n$, $a_n \leq M$. Any such $M$ is called an upper bound. Likewise, if, for some real $m$, $a_n \geq m$ for all $n$ greater than some $N$, then the sequence is bounded from below and any such $m$ is called a lower bound. If a sequence is both bounded from above and bounded from below, then the sequence is said to be bounded.
Subsequences
A subsequence of a given sequence is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. For instance, the sequence of positive even integers (2, 4, 6, ...) is a subsequence of the positive integers (1, 2, 3, ...). The positions of some elements change when other elements are deleted. However, the relative positions are preserved.
Formally, a subsequence of the sequence $(a_n)_{n \in \mathbb{N}}$ is any sequence of the form $(a_{n_k})_{k \in \mathbb{N}}$, where $(n_k)_{k \in \mathbb{N}}$ is a strictly increasing sequence of positive integers.
Other types of sequences
Some other types of sequences that are easy to define include:
An integer sequence is a sequence whose terms are integers.
A polynomial sequence is a sequence whose terms are polynomials.
A positive integer sequence is sometimes called multiplicative, if $a_{nm} = a_n a_m$ for all pairs $n$, $m$ such that $n$ and $m$ are coprime. In other instances, sequences are often called multiplicative, if $a_n = n a_1$ for all $n$. Moreover, a multiplicative Fibonacci sequence satisfies the recursion relation $a_n = a_{n-1} a_{n-2}$.
A binary sequence is a sequence whose terms have one of two discrete values, e.g. base 2 values (0,1,1,0, ...), a series of coin tosses (Heads/Tails) H,T,H,H,T, ..., the answers to a set of True or False questions (T, F, T, T, ...), and so on.
Limits and convergence
An important property of a sequence is convergence. If a sequence converges, it converges to a particular value known as the limit. If a sequence converges to some limit, then it is convergent. A sequence that does not converge is divergent.
Informally, a sequence has a limit if the elements of the sequence become closer and closer to some value $L$ (called the limit of the sequence), and they become and remain arbitrarily close to $L$, meaning that given a real number $\varepsilon$ greater than zero, all but a finite number of the elements of the sequence have a distance from $L$ less than $\varepsilon$.
For example, the sequence $\left(\tfrac{1}{n}\right)_{n \in \mathbb{N}}$, which begins 1, 1/2, 1/3, ..., converges to the value 0. On the other hand, the sequences $(n^3)_{n \in \mathbb{N}}$ (which begins 1, 8, 27, ...) and $((-1)^n)_{n \in \mathbb{N}}$ (which begins −1, 1, −1, 1, ...) are both divergent.
If a sequence converges, then the value it converges to is unique. This value is called the limit of the sequence. The limit of a convergent sequence $(a_n)$ is normally denoted $\lim_{n\to\infty} a_n$. If $(a_n)$ is a divergent sequence, then the expression $\lim_{n\to\infty} a_n$ is meaningless.
Formal definition of convergence
A sequence of real numbers $(a_n)$ converges to a real number $L$ if, for all $\varepsilon > 0$, there exists a natural number $N$ such that for all $n \geq N$ we have
$$|a_n - L| < \varepsilon.$$
If $(a_n)$ is a sequence of complex numbers rather than a sequence of real numbers, this last formula can still be used to define convergence, with the provision that $|\cdot|$ denotes the complex modulus, i.e. $|z| = \sqrt{z^* z}$. If $(a_n)$ is a sequence of points in a metric space, then the formula can be used to define convergence, if the expression $|a_n - L|$ is replaced by the expression $\operatorname{dist}(a_n, L)$, which denotes the distance between $a_n$ and $L$.
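As an illustration, a small Python sketch of this definition for the particular sequence $a_n = 1/n$ and limit $L = 0$ (an example chosen here only for simplicity): for each $\varepsilon$ an explicit $N$ is exhibited and the inequality is checked over a range of $n \geq N$.

```python
def n_for_epsilon(epsilon: float) -> int:
    """For a_n = 1/n and L = 0, return an N with |a_n - L| < epsilon for all n >= N."""
    return int(1 / epsilon) + 1

for eps in (0.1, 0.01, 0.001):
    N = n_for_epsilon(eps)
    assert all(abs(1 / n - 0) < eps for n in range(N, N + 1000))
    print(eps, N)  # e.g. 0.1 -> 11, 0.01 -> 101, 0.001 -> 1001
```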
Applications and important results
If $(a_n)$ and $(b_n)$ are convergent sequences, then the following limits exist, and can be computed as follows:
$\lim_{n\to\infty} c\,a_n = c \lim_{n\to\infty} a_n$ for all real numbers $c$
$\lim_{n\to\infty} \frac{a_n}{b_n} = \frac{\lim_{n\to\infty} a_n}{\lim_{n\to\infty} b_n}$, provided that $\lim_{n\to\infty} b_n \neq 0$
$\lim_{n\to\infty} a_n^p = \left(\lim_{n\to\infty} a_n\right)^p$ for all $p > 0$ and $a_n > 0$
Moreover:
If $a_n \leq b_n$ for all $n$ greater than some $N$, then $\lim_{n\to\infty} a_n \leq \lim_{n\to\infty} b_n$.
(Squeeze theorem) If $(c_n)$ is a sequence such that $a_n \leq c_n \leq b_n$ for all $n > N$ and $\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = L$, then $(c_n)$ is convergent, and $\lim_{n\to\infty} c_n = L$.
If a sequence is bounded and monotonic then it is convergent.
A sequence is convergent if and only if all of its subsequences are convergent.
Cauchy sequences
A Cauchy sequence is a sequence whose terms become arbitrarily close together as n gets very large. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the Cauchy characterization of convergence for sequences:
A sequence of real numbers is convergent (in the reals) if and only if it is Cauchy.
In contrast, there are Cauchy sequences of rational numbers that are not convergent in the rationals, e.g. the sequence defined by $x_1 = 1$ and $x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)$ is Cauchy, but has no rational limit (its limit in the real numbers is the irrational number $\sqrt{2}$). More generally, any sequence of rational numbers that converges to an irrational number is Cauchy, but not convergent when interpreted as a sequence in the set of rational numbers.
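A Python sketch of this example (illustrative), using exact rational arithmetic so that every term is a rational number, while the values approach the irrational number $\sqrt{2}$:

```python
from fractions import Fraction

def babylonian_terms(count: int) -> list[Fraction]:
    """Terms of x_1 = 1, x_{n+1} = (x_n + 2/x_n) / 2, kept as exact rationals."""
    terms = [Fraction(1)]
    for _ in range(count - 1):
        x = terms[-1]
        terms.append((x + Fraction(2) / x) / 2)
    return terms

terms = babylonian_terms(6)
print([str(t) for t in terms])    # every term is rational: 1, 3/2, 17/12, 577/408, ...
print([float(t) for t in terms])  # numerically approaching 1.41421356...
```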
Metric spaces that satisfy the Cauchy characterization of convergence for sequences are called complete metric spaces and are particularly nice for analysis.
Infinite limits
In calculus, it is common to define notation for sequences which do not converge in the sense discussed above, but which instead become and remain arbitrarily large, or become and remain arbitrarily negative. If $a_n$ becomes arbitrarily large as $n \to \infty$, we write
$$\lim_{n\to\infty} a_n = \infty.$$
In this case we say that the sequence diverges, or that it converges to infinity. An example of such a sequence is $a_n = n$.
If $a_n$ becomes arbitrarily negative (i.e. negative and large in magnitude) as $n \to \infty$, we write
$$\lim_{n\to\infty} a_n = -\infty$$
and say that the sequence diverges or converges to negative infinity.
Series
A series is, informally speaking, the sum of the terms of a sequence. That is, it is an expression of the form $\sum_{n=1}^{\infty} a_n$ or $a_1 + a_2 + a_3 + \cdots$, where $(a_n)$ is a sequence of real or complex numbers. The partial sums of a series are the expressions resulting from replacing the infinity symbol with a finite number, i.e. the Nth partial sum of the series is the number
$$S_N = \sum_{n=1}^{N} a_n = a_1 + a_2 + \cdots + a_N.$$
The partial sums themselves form a sequence $(S_N)_{N \in \mathbb{N}}$, which is called the sequence of partial sums of the series $\sum_{n=1}^{\infty} a_n$. If the sequence of partial sums converges, then we say that the series is convergent, and the limit $\lim_{N\to\infty} S_N$ is called the value of the series. The same notation is used to denote a series and its value, i.e. we write $\sum_{n=1}^{\infty} a_n = \lim_{N\to\infty} S_N$.
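As an illustration, a brief Python sketch of a sequence of partial sums: for the series $\sum_{n \geq 1} 1/n^2$ (chosen here only as an example), the partial sums $S_N$ form a convergent sequence whose limit, $\pi^2/6$, is the value of the series.

```python
import math

def partial_sums(count: int) -> list[float]:
    """Sequence of partial sums S_N of the series sum_{n>=1} 1/n**2."""
    sums, total = [], 0.0
    for n in range(1, count + 1):
        total += 1 / n ** 2
        sums.append(total)
    return sums

print(partial_sums(5))   # [1.0, 1.25, 1.361..., 1.423..., 1.463...]
print(math.pi ** 2 / 6)  # 1.6449..., the value of the series
```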
Use in other fields of mathematics
Topology
Sequences play an important role in topology, especially in the study of metric spaces. For instance:
A metric space is compact exactly when it is sequentially compact.
A function from a metric space to another metric space is continuous exactly when it takes convergent sequences to convergent sequences.
A metric space is a connected space if and only if, whenever the space is partitioned into two nonempty sets, one of the two sets contains a sequence converging to a point in the other set.
A topological space is separable exactly when there is a dense sequence of points.
Sequences can be generalized to nets or filters. These generalizations allow one to extend some of the above theorems to spaces without metrics.
Product topology
The topological product of a sequence of topological spaces is the cartesian product of those spaces, equipped with a natural topology called the product topology.
More formally, given a sequence of spaces $(X_i)_{i \in \mathbb{N}}$, the product space
$$X = \prod_{i \in \mathbb{N}} X_i$$
is defined as the set of all sequences $(x_i)_{i \in \mathbb{N}}$ such that for each $i$, $x_i$ is an element of $X_i$. The canonical projections are the maps $p_i : X \to X_i$ defined by the equation $p_i\big((x_j)_{j \in \mathbb{N}}\big) = x_i$. Then the product topology on $X$ is defined to be the coarsest topology (i.e. the topology with the fewest open sets) for which all the projections $p_i$ are continuous. The product topology is sometimes called the Tychonoff topology.
Analysis
When discussing sequences in analysis, one will generally consider sequences of the form
$$(x_1, x_2, x_3, \ldots) \quad \text{or} \quad (x_n)_{n=1}^{\infty},$$
which is to say, infinite sequences of elements indexed by natural numbers.
A sequence may start with an index different from 1 or 0. For example, the sequence defined by $x_n = 1/\log(n)$ would be defined only for $n \geq 2$. When talking about such infinite sequences, it is usually sufficient (and does not change much for most considerations) to assume that the members of the sequence are defined at least for all indices large enough, that is, greater than some given $N$.
The most elementary type of sequences are numerical ones, that is, sequences of real or complex numbers. This type can be generalized to sequences of elements of some vector space. In analysis, the vector spaces considered are often function spaces. Even more generally, one can study sequences with elements in some topological space.
Sequence spaces
A sequence space is a vector space whose elements are infinite sequences of real or complex numbers. Equivalently, it is a function space whose elements are functions from the natural numbers to the field K, where K is either the field of real numbers or the field of complex numbers. The set of all such functions is naturally identified with the set of all possible infinite sequences with elements in K, and can be turned into a vector space under the operations of pointwise addition of functions and pointwise scalar multiplication. All sequence spaces are linear subspaces of this space. Sequence spaces are typically equipped with a norm, or at least the structure of a topological vector space.
The most important sequence spaces in analysis are the ℓp spaces, consisting of the p-power summable sequences, with the p-norm. These are special cases of Lp spaces for the counting measure on the set of natural numbers. Other important classes of sequences like convergent sequences or null sequences form sequence spaces, respectively denoted c and c0, with the sup norm. Any sequence space can also be equipped with the topology of pointwise convergence, under which it becomes a special kind of Fréchet space called an FK-space.
Linear algebra
Sequences over a field may also be viewed as vectors in a vector space. Specifically, the set of F-valued sequences (where F is a field) is a function space (in fact, a product space) of F-valued functions over the set of natural numbers.
Abstract algebra
Abstract algebra employs several types of sequences, including sequences of mathematical objects such as groups or rings.
Free monoid
If A is a set, the free monoid over A (denoted A*, also called Kleene star of A) is a monoid containing all the finite sequences (or strings) of zero or more elements of A, with the binary operation of concatenation. The free semigroup A+ is the subsemigroup of A* containing all elements except the empty sequence.
Exact sequences
In the context of group theory, a sequence
$$G_0 \xrightarrow{f_1} G_1 \xrightarrow{f_2} G_2 \xrightarrow{f_3} \cdots \xrightarrow{f_n} G_n$$
of groups and group homomorphisms is called exact, if the image (or range) of each homomorphism is equal to the kernel of the next:
$$\operatorname{im}(f_k) = \ker(f_{k+1}).$$
The sequence of groups and homomorphisms may be either finite or infinite.
A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms.
Spectral sequences
In homological algebra and algebraic topology, a spectral sequence is a means of computing homology groups by taking successive approximations. Spectral sequences are a generalization of exact sequences, and since their introduction by Jean Leray, they have become an important research tool, particularly in homotopy theory.
Set theory
An ordinal-indexed sequence is a generalization of a sequence. If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. In this terminology an ω-indexed sequence is an ordinary sequence.
Computing
In computer science, finite sequences are called lists. Potentially infinite sequences are called streams. Finite sequences of characters or digits are called strings.
Streams
Infinite sequences of digits (or characters) drawn from a finite alphabet are of particular interest in theoretical computer science. They are often referred to simply as sequences or streams, as opposed to finite strings. Infinite binary sequences, for instance, are infinite sequences of bits (characters drawn from the alphabet {0, 1}). The set C = {0, 1}∞ of all infinite binary sequences is sometimes called the Cantor space.
An infinite binary sequence can represent a formal language (a set of strings) by setting the n th bit of the sequence to 1 if and only if the n th string (in shortlex order) is in the language. This representation is useful in the diagonalization method for proofs.
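As an illustration, a Python sketch of this encoding, assuming the two-letter alphabet {a, b} and a hypothetical example language (strings of even length): bit $n$ is 1 exactly when the $n$th string in shortlex order belongs to the language.

```python
from itertools import count, product

def shortlex(alphabet: str):
    """Yield all strings over `alphabet` in shortlex order: '', 'a', 'b', 'aa', 'ab', ..."""
    for length in count(0):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

def language_bits(membership, alphabet: str, how_many: int) -> list[int]:
    """First `how_many` bits of the characteristic sequence of a language."""
    strings = shortlex(alphabet)
    return [1 if membership(next(strings)) else 0 for _ in range(how_many)]

# Hypothetical example language: strings of even length.
print(language_bits(lambda w: len(w) % 2 == 0, "ab", 8))  # [1, 0, 0, 1, 1, 1, 1, 0]
```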
| Mathematics | Calculus and analysis | null |
27856 | https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s%20cat | Schrödinger's cat | In quantum mechanics, Schrödinger's cat is a thought experiment concerning quantum superposition. In the thought experiment, a hypothetical cat may be considered simultaneously both alive and dead, while it is unobserved in a closed box, as a result of its fate being linked to a random subatomic event that may or may not occur. This experiment viewed this way is described as a paradox. This thought experiment was devised by physicist Erwin Schrödinger in 1935 in a discussion with Albert Einstein to illustrate what Schrödinger saw as the problems of the Copenhagen interpretation of quantum mechanics.
In Schrödinger's original formulation, a cat, a flask of poison, and a radioactive source are placed in a sealed box. If an internal radiation monitor (e.g. a Geiger counter) detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison, which kills the cat. The Copenhagen interpretation implies that, after a while, the cat is simultaneously alive and dead. Yet, when one looks in the box, one sees the cat either alive or dead, not both alive and dead. This poses the question of when exactly quantum superposition ends and reality resolves into one possibility or the other.
Although originally a critique on the Copenhagen interpretation, Schrödinger's seemingly paradoxical thought experiment became part of the foundation of quantum mechanics. The scenario is often featured in theoretical discussions of the interpretations of quantum mechanics, particularly in situations involving the measurement problem. As a result, Schrödinger's cat has had enduring appeal in popular culture. The experiment is not intended to be actually performed on a cat, but rather as an easily understandable illustration of the behavior of atoms. Experiments at the atomic scale have been carried out, showing that very small objects may exist as superpositions; but superposing an object as large as a cat would pose considerable technical difficulties.
Fundamentally, the Schrödinger's cat experiment asks how long quantum superpositions last and when (or whether) they collapse. Different interpretations of the mathematics of quantum mechanics have been proposed that give different explanations for this process.
Origin and motivation
Schrödinger intended his thought experiment as a discussion of the EPR article—named after its authors Einstein, Podolsky, and Rosen—in 1935. The EPR article highlighted the counterintuitive nature of quantum superpositions, in which a quantum system for two particles does not separate even when the particles are detected far from their last point of contact. The EPR paper concludes with a claim that this lack of separability meant that quantum mechanics as a theory of reality was incomplete.
Schrödinger and Einstein exchanged letters about Einstein's EPR article, in the course of which Einstein pointed out that the state of an unstable keg of gunpowder will, after a while, contain a superposition of both exploded and unexploded states.
To further illustrate, Schrödinger described how one could, in principle, create a superposition in a large-scale system by making it dependent on a quantum particle that was in a superposition. He proposed a scenario with a cat in a closed steel chamber, wherein the cat's life or death depended on the state of a radioactive atom, whether it had decayed and emitted radiation or not. According to Schrödinger, the Copenhagen interpretation implies that the cat remains both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-live cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics, and thus he was employing reductio ad absurdum.
Since Schrödinger's time, various interpretations of the mathematics of quantum mechanics have been advanced by physicists, some of which regard the "alive and dead" cat superposition as quite real, others do not. Intended as a critique of the Copenhagen interpretation (the prevailing orthodoxy in 1935), the Schrödinger's cat thought experiment remains a touchstone for modern interpretations of quantum mechanics and can be used to illustrate and compare their strengths and weaknesses.
Thought experiment
Schrödinger wrote:
Schrödinger developed his famous thought experiment in correspondence with Einstein. He suggested this 'quite ridiculous case' to illustrate his conclusion that the wave function cannot represent reality.
The wave function description of the complete cat system implies that the reality of the cat mixes the living and dead cat. Einstein was impressed by the ability of the thought experiment to highlight these issues. In a letter to Schrödinger dated 1950, he wrote:
Note that the charge of gunpowder is not mentioned in Schrödinger's setup, which uses a Geiger counter as an amplifier and hydrocyanic poison instead of gunpowder. The gunpowder had been mentioned in Einstein's original suggestion to Schrödinger 15 years before, and Einstein carried it forward to the present discussion.
Analysis
In modern terms, Schrödinger's hypothetical cat experiment describes the measurement problem: quantum theory describes the cat system as a combination of two possible outcomes, but only one outcome is ever observed.
The experiment poses the question, "when does a quantum system stop existing as a superposition of states and become one or the other?" (More technically, when does the actual quantum state stop being a non-trivial linear combination of states, each of which resembles different classical states, and instead begin to have a unique classical description?) Standard microscopic quantum mechanics describes multiple possible outcomes of experiments but only one outcome is observed. The thought experiment illustrates this apparent paradox. Our intuition says that the cat cannot be in more than one state simultaneously—yet the quantum mechanical description of the thought experiment requires such a condition.
Interpretations
Since Schrödinger's time, other interpretations of quantum mechanics have been proposed that give different answers to the questions posed by Schrödinger's cat of how long superpositions last and when (or whether) they collapse.
Copenhagen interpretation
A commonly held interpretation of quantum mechanics is the Copenhagen interpretation. In the Copenhagen interpretation, a measurement results in only one state of a superposition. This thought experiment makes apparent the fact that this interpretation simply provides no explanation for the state of the cat while the box is closed. The wavefunction description of the system consists of a superposition of the states "decayed nucleus/dead cat" and "undecayed nucleus/living cat". Only when the box is opened and observed can we make a statement about the cat.
Von Neumann interpretation
In 1932, John von Neumann described in his book Mathematical Foundations of Quantum Mechanics a pattern where the radioactive source is observed by a device, which itself is observed by another device, and so on. It makes no difference in the predictions of quantum theory where along this chain of causal effects the superposition collapses. This potentially infinite chain could be broken if the last device is replaced by a conscious observer. This was claimed to solve the problem, because an individual's consciousness cannot be multiple. Von Neumann asserted that a conscious observer is necessary for a collapse to one or the other (e.g., either a live cat or a dead cat) of the terms on the right-hand side of the wave function. This interpretation was later adopted by Eugene Wigner, who then rejected the interpretation in a thought experiment known as Wigner's friend.
Wigner supposed that a friend opened the box and observed the cat without telling anyone. From Wigner's conscious perspective, the friend is now part of the wave function and has seen a live cat and seen a dead cat. To a third person's conscious perspective, Wigner himself becomes part of the wave function once Wigner learns the outcome from the friend. This could be extended indefinitely.
A resolution of the paradox is that the triggering of the Geiger counter counts as a measurement of the state of the radioactive substance. Because a measurement has already occurred deciding the state of the cat, the subsequent observation by a human records only what has already occurred. Analysis of an actual experiment by Roger Carpenter and A. J. Anderson found that measurement alone (for example by a Geiger counter) is sufficient to collapse a quantum wave function before any human knows of the result. The apparatus indicates one of two colors depending on the outcome. The human observer sees which color is indicated, but they don't consciously know which outcome the color represents. A second human, the one who set up the apparatus, is told of the color and becomes conscious of the outcome, and the box is opened to check if the outcome matches. However, it is disputed whether merely observing the color counts as a conscious observation of the outcome.
Bohr's interpretation
Analysis of the work of Niels Bohr, one of the main scientists associated with the Copenhagen interpretation, suggests he viewed the state of the cat before the box is opened as indeterminate. The superposition itself had no physical meaning to Bohr: Schrödinger's cat would be either dead or alive long before the box is opened, but the cat and box form an inseparable combination.
Bohr saw no role for a human observer.
Bohr emphasized the classical nature of measurement results.
An "irreversible" or effectively irreversible process imparts the classical behavior of "observation" or "measurement".
Many-worlds interpretation
In 1957, Hugh Everett formulated the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process. In the many-worlds interpretation, both alive and dead states of the cat persist after the box is opened, but are decoherent from each other. In other words, when the box is opened, the observer and the possibly-dead cat split into an observer looking at a box with a dead cat and an observer looking at a box with a live cat. But since the dead and alive states are decoherent, there is no communication or interaction between them.
When opening the box, the observer becomes entangled with the cat, so "observer states" corresponding to the cat's being alive and dead are formed; each observer state is entangled, or linked, with the cat so that the observation of the cat's state and the cat's state correspond with each other. Quantum decoherence ensures that the different outcomes have no interaction with each other. Decoherence is generally considered to prevent simultaneous observation of multiple states.
A variant of the Schrödinger's cat experiment, known as the quantum suicide machine, has been proposed by cosmologist Max Tegmark. It examines the Schrödinger's cat experiment from the point of view of the cat, and argues that by using this approach, one may be able to distinguish between the Copenhagen interpretation and many-worlds.
Ensemble interpretation
The ensemble interpretation states that superpositions are nothing but subensembles of a larger statistical ensemble. The state vector would not apply to individual cat experiments, but only to the statistics of many similarly prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial matter, or a non-issue.
This interpretation serves to discard the idea that a single physical system in quantum mechanics has a mathematical description that corresponds to it in any way.
Relational interpretation
The relational interpretation makes no fundamental distinction between the human experimenter, the cat, and the apparatus or between animate and inanimate systems; all are quantum systems governed by the same rules of wavefunction evolution, and all may be considered "observers". But the relational interpretation allows that different observers can give different accounts of the same series of events, depending on the information they have about the system. The cat can be considered an observer of the apparatus; meanwhile, the experimenter can be considered another observer of the system in the box (the cat plus the apparatus). Before the box is opened, the cat, by nature of its being alive or dead, has information about the state of the apparatus (the atom has either decayed or not decayed); but the experimenter does not have information about the state of the box contents. In this way, the two observers simultaneously have different accounts of the situation: To the cat, the wavefunction of the apparatus has appeared to "collapse"; to the experimenter, the contents of the box appear to be in superposition. Not until the box is opened, and both observers have the same information about what happened, do both system states appear to "collapse" into the same definite result, a cat that is either alive or dead.
Transactional interpretation
In the transactional interpretation the apparatus emits an advanced wave backward in time, which combined with the wave that the source emits forward in time, forms a standing wave. The waves are seen as physically real, and the apparatus is considered an "observer". In the transactional interpretation, the collapse of the wavefunction is "atemporal" and occurs along the whole transaction between the source and the apparatus. The cat is never in superposition. Rather the cat is only in one state at any particular time, regardless of when the human experimenter looks in the box. The transactional interpretation resolves this quantum paradox.
Objective collapse theories
According to objective collapse theories, superpositions are destroyed spontaneously (irrespective of external observation) when some objective physical threshold (of time, mass, temperature, irreversibility, etc.) is reached. Thus, the cat would be expected to have settled into a definite state long before the box is opened. This could loosely be phrased as "the cat observes itself" or "the environment observes the cat".
Objective collapse theories require a modification of standard quantum mechanics to allow superpositions to be destroyed by the process of time evolution. These theories could ideally be tested by creating mesoscopic superposition states in experiments. For instance, energy cat states have been proposed as a precise detector of quantum-gravity-related energy decoherence models.
Applications and tests
The experiment as described is a purely theoretical one, and the machine proposed is not known to have been constructed. However, successful experiments involving similar principles, e.g. superpositions of relatively large (by the standards of quantum physics) objects have been performed. These experiments do not show that a cat-sized object can be superposed, but the known upper limit on "cat states" has been pushed upwards by them. In many cases the state is short-lived, even when cooled to near absolute zero.
A "cat state" has been achieved with photons.
A beryllium ion has been trapped in a superposed state.
An experiment involving a superconducting quantum interference device ("SQUID") has been linked to the theme of the thought experiment: "The superposition state does not correspond to a billion electrons flowing one way and a billion others flowing the other way. Superconducting electrons move en masse. All the superconducting electrons in the SQUID flow both ways around the loop at once when they are in the Schrödinger's cat state."
A piezoelectric "tuning fork" has been constructed, which can be placed into a superposition of vibrating and non-vibrating states. The resonator comprises about 10 trillion atoms.
An experiment involving a flu virus has been proposed.
An experiment involving a bacterium and an electromechanical oscillator has been proposed.
In quantum computing the phrase "cat state" sometimes refers to the GHZ state, wherein several qubits are in an equal superposition of all being 0 and all being 1, e.g. $\tfrac{1}{\sqrt{2}}\left(|00\ldots0\rangle + |11\ldots1\rangle\right)$; a small state-vector construction of such a state is sketched after this list.
According to at least one proposal, it may be possible to determine the state of the cat before observing it.
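As an illustration, a minimal numpy sketch (assuming three qubits) that writes down the GHZ state vector directly:

```python
import numpy as np

def ghz_state(num_qubits: int) -> np.ndarray:
    """State vector (|00...0> + |11...1>) / sqrt(2) on `num_qubits` qubits."""
    state = np.zeros(2 ** num_qubits)
    state[0] = 1 / np.sqrt(2)    # amplitude of |00...0>
    state[-1] = 1 / np.sqrt(2)   # amplitude of |11...1>
    return state

psi = ghz_state(3)
print(psi)                                   # nonzero only at |000> and |111>
print(np.isclose(np.linalg.norm(psi), 1.0))  # True: the state is normalized
```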
Extensions
In August 2020, physicists presented studies involving interpretations of quantum mechanics that are related to the Schrödinger's cat and Wigner's friend paradoxes, resulting in conclusions that challenge seemingly established assumptions about reality.
| Physical sciences | Quantum mechanics | Physics |
27859 | https://en.wikipedia.org/wiki/Sphere | Sphere | A sphere (from Greek σφαῖρα, sphaîra) is a geometrical object that is a three-dimensional analogue to a two-dimensional circle. Formally, a sphere is the set of points that are all at the same distance from a given point in three-dimensional space. That given point is the center of the sphere, and the distance r between the center and any point of the sphere is the sphere's radius. The earliest known mentions of spheres appear in the work of the ancient Greek mathematicians.
The sphere is a fundamental surface in many fields of mathematics. Spheres and nearly-spherical shapes also appear in nature and industry. Bubbles such as soap bubbles take a spherical shape in equilibrium. The Earth is often approximated as a sphere in geography, and the celestial sphere is an important concept in astronomy. Manufactured items including pressure vessels and most curved mirrors and lenses are based on spheres. Spheres roll smoothly in any direction, so most balls used in sports and toys are spherical, as are ball bearings.
Basic terminology
As mentioned earlier, r is the sphere's radius; any line segment from the center to a point on the sphere is also called a radius. 'Radius' is used in two senses: as a line segment and also as its length.
If a radius is extended through the center to the opposite side of the sphere, it creates a diameter. Like the radius, the length of a diameter is also called the diameter, and denoted $d$. Diameters are the longest line segments that can be drawn between two points on the sphere: their length is twice the radius, $d = 2r$. Two points on the sphere connected by a diameter are antipodal points of each other.
A unit sphere is a sphere with unit radius ($r = 1$). For convenience, spheres are often taken to have their center at the origin of the coordinate system, and spheres in this article have their center at the origin unless a center is mentioned.
A great circle on the sphere has the same center and radius as the sphere, and divides it into two equal hemispheres.
Although the figure of Earth is not perfectly spherical, terms borrowed from geography are convenient to apply to the sphere.
A particular line passing through its center defines an axis (as in Earth's axis of rotation).
The sphere-axis intersection defines two antipodal poles (north pole and south pole). The great circle equidistant to the poles is called the equator. Great circles through the poles are called lines of longitude or meridians. Small circles on the sphere that are parallel to the equator are circles of latitude (or parallels). In geometry unrelated to astronomical bodies, geocentric terminology should be used only for illustration and noted as such, unless there is no chance of misunderstanding.
Mathematicians consider a sphere to be a two-dimensional closed surface embedded in three-dimensional Euclidean space. They draw a distinction between a sphere and a ball, which is a solid figure, a three-dimensional manifold with boundary that includes the volume contained by the sphere. An open ball excludes the sphere itself, while a closed ball includes the sphere: a closed ball is the union of the open ball and the sphere, and a sphere is the boundary of a (closed or open) ball. The distinction between ball and sphere has not always been maintained and especially older mathematical references talk about a sphere as a solid. The distinction between "circle" and "disk" in the plane is similar.
Small spheres or balls are sometimes called spherules (e.g., in Martian spherules).
Equations
In analytic geometry, a sphere with center $(x_0, y_0, z_0)$ and radius $r$ is the locus of all points $(x, y, z)$ such that
$$(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2.$$
Since it can be expressed as a quadratic polynomial, a sphere is a quadric surface, a type of algebraic surface.
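As an illustration, a minimal Python sketch of this defining equation as a membership test (the function name and tolerance are only for this example):

```python
import math

def on_sphere(point, center, radius, tol=1e-9) -> bool:
    """True if (x-x0)^2 + (y-y0)^2 + (z-z0)^2 equals radius^2, within `tol`."""
    squared = sum((p - c) ** 2 for p, c in zip(point, center))
    return math.isclose(squared, radius ** 2, abs_tol=tol)

print(on_sphere((1, 0, 0), (0, 0, 0), 1))  # True
print(on_sphere((1, 1, 0), (0, 0, 0), 1))  # False
```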
Let $a, b, c, d, e$ be real numbers with $a \neq 0$ and put
$$\rho = \frac{b^2 + c^2 + d^2 - ae}{a^2}.$$
Then the equation
$$f(x, y, z) = a(x^2 + y^2 + z^2) + 2(bx + cy + dz) + e = 0$$
has no real points as solutions if $\rho < 0$ and is called the equation of an imaginary sphere. If $\rho = 0$, the only solution of $f(x, y, z) = 0$ is the point $P_0 = \left(-\tfrac{b}{a}, -\tfrac{c}{a}, -\tfrac{d}{a}\right)$ and the equation is said to be the equation of a point sphere. Finally, in the case $\rho > 0$, $f(x, y, z) = 0$ is an equation of a sphere whose center is $P_0$ and whose radius is $\sqrt{\rho}$.
If $a$ in the above equation is zero then $f(x, y, z) = 0$ is the equation of a plane. Thus, a plane may be thought of as a sphere of infinite radius whose center is a point at infinity.
Parametric
The sphere with radius $r > 0$ and center $(x_0, y_0, z_0)$ can be parameterized using trigonometric functions:
$$x = x_0 + r \sin\theta \cos\varphi, \qquad y = y_0 + r \sin\theta \sin\varphi, \qquad z = z_0 + r \cos\theta.$$
The symbols used here are the same as those used in spherical coordinates. $r$ is constant, while $\theta$ varies from 0 to $\pi$ and $\varphi$ varies from 0 to $2\pi$.
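As an illustration, a short Python sketch of this parameterization; every generated point lies at distance $r$ from the center:

```python
import math

def sphere_point(r, center, theta, phi):
    """Point on the sphere of radius r about `center`, for the polar angle theta
    in [0, pi] and the azimuthal angle phi in [0, 2*pi)."""
    x0, y0, z0 = center
    return (x0 + r * math.sin(theta) * math.cos(phi),
            y0 + r * math.sin(theta) * math.sin(phi),
            z0 + r * math.cos(theta))

p = sphere_point(2.0, (1.0, 0.0, 0.0), math.pi / 3, math.pi / 4)
print(math.dist(p, (1.0, 0.0, 0.0)))  # 2.0, up to rounding
```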
Properties
Enclosed volume
In three dimensions, the volume inside a sphere (that is, the volume of a ball, but classically referred to as the volume of a sphere) is
$$V = \frac{4}{3}\pi r^3 = \frac{\pi}{6} d^3,$$
where $r$ is the radius and $d$ is the diameter of the sphere. Archimedes first derived this formula by showing that the volume inside a sphere is twice the volume between the sphere and the circumscribed cylinder of that sphere (having the height and diameter equal to the diameter of the sphere). This may be proved by inscribing a cone upside down into semi-sphere, noting that the area of a cross section of the cone plus the area of a cross section of the sphere is the same as the area of the cross section of the circumscribing cylinder, and applying Cavalieri's principle. This formula can also be derived using integral calculus (i.e., disk integration) to sum the volumes of an infinite number of circular disks of infinitesimally small thickness stacked side by side and centered along the $x$-axis from $x = -r$ to $x = r$, assuming the sphere of radius $r$ is centered at the origin.
At any given $x$, the incremental volume ($\delta V$) equals the product of the cross-sectional area of the disk at $x$ and its thickness ($\delta x$):
$$\delta V \approx \pi y^2 \cdot \delta x.$$
The total volume is the summation of all incremental volumes:
$$V \approx \sum \pi y^2 \cdot \delta x.$$
In the limit as $\delta x$ approaches zero, this equation becomes:
$$V = \int_{-r}^{r} \pi y^2 \, dx.$$
At any given $x$, a right-angled triangle connects $x$, $y$ and $r$ to the origin; hence, applying the Pythagorean theorem yields:
$$y^2 = r^2 - x^2.$$
Using this substitution gives
$$V = \int_{-r}^{r} \pi (r^2 - x^2) \, dx,$$
which can be evaluated to give the result
$$V = \pi \left[r^2 x - \frac{x^3}{3}\right]_{-r}^{r} = \frac{4}{3}\pi r^3.$$
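As a quick numerical check (an illustrative sketch), the disk integral can be compared against the closed form $\frac{4}{3}\pi r^3$:

```python
import math

def volume_by_disks(r: float, slices: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of pi*(r^2 - x^2) dx from -r to r."""
    dx = 2 * r / slices
    total = 0.0
    for i in range(slices):
        x = -r + (i + 0.5) * dx
        total += math.pi * (r * r - x * x) * dx
    return total

r = 3.0
print(volume_by_disks(r))        # ~113.097
print(4 / 3 * math.pi * r ** 3)  # 113.097...
```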
An alternative formula is found using spherical coordinates, with volume element
$$dV = r'^2 \sin\theta \, dr' \, d\theta \, d\varphi,$$
so
$$V = \int_0^{2\pi} \int_0^{\pi} \int_0^{r} r'^2 \sin\theta \, dr' \, d\theta \, d\varphi = \frac{4}{3}\pi r^3.$$
For most practical purposes, the volume inside a sphere inscribed in a cube can be approximated as 52.4% of the volume of the cube, since $V = \frac{\pi}{6} d^3$, where $d$ is the diameter of the sphere and also the length of a side of the cube, and $\frac{\pi}{6} \approx 0.5236$. For example, a sphere with diameter 1 m has 52.4% the volume of a cube with edge length 1 m, or about 0.524 m3.
Surface area
The surface area of a sphere of radius $r$ is:
$$A = 4\pi r^2.$$
Archimedes first derived this formula from the fact that the projection to the lateral surface of a circumscribed cylinder is area-preserving. Another approach to obtaining the formula comes from the fact that it equals the derivative of the formula for the volume with respect to because the total volume inside a sphere of radius can be thought of as the summation of the surface area of an infinite number of spherical shells of infinitesimal thickness concentrically stacked inside one another from radius 0 to radius . At infinitesimal thickness the discrepancy between the inner and outer surface area of any given shell is infinitesimal, and the elemental volume at radius is simply the product of the surface area at radius and the infinitesimal thickness.
At any given radius $r$, the incremental volume ($\delta V$) equals the product of the surface area at radius $r$ ($A(r)$) and the thickness of a shell ($\delta r$):
$$\delta V \approx A(r) \cdot \delta r.$$
The total volume is the summation of all shell volumes:
$$V \approx \sum A(r) \cdot \delta r.$$
In the limit as $\delta r$ approaches zero this equation becomes:
$$V = \int_0^{r} A(r') \, dr'.$$
Substitute $V = \frac{4}{3}\pi r^3$:
$$\frac{4}{3}\pi r^3 = \int_0^{r} A(r') \, dr'.$$
Differentiating both sides of this equation with respect to $r$ yields $A$ as a function of $r$:
$$4\pi r^2 = A(r).$$
This is generally abbreviated as:
$$A = 4\pi r^2,$$
where $r$ is now considered to be the fixed radius of the sphere.
Alternatively, the area element on the sphere is given in spherical coordinates by $dA = r^2 \sin\theta \, d\theta \, d\varphi$. The total area can thus be obtained by integration:
$$A = \int_0^{2\pi} \int_0^{\pi} r^2 \sin\theta \, d\theta \, d\varphi = 4\pi r^2.$$
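As a numerical check (an illustrative sketch), this integral can be evaluated approximately and compared with $4\pi r^2$; since the integrand does not depend on $\varphi$, that integral contributes a factor of $2\pi$:

```python
import math

def area_by_integration(r: float, steps: int = 2_000) -> float:
    """Midpoint-rule value of the integral of r^2*sin(theta) dtheta, times 2*pi."""
    dtheta = math.pi / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * dtheta
        total += r * r * math.sin(theta) * dtheta
    return 2 * math.pi * total

r = 3.0
print(area_by_integration(r))  # ~113.097
print(4 * math.pi * r ** 2)    # 113.097...
```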
The sphere has the smallest surface area of all surfaces that enclose a given volume, and it encloses the largest volume among all closed surfaces with a given surface area. The sphere therefore appears in nature: for example, bubbles and small water drops are roughly spherical because the surface tension locally minimizes surface area.
The surface area relative to the mass of a ball is called the specific surface area and can be expressed from the above stated equations as
$$\mathrm{SSA} = \frac{A}{V\rho} = \frac{3}{r\rho},$$
where $\rho$ is the density (the ratio of mass to volume).
Other geometric properties
A sphere can be constructed as the surface formed by rotating a circle one half revolution about any of its diameters; this is very similar to the traditional definition of a sphere as given in Euclid's Elements. Since a circle is a special type of ellipse, a sphere is a special type of ellipsoid of revolution. Replacing the circle with an ellipse rotated about its major axis, the shape becomes a prolate spheroid; rotated about the minor axis, an oblate spheroid.
A sphere is uniquely determined by four points that are not coplanar. More generally, a sphere is uniquely determined by four conditions such as passing through a point, being tangent to a plane, etc. This property is analogous to the property that three non-collinear points determine a unique circle in a plane.
Consequently, a sphere is uniquely determined by (that is, passes through) a circle and a point not in the plane of that circle.
By examining the common solutions of the equations of two spheres, it can be seen that two spheres intersect in a circle and the plane containing that circle is called the radical plane of the intersecting spheres. Although the radical plane is a real plane, the circle may be imaginary (the spheres have no real point in common) or consist of a single point (the spheres are tangent at that point).
The angle between two spheres at a real point of intersection is the dihedral angle determined by the tangent planes to the spheres at that point. Two spheres intersect at the same angle at all points of their circle of intersection. They intersect at right angles (are orthogonal) if and only if the square of the distance between their centers is equal to the sum of the squares of their radii.
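As an illustration, a small Python sketch of this orthogonality criterion (the function name is only for this example):

```python
import math

def spheres_orthogonal(center1, r1, center2, r2, tol=1e-9) -> bool:
    """True if the squared distance between centers equals r1^2 + r2^2 (within tol)."""
    d2 = sum((a - b) ** 2 for a, b in zip(center1, center2))
    return math.isclose(d2, r1 ** 2 + r2 ** 2, abs_tol=tol)

# Two unit spheres whose centers are sqrt(2) apart intersect at right angles.
print(spheres_orthogonal((0, 0, 0), 1.0, (math.sqrt(2), 0, 0), 1.0))  # True
```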
Pencil of spheres
If $f(x, y, z) = 0$ and $g(x, y, z) = 0$ are the equations of two distinct spheres then
$$s f(x, y, z) + t g(x, y, z) = 0$$
is also the equation of a sphere for arbitrary values of the parameters $s$ and $t$. The set of all spheres satisfying this equation is called a pencil of spheres determined by the original two spheres. In this definition a sphere is allowed to be a plane (infinite radius, center at infinity) and if both the original spheres are planes then all the spheres of the pencil are planes, otherwise there is only one plane (the radical plane) in the pencil.
Properties of the sphere
In their book Geometry and the Imagination, David Hilbert and Stephan Cohn-Vossen describe eleven properties of the sphere and discuss whether these properties uniquely determine the sphere. Several properties hold for the plane, which can be thought of as a sphere with infinite radius. These properties are:
The points on the sphere are all the same distance from a fixed point. Also, the ratio of the distance of its points from two fixed points is constant.
The first part is the usual definition of the sphere and determines it uniquely. The second part can be easily deduced and follows a similar result of Apollonius of Perga for the circle. This second part also holds for the plane.
The contours and plane sections of the sphere are circles.
This property defines the sphere uniquely.
The sphere has constant width and constant girth.
The width of a surface is the distance between pairs of parallel tangent planes. Numerous other closed convex surfaces have constant width, for example the Meissner body. The girth of a surface is the circumference of the boundary of its orthogonal projection on to a plane. Each of these properties implies the other.
All points of a sphere are umbilics.
At any point on a surface a normal direction is at right angles to the surface because on the sphere these are the lines radiating out from the center of the sphere. The intersection of a plane that contains the normal with the surface will form a curve that is called a normal section, and the curvature of this curve is the normal curvature. For most points on most surfaces, different sections will have different curvatures; the maximum and minimum values of these are called the principal curvatures. Any closed surface will have at least four points called umbilical points. At an umbilic all the sectional curvatures are equal; in particular the principal curvatures are equal. Umbilical points can be thought of as the points where the surface is closely approximated by a sphere.
For the sphere the curvatures of all normal sections are equal, so every point is an umbilic. The sphere and plane are the only surfaces with this property.
The sphere does not have a surface of centers.
For a given normal section there exists a circle of curvature whose curvature equals the sectional curvature, is tangent to the surface, and whose center lies along the normal line. For example, the two centers corresponding to the maximum and minimum sectional curvatures are called the focal points, and the set of all such centers forms the focal surface.
For most surfaces the focal surface forms two sheets that are each a surface and meet at umbilical points. Several cases are special:
* For channel surfaces one sheet forms a curve and the other sheet is a surface
* For cones, cylinders, tori and cyclides both sheets form curves.
* For the sphere the center of every osculating circle is at the center of the sphere and the focal surface forms a single point. This property is unique to the sphere.
All geodesics of the sphere are closed curves.
Geodesics are curves on a surface that give the shortest distance between two points. They are a generalization of the concept of a straight line in the plane. For the sphere the geodesics are great circles. Many other surfaces share this property.
Of all the solids having a given volume, the sphere is the one with the smallest surface area; of all solids having a given surface area, the sphere is the one having the greatest volume.
It follows from isoperimetric inequality. These properties define the sphere uniquely and can be seen in soap bubbles: a soap bubble will enclose a fixed volume, and surface tension minimizes its surface area for that volume. A freely floating soap bubble therefore approximates a sphere (though such external forces as gravity will slightly distort the bubble's shape). It can also be seen in planets and stars where gravity minimizes surface area for large celestial bodies.
The sphere has the smallest total mean curvature among all convex solids with a given surface area.
The mean curvature is the average of the two principal curvatures, which is constant because the two principal curvatures are constant at all points of the sphere.
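Concretely, for a sphere of radius r both principal curvatures equal 1/r (with the usual sign convention for a convex surface), so the mean curvature and the total mean curvature are
\[ H = \tfrac{1}{2}(k_1 + k_2) = \frac{1}{r}, \qquad \int_S H \, dA = \frac{1}{r}\cdot 4\pi r^2 = 4\pi r. \]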
The sphere has constant mean curvature.
The sphere is the only embedded surface without boundary or singularities that has constant positive mean curvature. Other immersed surfaces, such as minimal surfaces, also have constant mean curvature (zero, in the case of minimal surfaces).
The sphere has constant positive Gaussian curvature.
Gaussian curvature is the product of the two principal curvatures. It is an intrinsic property that can be determined by measuring length and angles and is independent of how the surface is embedded in space. Hence, bending a surface will not alter the Gaussian curvature, and other surfaces with constant positive Gaussian curvature can be obtained by cutting a small slit in the sphere and bending it. All these other surfaces would have boundaries, and the sphere is the only surface that lacks a boundary with constant, positive Gaussian curvature. The pseudosphere is an example of a surface with constant negative Gaussian curvature.
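In the same notation as above, the Gaussian curvature of a sphere of radius r is
\[ K = k_1 k_2 = \frac{1}{r}\cdot\frac{1}{r} = \frac{1}{r^2}, \]
positive and the same at every point, while a pseudosphere of pseudoradius a has K = -1/a^2 everywhere.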
The sphere is transformed into itself by a three-parameter family of rigid motions.
Rotating around any axis through the origin maps a unit sphere at the origin onto itself. Any rotation about a line through the origin can be expressed as a combination of rotations around the three coordinate axes (see Euler angles). Therefore, a three-parameter family of rotations exists such that each rotation transforms the sphere onto itself; this family is the rotation group SO(3). The plane is the only other surface with a three-parameter family of transformations (translations along the x- and y-axes and rotations around the origin). Circular cylinders are the only surfaces with two-parameter families of rigid motions, and the surfaces of revolution and helicoids are the only surfaces with a one-parameter family.
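A small numerical sketch of this three-parameter family (the function names and angle values below are illustrative, not taken from the text): composing rotations about the coordinate axes by three Euler angles gives an orthogonal matrix that maps the unit sphere onto itself.

    import numpy as np

    def rot_z(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rot_y(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    # z-y-z Euler angles: three parameters describing an element of SO(3)
    alpha, beta, gamma = 0.3, 1.1, -0.7
    R = rot_z(alpha) @ rot_y(beta) @ rot_z(gamma)

    assert np.allclose(R @ R.T, np.eye(3))         # R is orthogonal
    p = np.array([1.0, 0.0, 0.0])                  # a point on the unit sphere
    assert np.isclose(np.linalg.norm(R @ p), 1.0)  # its image is still on the sphere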
Treatment by area of mathematics
Spherical geometry
The basic elements of Euclidean plane geometry are points and lines. On the sphere, points are defined in the usual sense. The analogue of the "line" is the geodesic, which is a great circle; the defining characteristic of a great circle is that the plane containing all its points also passes through the center of the sphere. Measuring by arc length shows that the shortest path between two points lying on the sphere is the shorter segment of the great circle that includes the points.
Many theorems from classical geometry hold true for spherical geometry as well, but not all do because the sphere fails to satisfy some of classical geometry's postulates, including the parallel postulate. In spherical trigonometry, angles are defined between great circles. Spherical trigonometry differs from ordinary trigonometry in many respects. For example, the sum of the interior angles of a spherical triangle always exceeds 180 degrees. Also, any two similar spherical triangles are congruent.
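The excess of the angle sum over 180 degrees is quantified by Girard's theorem: on a sphere of radius R, a spherical triangle with interior angles A, B, C (in radians) has area
\[ \text{Area} = R^2\,(A + B + C - \pi), \]
so the angle sum exceeds π by exactly the triangle's area divided by R^2.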
Any pair of points on a sphere that lie on a straight line through the sphere's center (i.e., the diameter) are called antipodal points; on the sphere, the distance between them is exactly half the length of the circumference. Any other (i.e., not antipodal) pair of distinct points on a sphere
* lie on a unique great circle,
* segment it into one minor (i.e., shorter) and one major (i.e., longer) arc, and
* have the minor arc's length be the shortest distance between them on the sphere (a sample computation of this great-circle distance is sketched below).
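A short computational sketch of that minor-arc length, using the haversine form of the spherical distance formula (the function name and the Earth-like default radius are illustrative choices):

    import math

    def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
        """Length of the minor great-circle arc between two points given by
        latitude/longitude in degrees, on a sphere of the given radius."""
        p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin((l2 - l1) / 2) ** 2)
        return 2 * radius * math.asin(math.sqrt(a))

    # Two antipodal points are half a circumference apart: pi * 6371 ≈ 20015.1
    print(great_circle_distance(0, 0, 0, 180))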
Spherical geometry is a form of elliptic geometry, which together with hyperbolic geometry makes up non-Euclidean geometry.
Differential geometry
The sphere is a smooth surface with constant Gaussian curvature at each point equal to 1/r^2, where r is the radius. As per Gauss's Theorema Egregium, this curvature is independent of the sphere's embedding in 3-dimensional space. Also following from Gauss, a sphere cannot be mapped to a plane while maintaining both areas and angles. Therefore, any map projection introduces some form of distortion.
A sphere of radius r has area element dA = r^2 sin θ dθ dφ. This can be found from the volume element in spherical coordinates with r held constant.
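Integrating this area element over the whole sphere recovers the familiar total area:
\[ \int_0^{2\pi}\!\int_0^{\pi} r^2 \sin\theta \, d\theta \, d\varphi = 2\pi r^2 \bigl[-\cos\theta\bigr]_0^{\pi} = 4\pi r^2 . \]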
A sphere of any radius r centered at zero is an integral surface of the following differential form:
x dx + y dy + z dz = 0.
This equation reflects that the position vector and tangent plane at a point are always orthogonal to each other. Furthermore, the outward-facing normal vector is equal to the position vector scaled by 1/r.
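To check this, differentiate the defining equation x^2 + y^2 + z^2 = r^2 along the surface: 2x dx + 2y dy + 2z dz = 0, which is the form above up to a factor of 2. The gradient (2x, 2y, 2z) of the defining function is parallel to the position vector (x, y, z), and dividing by its length 2r gives the outward unit normal (x, y, z)/r.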
In Riemannian geometry, the filling area conjecture states that the hemisphere is the optimal (least area) isometric filling of the Riemannian circle.
Topology
Remarkably, it is possible to turn an ordinary sphere inside out in a three-dimensional space with possible self-intersections but without creating any creases, in a process called sphere eversion.
The antipodal quotient of the sphere is the surface called the real projective plane, which can also be thought of as the Northern Hemisphere with antipodal points of the equator identified.
Curves on a sphere
Circles
Circles on the sphere are, like circles in the plane, made up of all points a certain distance from a fixed point on the sphere. The intersection of a sphere and a plane is a circle, a point, or empty. Great circles are the intersection of the sphere with a plane passing through the center of the sphere; other circles on the sphere are called small circles.
More complicated surfaces may intersect a sphere in circles, too: the intersection of a sphere with a surface of revolution whose axis contains the center of the sphere (so that the two surfaces are coaxial) consists of circles and/or points, if it is not empty. For example, the diagram to the right shows the intersection of a sphere and a cylinder, which consists of two circles. If the cylinder radius were that of the sphere, the intersection would be a single circle. If the cylinder radius were larger than that of the sphere, the intersection would be empty.
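The plane case is easy to quantify: a plane at distance d from the center of a sphere of radius r meets it in a circle of radius
\[ \rho = \sqrt{r^2 - d^2} \qquad (d < r), \]
in a single point when d = r, and not at all when d > r; d = 0 gives a great circle.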
Loxodrome
In navigation, a loxodrome or rhumb line is a path whose bearing, the angle between its tangent and due North, is constant. Loxodromes project to straight lines under the Mercator projection. Two special cases are the meridians which are aligned directly North–South and parallels which are aligned directly East–West. For any other bearing, a loxodrome spirals infinitely around each pole. For the Earth modeled as a sphere, or for a general sphere given a spherical coordinate system, such a loxodrome is a kind of spherical spiral.
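In terms of latitude φ and longitude λ on the sphere, a loxodrome with constant bearing β satisfies, under one common orientation convention,
\[ \frac{d\lambda}{d\varphi} = \frac{\tan\beta}{\cos\varphi}, \qquad\text{hence}\qquad \lambda = \lambda_0 + \tan\beta \,\ln\tan\!\left(\frac{\pi}{4} + \frac{\varphi}{2}\right), \]
which is a straight line in Mercator coordinates, the logarithmic factor being the Mercator ordinate of φ.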
Clelia curves
Another kind of spherical spiral is the Clelia curve, for which the longitude (or azimuth) φ and the colatitude (or polar angle) θ are in a linear relationship, φ = cθ. Clelia curves project to straight lines under the equirectangular projection. Viviani's curve (c = 1) is a special case. Clelia curves approximate the ground track of satellites in polar orbit.
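Written out on a sphere of radius r, the Clelia curve with parameter c is traced by
\[ x = r \sin\theta \cos(c\theta), \qquad y = r \sin\theta \sin(c\theta), \qquad z = r \cos\theta, \qquad 0 \le \theta \le \pi . \]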
Spherical conics
The analog of a conic section on the sphere is a spherical conic, a quartic curve which can be defined in several equivalent ways.
The intersection of a sphere with a quadratic cone whose vertex is the sphere center
The intersection of a sphere with an elliptic or hyperbolic cylinder whose axis passes through the sphere center
The locus of points whose sum or difference of great-circle distances from a pair of foci is a constant
Many theorems relating to planar conic sections also extend to spherical conics.
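In symbols, the first definition above reads: for a symmetric matrix Q defining a nondegenerate quadratic cone through the origin, the spherical conic is the set
\[ \{\, x \in \mathbb{R}^3 : \|x\| = r,\; x^{\mathsf T} Q\, x = 0 \,\}, \]
the intersection of that cone with the sphere of radius r centered at its vertex.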
Intersection of a sphere with a more general surface
If a sphere is intersected by another surface, there may be more complicated spherical curves.
Example sphere–cylinder
When the cylinder's axis does not pass through the sphere's center, the intersection of a sphere and a cylinder, each given by its implicit equation, is in general not just one or two circles. It is the solution set of the non-linear system of the two equations
(see implicit curve and the diagram)
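A concrete illustration with assumed equations (chosen here purely for illustration): take the unit sphere x^2 + y^2 + z^2 = 1 and the offset cylinder x^2 + (y - 1/2)^2 = 1/4. Their intersection is Viviani's curve, a closed figure-eight-shaped space curve rather than a circle.

    import numpy as np

    # Assumed illustrative equations:
    #   sphere:   x^2 + y^2 + z^2 = 1
    #   cylinder: x^2 + (y - 1/2)^2 = 1/4   (axis parallel to z, offset from the center)
    t = np.linspace(0.0, 2.0 * np.pi, 400)
    x = 0.5 * np.cos(t)                 # parametrize the cylinder's circular cross-section
    y = 0.5 + 0.5 * np.sin(t)
    z2 = 1.0 - x**2 - y**2              # squared height where the cylinder meets the sphere
    z = np.sqrt(np.clip(z2, 0.0, None))
    # The intersection consists of the points (x, y, +z) and (x, y, -z); the two
    # branches meet where z = 0, giving a single figure-eight curve (Viviani's curve).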
Generalizations
Ellipsoids
An ellipsoid is a sphere that has been stretched or compressed in one or more directions. More exactly, it is the image of a sphere under an affine transformation. An ellipsoid bears the same relationship to the sphere that an ellipse does to a circle.
Dimensionality
Spheres can be generalized to spaces of any number of dimensions. For any natural number n, an n-sphere, often denoted S^n, is the set of points in (n + 1)-dimensional Euclidean space that are at a fixed distance r from a central point of that space, where r is, as before, a positive real number. In particular:
* S^0: a 0-sphere consists of two discrete points, -r and r
* S^1: a 1-sphere is a circle of radius r
* S^2: a 2-sphere is an ordinary sphere
* S^3: a 3-sphere is a sphere in 4-dimensional Euclidean space.
Spheres for n > 2 are sometimes called hyperspheres.
The n-sphere of unit radius centered at the origin is denoted S^n and is often referred to as "the" n-sphere. The ordinary sphere is a 2-sphere, because it is a 2-dimensional surface which is embedded in 3-dimensional space.
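A brief computational sketch (the function name and approach are illustrative): points uniformly distributed on the n-sphere can be generated by normalizing standard Gaussian vectors in (n + 1)-dimensional space, since the Gaussian distribution is rotationally symmetric.

    import numpy as np

    def random_points_on_n_sphere(n, num_points, radius=1.0, rng=None):
        """Draw points uniformly on the n-sphere, viewed as the set of vectors
        of length `radius` in (n + 1)-dimensional Euclidean space."""
        rng = np.random.default_rng() if rng is None else rng
        v = rng.standard_normal((num_points, n + 1))
        return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

    pts = random_points_on_n_sphere(2, 5)                 # five points on the ordinary 2-sphere
    assert np.allclose(np.linalg.norm(pts, axis=1), 1.0)  # all of them have unit length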
In topology, the n-sphere is an example of a compact topological manifold without boundary. A topological sphere need not be smooth; if it is smooth, it need not be diffeomorphic to the Euclidean sphere (an exotic sphere).
The sphere is the inverse image of a one-point set under the continuous function x ↦ ||x||, so it is closed; it is also bounded, so it is compact by the Heine–Borel theorem.
Metric spaces
More generally, in a metric space (E, d), the sphere of center x and radius r > 0 is the set of points y such that d(x, y) = r.
If the center is a distinguished point that is considered to be the origin of , as in a normed space, it is not mentioned in the definition and notation. The same applies for the radius if it is taken to equal one, as in the case of a unit sphere.
Unlike a ball, even a large sphere may be an empty set. For example, in Z^n with the Euclidean metric, a sphere of radius r is nonempty only if r^2 can be written as a sum of n squares of integers.
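For instance, in Z^3 the sphere of radius √3 about the origin consists of exactly the eight points (±1, ±1, ±1), since a^2 + b^2 + c^2 = 3 forces |a| = |b| = |c| = 1, while the sphere of radius √7 about the origin is empty, because 7 cannot be written as a sum of three integer squares.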
An octahedron is a sphere in taxicab geometry, and a cube is a sphere in geometry using the Chebyshev distance.
History
The geometry of the sphere was studied by the Greeks. Euclid's Elements defines the sphere in book XI, discusses various properties of the sphere in book XII, and shows how to inscribe the five regular polyhedra within a sphere in book XIII. Euclid does not include the area and volume of a sphere, only a theorem that the volume of a sphere varies as the third power of its diameter, probably due to Eudoxus of Cnidus. The volume and area formulas were first determined in Archimedes's On the Sphere and Cylinder by the method of exhaustion. Zenodorus was the first to state that, for a given surface area, the sphere is the solid of maximum volume.
Archimedes wrote about the problem of dividing a sphere into segments whose volumes are in a given ratio, but did not solve it. A solution by means of the parabola and hyperbola was given by Dionysodorus. A similar problem, to construct a segment equal in volume to a given segment and in surface to another segment, was solved later by al-Quhi.
Gallery
Regions
Hemisphere
Spherical cap
Spherical lune
Spherical polygon
Spherical sector
Spherical segment
Spherical wedge
Spherical zone
| Mathematics | Geometry | null |
27863 | https://en.wikipedia.org/wiki/Sword | Sword | A sword is an edged, bladed weapon intended for manual cutting or thrusting. Its blade, longer than a knife or dagger, is attached to a hilt and can be straight or curved. A thrusting sword tends to have a straighter blade with a pointed tip. A slashing sword is more likely to be curved and to have a sharpened cutting edge on one or both sides of the blade. Many swords are designed for both thrusting and slashing. The precise definition of a sword varies by historical epoch and geographic region.
Historically, the sword developed in the Bronze Age, evolving from the dagger; the earliest specimens date to about 1600 BC. The later Iron Age sword remained fairly short and without a crossguard. The spatha, as it developed in the Late Roman army, became the predecessor of the European sword of the Middle Ages, at first adopted as the Migration Period sword, and only in the High Middle Ages, developed into the classical arming sword with crossguard. The word sword continues the Old English, sweord.
The use of a sword is known as swordsmanship or, in a modern context, as fencing. In the early modern period, western sword design diverged into two forms, the thrusting swords and the sabres.
Thrusting swords such as the rapier and eventually the smallsword were designed to impale their targets quickly and inflict deep stab wounds. Their long and straight yet light and well balanced design made them highly maneuverable and deadly in a duel but fairly ineffective when used in a slashing or chopping motion. A well aimed lunge and thrust could end a fight in seconds with just the sword's point, leading to the development of a fighting style which closely resembles modern fencing.
Slashing swords such as the sabre and similar blades such as the cutlass were built more heavily and were more typically used in warfare. Built for slashing and chopping at multiple enemies, often from horseback, the sabre's long curved blade and slightly forward weight balance gave it a deadly character all its own on the battlefield. Most sabres also had sharp points and double-edged blades, making them capable of piercing soldier after soldier in a cavalry charge. Sabres continued to see battlefield use until the early 20th century. The US Navy M1917 Cutlass used in World War I was kept in their armory well into World War II and many Marines were issued a variant called the M1941 Cutlass as a makeshift jungle machete during the Pacific War.
Non-European weapons classified as swords include single-edged weapons such as the Middle Eastern scimitar, the Chinese dao and the related Japanese katana. The Chinese jiàn is an example of a non-European double-edged sword, like the European models derived from the double-edged Iron Age sword.
History
Prehistory and antiquity
Bronze Age
The first weapons that can be described as "swords" date to around 3300 BC. They have been found in Arslantepe, Turkey, are made from arsenical bronze, and are about long. Some of them are inlaid with silver.
The sword developed from the knife or dagger. The sword became differentiated from the dagger during the Bronze Age (c. 3000 BC), when copper and bronze weapons were produced with long leaf-shaped blades and with hilts consisting of an extension of the blade in handle form. A knife is unlike a dagger in that a knife has only one cutting surface, while a dagger has two cutting surfaces. Construction of longer blades became possible during the 3rd millennium BC in the Middle East, first in arsenic copper, then in tin-bronze.
Blades longer than were rare and not practical until the late Bronze Age because the Young's modulus (stiffness) of bronze is relatively low, and consequently longer blades would bend easily. The development of the sword out of the dagger was gradual; the first weapons that can be classified as swords without any ambiguity are those found in Minoan Crete, dated to about 1700 BC, reaching a total length of more than . These are the "type A" swords of the Aegean Bronze Age.
One of the most important, and longest-lasting, types of swords of the European Bronze Age was the Naue II type (named for Julius Naue who first described them), also known as Griffzungenschwert (lit. "grip-tongue sword"). This type first appears in c. the 13th century BC in Northern Italy (or a general Urnfield background), and survives well into the Iron Age, with a life-span of about seven centuries. During its lifetime, metallurgy changed from bronze to iron, but not its basic design.
Naue II swords were exported from Europe to the Aegean, and as far afield as Ugarit, beginning about 1200 BC, i.e. just a few decades before the final collapse of the palace cultures in the Bronze Age collapse. Naue II swords could be as long as 85 cm, but most specimens fall into the 60 to 70 cm range. Robert Drews linked the Naue Type II Swords, which spread from Southern Europe into the Mediterranean, with the Bronze Age collapse. Naue II swords, along with Nordic full-hilted swords, were made with functionality and aesthetics in mind. The hilts of these swords were beautifully crafted and often contained false rivets in order to make the sword more visually appealing. Swords coming from northern Denmark and northern Germany usually contained three or more fake rivets in the hilt.
Sword production in China is attested from the Bronze Age Shang dynasty. The technology for bronze swords reached its high point during the Warring States period and Qin dynasty. Amongst the Warring States period swords, some unique technologies were used, such as casting high tin edges over softer, lower tin cores, or the application of diamond shaped patterns on the blade (see sword of Goujian). Also unique for Chinese bronzes is the consistent use of high tin bronze (17–21% tin) which is very hard and breaks if stressed too far, whereas other cultures preferred lower tin bronze (usually 10%), which bends if stressed too far. Although iron swords were made alongside bronze, it was not until the early Han period that iron completely replaced bronze.
In the Indian subcontinent, earliest available Bronze age swords of copper were discovered in the Indus Valley civilization sites in the northwestern regions of South Asia. Swords have been recovered in archaeological findings throughout the Ganges-Jamuna Doab region of Indian subcontinent, consisting of bronze but more commonly copper. Diverse specimens have been discovered in Fatehgarh, where there are several varieties of hilt. These swords have been variously dated to times between 1700 and 1400 BC. Other swords from this period in India have been discovered from Kallur, Raichur.
Iron Age
Iron became increasingly common from the 13th century BC; before that, the use of swords was less frequent. The iron was not quench-hardened, although it often contained sufficient carbon; instead it was work-hardened by hammering, like bronze. This made iron swords comparable to, or only slightly better than, bronze swords in strength and hardness, and they could still bend during use rather than spring back into shape. But easier production and better availability of the raw material for the first time permitted the equipping of entire armies with metal weapons, though Bronze Age Egyptian armies were sometimes fully equipped with bronze weapons.
Ancient swords are often found at burial sites. The sword was often placed on the right side of the corpse, or sometimes laid over it. In many late Iron Age graves, the sword and the scabbard were bent at 180 degrees, a practice known as "killing" the sword. Thus swords may have been regarded as the most potent and powerful objects.
Indian antiquity
High-carbon steel for swords, which would later appear as Damascus steel, was likely introduced in India around the mid-1st millennium BC. The Periplus of the Erythraean Sea mentions swords of Indian iron and steel being exported from ancient India to ancient Greece. Blades from the Indian subcontinent made of Damascus steel also found their way into Persia.
Greco-Roman antiquity
By the time of Classical Antiquity and the Parthian and Sassanid Empires in Iran, iron swords were common. The Greek xiphos and the Roman gladius are typical examples of the type, measuring some . The late Roman Empire introduced the longer spatha (the term for its wielder, spatharius, became a court rank in Constantinople), and from this time, the term longsword is applied to swords comparatively long for their respective periods.
Swords from the Parthian and Sassanian Empires were quite long, the blades on some late Sassanian swords being just under a metre long.
Swords were also used to administer various physical punishments, such as non-surgical amputation or capital punishment by decapitation. The use of a sword, an honourable weapon, was regarded in Europe since Roman times as a privilege reserved for the nobility and the upper classes.
Persian antiquity
In the first millennium BC, the Persian armies used a sword that was originally of Scythian design called the akinaka (acinaces). However, the great conquests of the Persians made the sword more famous as a Persian weapon, to the extent that the true nature of the weapon has been lost somewhat as the name akinaka has been used to refer to whichever form of sword the Persian army favoured at the time.
It is widely believed that the original akinaka was a 35 to 45 cm (14 to 18 inch) double-edged sword. The design was not uniform and in fact identification is made more on the nature of the scabbard than the weapon itself; the scabbard usually has a large, decorative mount allowing it to be suspended from a belt on the wearer's right side. Because of this, it is assumed that the sword was intended to be drawn with the blade pointing downwards ready for surprise stabbing attacks.
In the 12th century, the Seljuq dynasty had introduced the curved shamshir to Persia, and this was in extensive use by the early 16th century.
Chinese antiquity
Chinese iron swords made their first appearance in the later part of the Western Zhou dynasty, but iron and steel swords were not widely used until the 3rd century BC Han dynasty. The Chinese dao (刀 pinyin dāo) is single-edged, sometimes translated as sabre or broadsword, and the jian (劍 or 剑 pinyin jiàn) is double-edged. The zhanmadao (literally "horse chopping sword") is an extremely long, anti-cavalry sword from the Song dynasty era.
Middle Ages
Europe
Early and High Middle Ages
During the Middle Ages, sword technology improved, and the sword became a very advanced weapon. The spatha type remained popular throughout the Migration period and well into the Middle Ages. Vendel Age spathas were decorated with Germanic artwork (not unlike the Germanic bracteates fashioned after Roman coins). The Viking Age saw again a more standardized production, but the basic design remained indebted to the spatha.
Around the 10th century, the use of properly quenched hardened and tempered steel started to become much more common than in previous periods. The Frankish 'Ulfberht' blades (the name of the maker inlaid in the blade) were of particularly consistent high quality. Charles the Bald tried to prohibit the export of these swords, as they were used by Vikings in raids against the Franks.
Wootz steel (which is also known as Damascus steel) was a unique and highly prized steel developed on the Indian subcontinent as early as the 5th century BC. Its properties were unique due to the special smelting and reworking of the steel creating networks of iron carbides described as a globular cementite in a matrix of pearlite. The use of Damascus steel in swords became extremely popular in the 16th and 17th centuries.
It was only from the 11th century that Norman swords began to develop the crossguard (quillons). During the Crusades of the 12th to 13th century, this cruciform type of arming sword remained essentially stable, with variations mainly concerning the shape of the pommel. These swords were designed as cutting weapons, although effective points were becoming common to counter improvements in armour, especially the 14th-century change from mail to plate armour.
It was during the 14th century, with the growing use of more advanced armour, that the hand and a half sword, also known as a "bastard sword", came into being. It had an extended grip that meant it could be used with either one or two hands. Though these swords did not provide a full two-hand grip they allowed their wielders to hold a shield or parrying dagger in their off hand, or to use it as a two-handed sword for a more powerful blow.
In the Middle Ages, the sword was often used as a symbol of the word of God. The names given to many swords in mythology, literature, and history reflected the high prestige of the weapon and the wealth of the owner.
Late Middle Ages
From around 1300 to 1500, in concert with improved armour, innovative sword designs evolved more and more rapidly. The main transition was the lengthening of the grip, allowing two-handed use, and a longer blade. By 1400, this type of sword, at the time called langes Schwert (longsword) or spadone, was common, and a number of 15th- and 16th-century Fechtbücher offering instructions on their use survive. Another variant was the specialized armour-piercing swords of the estoc type. The longsword became popular due to its extreme reach and its cutting and thrusting abilities.
The estoc became popular because of its ability to thrust into the gaps between plates of armour. The grip was sometimes wrapped in wire or coarse animal hide to provide a better grip and to make it harder to knock a sword out of the user's hand.
A number of manuscripts covering longsword combat and techniques dating from the 13th–16th centuries exist in German, Italian, and English, providing extensive information on longsword combatives as used throughout this period. Many of these are now readily available online.
In the 16th century, the large zweihänder was used by the elite German and Swiss mercenaries known as doppelsöldners. Zweihänder, literally translated, means two-hander. The zweihänder possesses a long blade, as well as a huge guard for protection. It is estimated that some zweihänder swords were over long, with the one ascribed to Frisian warrior Pier Gerlofs Donia being long. The gigantic blade length was perfectly designed for manipulating and pushing away enemy polearms, which were major weapons around this time, in both Germany and Eastern Europe. Doppelsöldners also used katzbalgers, which means 'cat-gutter'. The katzbalger's S-shaped guard and blade made it perfect for bringing in when the fighting became too close to use a zweihänder.
Civilian use of swords became increasingly common during the late Renaissance, with duels being a preferred way to honourably settle disputes.
The side-sword was a type of war sword used by infantry during the Renaissance of Europe. This sword was a direct descendant of the knightly sword. Quite popular between the 16th and 17th centuries, they were ideal for handling the mix of armoured and unarmoured opponents of that time. A new technique of placing one's finger on the ricasso to improve the grip (a practice that would continue in the rapier) led to the production of hilts with a guard for the finger. This sword design eventually led to the development of the civilian rapier, but it was not replaced by it, and the side-sword continued to be used during the rapier's lifetime. As it could be used for both cutting and thrusting, the term "cut and thrust sword" is sometimes used interchangeably with side-sword. As rapiers became more popular, attempts were made to hybridize the blade, sacrificing the effectiveness found in each unique weapon design. These are still considered side-swords and are sometimes labeled sword rapier or cutting rapier by modern collectors.
Side-swords used in conjunction with bucklers became so popular that it caused the term swashbuckler to be coined. This word stems from the new fighting style of the side-sword and buckler which was filled with much "swashing and making a noise on the buckler".
Within the Ottoman Empire, the use of a curved sabre called the yatagan started in the mid-16th century. It would become the weapon of choice for many in Turkey and the Balkans.
The sword in this time period was the most personal weapon, the most prestigious, and the most versatile for close combat, but it came to decline in military use as technology, such as the crossbow and firearms changed warfare. However, it maintained a key role in civilian self-defence.
Middle East
The earliest evidence of curved swords, or scimitars (and other regional variants as the Arabian saif, the Persian shamshir and the Turkic kilij) is from the 9th century, when it was used among soldiers in the Khurasan region of Persia.
Africa
The takoba is a type of broadsword originating in the western Sahel, descended from various Byzantine and Islamic swords. It has a straight double-edged blade measuring about one meter in length, usually imported from Europe.
Abyssinian swords related to the Persian shamshir are known as shotel. The Asante people adopted swords under the name of akrafena. They are still used today in ceremonies, such as the Odwira festival.
East Asia
As steel technology improved, single-edged weapons became popular throughout Asia. Derived from the Chinese jian or dao, the Korean hwandudaedo are known from the early medieval Three Kingdoms. Production of the Japanese tachi, a precursor to the katana, is recorded from c. AD 900 (see Japanese sword).
Japan was famous for the swords it forged in the early 13th century for the class of warrior-nobility known as the Samurai. Western historians have said that Japanese katana were among the finest cutting weapons in world military history. The types of swords used by the Samurai included the ōdachi (extra long field sword), tachi (long cavalry sword), katana (long sword), and wakizashi (shorter companion sword for katana). Japanese swords that pre-date the rise of the samurai caste include the tsurugi (straight double-edged blade) and chokutō (straight one-edged blade). Japanese swordmaking reached the height of its development in the 15th and 16th centuries, when samurai increasingly found a need for a sword to use in closer quarters, leading to the creation of the modern katana. High quality Japanese swords have been exported to neighboring Asian countries since before the 11th century. From the 15th century to the 16th century, more than 200,000 swords were exported, reaching a quantitative peak, but these were simple swords made exclusively for mass production, specialized for export and lending to conscripted farmers (ashigaru).
South Asia
The khanda is a double-edge straight sword. It is often featured in religious iconography, theatre and art depicting the ancient history of India. Some communities venerate the weapon as a symbol of Shiva. It is a common weapon in the martial arts in the Indian subcontinent. The khanda often appears in Hindu, Buddhist and Sikh scriptures and art. In Sri Lanka, a unique wind furnace was used to produce the high-quality steel. This gave the blade a very hard cutting edge and beautiful patterns. For these reasons it became a very popular trading material.
The firangi (derived from the Arabic term for a Western European, a "Frank") was a sword type which used blades manufactured in Western Europe and imported by the Portuguese, or made locally in imitation of European blades. Because of its length the firangi is usually regarded as primarily a cavalry weapon. The sword has been especially associated with the Marathas, who were famed for their cavalry. However, the firangi was also widely used by Sikhs and Rajputs.
The talwar () is a type of curved sword from India and other countries of the Indian subcontinent, it was adopted by communities such as Rajputs, Sikhs and Marathas, who favored the sword as their main weapon. It became more widespread in the medieval era.
The urumi (lit. "curling blade") is a "sword" with a flexible whip-like blade.
Southeast Asia
In Indonesia, the images of Indian style swords can be found in Hindu gods statues from ancient Java circa 8th to 10th century. However the native types of blade known as kris, parang, klewang and golok were more popular as weapons. These daggers are shorter than a sword but longer than a common dagger.
In the Philippines, traditional large swords known as kampilans and panabas were used in combat by the natives. A notable wielder of the kampilan was Lapu-Lapu, the king of Mactan and his warriors who defeated the Spaniards and killed Portuguese explorer Ferdinand Magellan at the Battle of Mactan on 27 April 1521. Traditional swords in the Philippines were immediately banned, but the training in swordsmanship was later hidden from the occupying Spaniards by practices in dances. But because of the banning, Filipinos were forced to use swords that were disguised as farm tools. Bolos and baliswords were used during the revolutions against the colonialists not only because ammunition for guns was scarce, but also for concealability while walking in crowded streets and homes. Bolos were also used by young boys who joined their parents in the revolution and by young girls and their mothers in defending the town while the men were on the battlefields. During the Philippine–American War in events such as the Battle of Balangiga, most of an American company was hacked to death or seriously injured by bolo-wielding guerillas in Balangiga, Samar. When the Japanese took control of the country, several American special operations groups stationed in the Philippines were introduced to Filipino martial arts and swordsmanship, leading to this style reaching America despite the fact that natives were reluctant to allow outsiders in on their fighting secrets.
Pre-Columbian Americas
The macuahuitl is a wooden broadsword and club that was utilized by various Mesoamerican civilizations, such as those of the Aztecs, Maya, Olmecs, Toltecs, and Mixtecs.
Pacific Islands
In the Gilbert Islands, the native Kiribati people have developed a type of broadsword made from shark teeth, which serves a similar function to the leiomano used by the Native Hawaiians.
Early modern history
Military sword
A single-edged type of sidearm used by the Hussites was popularized in 16th-century Germany under its Czech name dusack, also known as Säbel auf Teutsch gefasst ("sabre fitted in the German manner"). A closely related weapon is the schnepf or Swiss sabre used in Early Modern Switzerland.
The cut-and-thrust mortuary sword was used after 1625 by cavalry during the English Civil War. This (usually) two-edged sword sported a half-basket hilt with a straight blade some 90–105 cm long. Later in the 17th century, the swords used by cavalry became predominantly single-edged. The so-called walloon sword (épée wallone) was common in the Thirty Years' War and Baroque era. Its hilt was ambidextrous with shell-guards and knuckle-bow that inspired 18th-century continental hunting hangers. Following their campaign in the Netherlands in 1672, the French began producing this weapon as their first regulation sword. Weapons of this design were also issued to the Swedish army from the time of Gustavus Adolphus until as late as the 1850s.
Duelling sword
The rapier is believed to have evolved either from the Spanish espada ropera or from the swords of the Italian nobility somewhere in the later part of the 16th century. The rapier differed from most earlier swords in that it was not a military weapon but a primarily civilian sword. Both the rapier and the Italian schiavona developed the crossguard into a basket-shaped guard for hand protection.
During the 17th and 18th centuries, the shorter small sword became an essential fashion accessory in European countries and the New World, though in some places such as the Scottish Highlands large swords as the basket-hilted broadsword were preferred, and most wealthy men and military officers carried one slung from a belt. Both the small sword and the rapier remained popular dueling swords well into the 18th century.
As the wearing of swords fell out of fashion, canes took their place in a gentleman's wardrobe; by the Victorian era, gentlemen had come to carry the umbrella instead. Some canes, known as sword canes or swordsticks, incorporate a concealed blade. The French martial art la canne developed for fighting with canes and swordsticks and has now evolved into a sport. The English martial art singlestick is very similar.
With the rise of the pistol duel, the duelling sword fell out of fashion long before the practice of duelling itself. By about 1770, English duelists enthusiastically adopted the pistol, and sword duels dwindled. However, the custom of duelling with epées persisted well into the 20th century in France. Such modern duels were not fought to the death; the duellists' aim was instead merely to draw blood from the opponent's sword arm.
Late modern history
Military sidearm
Towards the end of its useful life, the sword served more as a weapon of self-defence than for use on the battlefield, and the military importance of swords steadily decreased during the Modern Age. Even as a personal sidearm, the sword began to lose its preeminence in the early 19th century, reflecting the development of reliable handguns.
However, swords were still normally carried in combat by cavalrymen and by officers of other branches throughout the 19th and early 20th centuries, both in colonial and European warfare. For example, during the Aceh War the Acehnese klewangs, a sword similar to the machete, proved very effective in close quarters combat with Dutch troops, leading the Royal Netherlands East Indies Army to adopt a heavy cutlass, also called klewang (very similar in appearance to the US Navy Model 1917 Cutlass) to counter it. Mobile troops armed with carbines and klewangs succeeded in suppressing Aceh resistance where traditional infantry with rifle and bayonet had failed. From that time on until the 1950s the Royal Dutch East Indies Army, Royal Dutch Army, Royal Dutch Navy and Dutch police used these cutlasses called Klewang.
Swords continued in general peacetime use by cavalry of most armies during the years prior to World War I. The British Army formally adopted a completely new design of cavalry sword in 1908, almost the last change in British Army weapons before the outbreak of the war. At the outbreak of World War I infantry officers in all combatant armies then involved (French, German, British, Austro-Hungarian, Russian, Belgian and Serbian) still carried swords as part of their field equipment. On mobilization in August 1914 all serving British Army officers were required to have their swords sharpened as the only peacetime use of the weapon had been for saluting on parade. The high visibility and limited practical use of the sword however led to it being abandoned within weeks, although most cavalry continued to carry sabres throughout the war. While retained as a symbol of rank and status by at least senior officers of infantry, artillery and other branches, the sword was usually left with non-essential baggage when units reached the front line. It was not until the late 1920s and early 1930s that this historic weapon was finally discarded for all but ceremonial purposes by most remaining horse mounted regiments of Europe and the Americas.
In China troops used the long anti-cavalry miao dao well into the Second Sino-Japanese War. The last units of British heavy cavalry switched to using armoured vehicles as late as 1938. Swords and other dedicated melee weapons were used occasionally by many countries during World War II, but typically as a secondary weapon as they were outclassed by coexisting firearms. A notable exception was the Imperial Japanese Army where, for cultural reasons, all officers and warrant officers carried the shin-gunto ("new military sword") into battle from 1934 until 1945.
Ceremonial use
Swords are commonly worn as a ceremonial item by officers in many military and naval services throughout the world. Occasions to wear swords include any event in dress uniforms where the rank-and-file carry arms: parades, reviews, courts-martial, tattoos, and changes of command. They are also commonly worn for officers' weddings, and when wearing dress uniforms to church—although they are rarely actually worn in the church itself.
In the British forces they are also worn for any appearance at Court. In the United States, every Naval officer at or above the rank of Lieutenant Commander is required to own a sword, which can be prescribed for any formal outdoor ceremonial occasion; they are normally worn for changes of command and parades. For some Navy parades, cutlasses are issued to petty officers and chief petty officers.
In the U.S. Marine Corps every officer must own a sword, which is prescribed for formal parades and other ceremonies where dress uniforms are worn and the rank-and-file are under arms. On these occasions, depending on their billet, Marine Non-Commissioned Officers (E-4 and above) may also be required to carry swords, which have hilts of a pattern similar to U.S. Naval officers' swords but are actually sabres. The USMC Model 1859 NCO Sword is the longest continuously issued edged weapon in the U.S. inventory.
The Marine officer swords are of the Mameluke pattern which was adopted in 1825 in recognition of the Marines' key role in the capture of the Tripolitan city of Derna during the First Barbary War. Taken out of issue for approximately 20 years from 1855 until 1875, it was restored to service in the year of the Corps' centennial and has remained in issue since.
Religious
In the occult practices of Wicca, a sword or knife often referred to as an athame is used as a magical tool.
Sword replicas
The production of replicas of historical swords originates with 19th-century historicism. Contemporary replicas can range from cheap factory produced look-alikes to exact recreations of individual artifacts, including an approximation of the historical production methods.
Some kinds of swords are still commonly used today as weapons, often as a side arm for military infantry. The Japanese katana, wakizashi and tantō are carried by some infantry and officers in Japan and other parts of Asia and the kukri is the official melee weapon for Nepal. Other swords in use today are the sabre, the scimitar, the shortsword and the machete.
In the case of a rat-tail tang, the maker welds a thin rod to the end of the blade at the crossguard; this rod goes through the grip.
In traditional construction, swordsmiths peened such tangs over the end of the pommel, or occasionally welded the hilt furniture to the tang and threaded the end for screwing on a pommel. This style is often referred to as a "narrow" or "hidden" tang. Modern, less traditional replicas often feature a threaded pommel or a pommel nut which holds the hilt together and allows dismantling.
In a "full" tang (most commonly used in knives and machetes), the tang has about the same width as the blade, and is generally the same shape as the grip. In European or Asian swords sold today, many advertised "full" tangs may actually involve a forged rat-tail tang.
Morphology
The sword consists of the blade and the hilt. The term scabbard applies to the cover for the sword blade when not in use.
Blade
There is considerable variation in the detailed design of sword blades. The diagram opposite shows a typical Medieval European sword.
Early iron blades have rounded points due to the limited metallurgy of the time. These were still effective for thrusting against lightly armoured opponents. As armour advanced, blades were made narrower, stiffer and sharply pointed to defeat the armour by thrusting.
Dedicated cutting blades are wide and thin, and often have grooves known as fullers which lighten the blade at the cost of some of the blade's stiffness. The edges of a cutting sword are almost parallel. Blades oriented for the thrust are thicker, sometimes with a distinct midrib for increased stiffness, and have a strong taper and an acute point. The geometry of a cutting sword blade allows for acute edge angles. An edge with a more acute angle is more inclined to degrade quickly in combat than an edge with a more obtuse angle; moreover, an acute edge angle is not the primary factor in a blade's sharpness.
The part of the blade between the center of percussion (CoP) and the point is called the foible (weak) of the blade, and that between the center of balance (CoB) and the hilt is the forte (strong). The section in between the CoP and the CoB is the middle.
The ricasso or shoulder identifies a short section of blade immediately below the guard that is left completely unsharpened. Many swords have no ricasso. On some large weapons, such as the German Zweihänder, a metal cover surrounded the ricasso, and a swordsman might grip it in one hand to wield the weapon more easily in close-quarter combat.
The ricasso normally bears the maker's mark.
The tang is the extension of the blade to which the hilt is fitted.
On Japanese blades, the maker's mark appears on the tang under the grip.
Hilt
The hilt is the collective term for the parts allowing for the handling and control of the blade; these consist of the grip, the pommel, and a simple or elaborate guard, which in post-Viking Age swords could consist of only a crossguard (called a cruciform hilt or quillons). The pommel was originally designed as a stop to prevent the sword slipping from the hand. From around the 11th century onward it became a counterbalance to the blade, allowing a more fluid style of fighting.
It can also be used as a blunt instrument at close range, and its weight affects the centre of percussion. In later times a sword knot or tassel was sometimes added. By the 17th century, with the growing use of firearms and the accompanying decline in the use of armour, many rapiers and dueling swords had developed elaborate basket hilts, which protected the palm of the wielder and rendered the gauntlet obsolete. By contrast, Japanese swords of the early modern period customarily used a small disc guard, or tsuba.
In late medieval and Renaissance era European swords, a flap of leather called the chappe or rain guard was attached to a sword's crossguard at the base of the hilt to protect the mouth of the scabbard and prevent water from entering.
Sword scabbards and suspension
Common accessories to the sword include the scabbard and baldric, known as a 'sword belt'.
The scabbard, also known as the sheath, is a protective cover often provided for the sword blade. Scabbards have been made of many materials, including leather, wood, and metals such as brass or steel. The metal fitting where the blade enters the leather or metal scabbard is called the throat, which is often part of a larger scabbard mount, or locket, that bears a carrying ring or stud to facilitate wearing the sword. The blade's point in leather scabbards is usually protected by a metal tip, or chape, which on both leather and metal scabbards is often given further protection from wear by an extension called a drag, or shoe.
A sword belt is a belt with an attachment for the sword's scabbard, used to carry it when not in use. It is usually fixed to the scabbard of the sword, providing a fast means of drawing the sword in battle. Examples of sword belts include the Balteus used by the Roman legionary. Swords and sword belts continue in use for ceremonial occasions by military forces.
Typology
Sword typology is based on morphological criteria on the one hand (blade shape (cross-section, taper, and length), shape and size of the hilt and pommel), and age and place of origin on the other (Bronze Age, Iron Age, European (medieval, early modern, modern), Asian).
The relatively comprehensive Oakeshott typology was created by historian and illustrator Ewart Oakeshott as a way to define and catalogue European swords of the medieval period based on physical form, including blade shape and hilt configuration. The typology also focuses on the smaller, and in some cases contemporary, single-handed swords such as the arming sword.
Single vs. double-edged
As noted above, the terms longsword, broad sword, great sword, and Gaelic claymore are used relative to the era under consideration, and each term designates a particular type of sword.
Jian
In most Asian countries, a sword (jian 劍; geom 검; ken/tsurugi 剣) is a double-edged straight-bladed weapon, while a knife or sabre (dāo 刀; do 도; tō/katana 刀) refers to a single-edged object.
Kirpan
Among the Sikhs, the sword is held in very high esteem. A single-edged sword is called a kirpan, and its double-edged counterpart a khanda or tegha.
Churika
The South Indian churika is a handheld double-edged sword traditionally used in the Malabar region of Kerala. It is also worshipped as the weapon of Vettakkorumakan, the hunter god in Hinduism.
Backsword and falchion
European terminology does give generic names for single-edged and double-edged blades, but mostly refers to specific types, with the term 'sword' covering them all. For example, the backsword may be so called because it is single-edged, but the falchion, which is also single-edged, is given its own specific name.
Single vs. two-handed use
Two-handed
A two-handed sword is any sword that usually requires two hands to wield, or more specifically the very large swords of the 16th century.
Throughout history two-handed swords have generally been less common than their one-handed counterparts, one exception being their common use in Japan. Two-handed grips have two advantages: obviously they allow the strength of two hands to be used, not just one, but by spacing the hands apart they also allow a torque to be applied, rotating the sword in a slashing manner.
A two-handed grip may be needed for one of two reasons: either to wield a particularly large sword or else with the single-sided Japanese tachi for a slashing cut. Slashing swords may have distinctively long hilt grips to facilitate this.
Hand and a half sword
A hand-and-a-half sword, colloquially known as a "bastard sword", was a sword with an extended grip and sometimes an extended pommel, so that it could be used with either one or two hands. Although these swords may not provide a full two-hand grip, they allowed their wielders to hold a shield or parrying dagger in the off hand, or to use the sword two-handed for a more powerful blow. These should not be confused with the longsword, two-handed sword, or Zweihänder, which were always intended to be used with two hands.
Laws on carrying a sword
The Visigothic Code of Ervig (680-687) made ownership of a sword mandatory for men joining the Visigothic army, regardless of whether the men were Goth or Roman. A number of Charlemagne capitularies made ownership of a sword mandatory, for example, those who owned a warhorse needed to also own a sword.
In fiction
In fantasy, magic swords often appear, based on their use in myth and legend. The science fiction counterpart to these is known as an energy sword (sometimes also referred to as a "beam sword" or "laser sword"), a sword whose blade consists of, or is augmented by, concentrated energy. A well known example of this type of sword is the lightsaber, shown in the Star Wars franchise.
| Technology | Melee weapons | null |
27865 | https://en.wikipedia.org/wiki/Surface%20%28topology%29 | Surface (topology) | In the part of mathematics referred to as topology, a surface is a two-dimensional manifold. Some surfaces arise as the boundaries of three-dimensional solid figures; for example, the sphere is the boundary of the solid ball. Other surfaces arise as graphs of functions of two variables; see the figure at right. However, surfaces can also be defined abstractly, without reference to any ambient space. For example, the Klein bottle is a surface that cannot be embedded in three-dimensional Euclidean space.
Topological surfaces are sometimes equipped with additional information, such as a Riemannian metric or a complex structure, that connects them to other disciplines within mathematics, such as differential geometry and complex analysis. The various mathematical notions of surface can be used to model surfaces in the physical world.
In general
In mathematics, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3, such as spheres. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not.
A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface.
Definitions and first examples
A (topological) surface is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E2. Such a neighborhood, together with the corresponding homeomorphism, is known as a (coordinate) chart. It is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates and these homeomorphisms lead us to describe surfaces as being locally Euclidean.
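A standard concrete example of such a chart: for the unit sphere S2 in R3 with north pole N = (0, 0, 1), stereographic projection
\[ \varphi(x, y, z) = \left(\frac{x}{1 - z}, \; \frac{y}{1 - z}\right) \]
is a homeomorphism from the sphere minus N onto the plane E2; together with the analogous projection from the south pole it supplies a chart around every point, so the sphere is a topological surface.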
In most writings on the subject, it is often assumed, explicitly or implicitly, that as a topological space a surface is also nonempty, second-countable, and Hausdorff. It is also often assumed that the surfaces under consideration are connected.
The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second-countable, and connected.
More generally, a (topological) surface with boundary is a Hausdorff topological space in which every point has an open neighbourhood homeomorphic to some open subset of the closure of the upper half-plane H2 in C. These homeomorphisms are also known as (coordinate) charts. The boundary of the upper half-plane is the x-axis. A point on the surface mapped via a chart to the x-axis is termed a boundary point. The collection of such points is known as the boundary of the surface which is necessarily a one-manifold, that is, the union of closed curves. On the other hand, a point mapped to above the x-axis is an interior point. The collection of interior points is the interior of the surface which is always non-empty. The closed disk is a simple example of a surface with boundary. The boundary of the disc is a circle.
The term surface used without qualification refers to surfaces without boundary. In particular, a surface with empty boundary is a surface in the usual sense. A surface with empty boundary which is compact is known as a 'closed' surface. The two-dimensional sphere, the two-dimensional torus, and the real projective plane are examples of closed surfaces.
The Möbius strip is a surface on which the distinction between clockwise and counterclockwise can be defined locally, but not globally. In general, a surface is said to be orientable if it does not contain a homeomorphic copy of the Möbius strip; intuitively, it has two distinct "sides". For example, the sphere and torus are orientable, while the real projective plane is not (because the real projective plane with one point removed is homeomorphic to the open Möbius strip).
In differential and algebraic geometry, extra structure is added upon the topology of the surface. This added structure can be a smoothness structure (making it possible to define differentiable maps to and from the surface), a Riemannian metric (making it possible to define length and angles on the surface), a complex structure (making it possible to define holomorphic maps to and from the surface—in which case the surface is called a Riemann surface), or an algebraic structure (making it possible to detect singularities, such as self-intersections and cusps, that cannot be described solely in terms of the underlying topology).
Extrinsically defined surfaces and embeddings
Historically, surfaces were initially defined as subspaces of Euclidean spaces. Often, these surfaces were the locus of zeros of certain functions, usually polynomial functions. Such a definition considered the surface as part of a larger (Euclidean) space, and as such was termed extrinsic.
In the previous section, a surface is defined as a topological space with certain properties, namely Hausdorff and locally Euclidean. This topological space is not considered a subspace of another space. In this sense, the definition given above, which is the definition that mathematicians use at present, is intrinsic.
A surface defined intrinsically is not required to satisfy the added constraint of being a subspace of Euclidean space. It may seem possible for some surfaces defined intrinsically not to be surfaces in the extrinsic sense. However, the Whitney embedding theorem asserts that every surface can in fact be embedded homeomorphically into Euclidean space, in fact into E4: the extrinsic and intrinsic approaches turn out to be equivalent.
In fact, any compact surface that is either orientable or has a boundary can be embedded in E3; on the other hand, the real projective plane, which is compact, non-orientable and without boundary, cannot be embedded into E3 (see Gramain). Steiner surfaces, including Boy's surface, the Roman surface and the cross-cap, are models of the real projective plane in E3, but only the Boy surface is an immersed surface. All these models are singular at points where they intersect themselves.
The Alexander horned sphere is a well-known pathological embedding of the two-sphere into the three-sphere.
The chosen embedding (if any) of a surface into another space is regarded as extrinsic information; it is not essential to the surface itself. For example, a torus can be embedded into E3 in the "standard" manner (which looks like a bagel) or in a knotted manner (see figure). The two embedded tori are homeomorphic, but not isotopic: They are topologically equivalent, but their embeddings are not.
The image of a continuous, injective function from R2 to higher-dimensional Rn is said to be a parametric surface. Such an image is so-called because the x- and y- directions of the domain R2 are 2 variables that parametrize the image. A parametric surface need not be a topological surface. A surface of revolution can be viewed as a special kind of parametric surface.
If f is a smooth function from R3 to R whose gradient is nowhere zero, then the locus of zeros of f does define a surface, known as an implicit surface. If the condition of non-vanishing gradient is dropped, then the zero locus may develop singularities.
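For instance (a standard textbook example rather than one taken from this article's sources), the unit sphere is the implicit surface defined by

    f(x, y, z) = x^2 + y^2 + z^2 - 1, \qquad \nabla f = (2x, 2y, 2z),

since the gradient vanishes only at the origin, which does not lie on the zero locus. By contrast, g(x, y, z) = x^2 + y^2 - z^2 has vanishing gradient at the origin, which does lie on its zero locus, and that locus (a double cone) is indeed singular there.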
Construction from polygons
Each closed surface can be constructed from an oriented polygon with an even number of sides, called a fundamental polygon of the surface, by pairwise identification of its edges. For example, in each polygon below, attaching the sides with matching labels (A with A, B with B), so that the arrows point in the same direction, yields the indicated surface.
Any fundamental polygon can be written symbolically as follows. Begin at any vertex, and proceed around the perimeter of the polygon in either direction until returning to the starting vertex. During this traversal, record the label on each edge in order, with an exponent of -1 if the edge points opposite to the direction of traversal. The four models above, when traversed clockwise starting at the upper left, yield
sphere: AA⁻¹
real projective plane: AA
torus: ABA⁻¹B⁻¹
Klein bottle: ABAB⁻¹.
Note that the sphere and the projective plane can both be realized as quotients of the 2-gon, while the torus and Klein bottle require a 4-gon (square).
The expression thus derived from a fundamental polygon of a surface turns out to be the sole relation in a presentation of the fundamental group of the surface with the polygon edge labels as generators. This is a consequence of the Seifert–van Kampen theorem.
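For example, reading off the torus and Klein bottle edge words listed above gives the familiar presentations (a standard computation, sketched here only for illustration):

    \pi_1(T) \cong \langle A, B \mid A B A^{-1} B^{-1} \rangle \cong \mathbb{Z}^2, \qquad \pi_1(K) \cong \langle A, B \mid A B A B^{-1} \rangle.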
Gluing edges of polygons is a special kind of quotient space process. The quotient concept can be applied in greater generality to produce new or alternative constructions of surfaces. For example, the real projective plane can be obtained as the quotient of the sphere by identifying all pairs of opposite points on the sphere. Another example of a quotient is the connected sum.
Connected sums
The connected sum of two surfaces M and N, denoted M # N, is obtained by removing a disk from each of them and gluing them along the boundary components that result. The boundary of a disk is a circle, so these boundary components are circles. The Euler characteristic of M # N is the sum of the Euler characteristics of the summands, minus two: χ(M # N) = χ(M) + χ(N) − 2.
The sphere S is an identity element for the connected sum, meaning that M # S = M. This is because deleting a disk from the sphere leaves a disk, which simply replaces the disk deleted from M upon gluing.
Connected summation with the torus T is also described as attaching a "handle" to the other summand M. If M is orientable, then so is M # T. The connected sum is associative, so the connected sum of a finite collection of surfaces is well-defined.
The connected sum of two real projective planes, P # P (where P denotes the real projective plane), is the Klein bottle K. The connected sum of the real projective plane and the Klein bottle is homeomorphic to the connected sum of the real projective plane with the torus; in a formula, P # K = P # T. Thus, the connected sum of three real projective planes is homeomorphic to the connected sum of the real projective plane with the torus. Any connected sum involving a real projective plane is nonorientable.
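A quick consistency check, using the connected-sum formula above and the Euler characteristics quoted in the classification below (χ = 2, 0, and 1 for the sphere, the torus, and the projective plane); this is a standard computation rather than a quotation from a source:

    \chi(T \# T) = 0 + 0 - 2 = -2, \qquad \chi(P \# P) = 1 + 1 - 2 = 0 = \chi(K),

so the connected sum of two projective planes has the same Euler characteristic as the Klein bottle, as the identification P # P = K requires.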
Closed surfaces
A closed surface is a surface that is compact and without boundary. Examples of closed surfaces include the sphere, the torus and the Klein bottle. Examples of non-closed surfaces include an open disk (which is a sphere with a puncture), a cylinder (which is a sphere with two punctures), and the Möbius strip.
A surface embedded in three-dimensional space is closed if and only if it is the boundary of a solid. As with any closed manifold, a surface embedded in Euclidean space that is closed with respect to the inherited Euclidean topology is not necessarily a closed surface; for example, a disk embedded in E3 that contains its boundary is a surface that is topologically closed but not a closed surface.
Classification of closed surfaces
The classification theorem of closed surfaces states that any connected closed surface is homeomorphic to some member of one of these three families:
the sphere,
the connected sum of g tori for g ≥ 1,
the connected sum of k real projective planes for k ≥ 1.
The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number g of tori involved is called the genus of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of g tori is 2 − 2g.
The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of k of them is 2 − k.
It follows that a closed surface is determined, up to homeomorphism, by two pieces of information: its Euler characteristic, and whether it is orientable or not. In other words, Euler characteristic and orientability completely classify closed surfaces up to homeomorphism.
Closed surfaces with multiple connected components are classified by the class of each of their connected components, and thus one generally assumes that the surface is connected.
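Because the classification is decided by orientability and Euler characteristic alone, it can be phrased as a short decision procedure. The following Python sketch is only an illustration of the statement above (the function name and messages are ours, not from any source); it simply inverts the formulas χ = 2 − 2g and χ = 2 − k.

    def classify_closed_surface(chi, orientable):
        """Name the connected closed surface with Euler characteristic chi.

        Orientable surfaces satisfy chi = 2 - 2g (so chi is even and at most 2);
        non-orientable ones satisfy chi = 2 - k with k >= 1 (so chi is at most 1).
        """
        if orientable:
            if chi > 2 or chi % 2 != 0:
                raise ValueError("no orientable closed surface has this Euler characteristic")
            g = (2 - chi) // 2
            if g == 0:
                return "sphere"
            return f"connected sum of {g} torus/tori (genus {g})"
        if chi > 1:
            raise ValueError("no non-orientable closed surface has this Euler characteristic")
        k = 2 - chi
        return f"connected sum of {k} real projective plane(s)"

    # Both the torus and the Klein bottle have Euler characteristic 0:
    print(classify_closed_surface(0, orientable=True))   # genus-1 orientable surface (the torus)
    print(classify_closed_surface(0, orientable=False))  # two projective planes (the Klein bottle)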
Monoid structure
Relating this classification to connected sums, the closed surfaces up to homeomorphism form a commutative monoid under the operation of connected sum, as indeed do manifolds of any fixed dimension. The identity is the sphere, while the real projective plane and the torus generate this monoid, with a single relation P # P # P = P # T, which may also be written P # K = P # T, since K = P # P. This relation is sometimes known as Dyck's theorem after Walther von Dyck, who proved it in 1888, and the triple cross surface P # P # P is accordingly called Dyck's surface.
Geometrically, connect-sum with a torus (# T) adds a handle with both ends attached to the same side of the surface, while connect-sum with a Klein bottle (# K) adds a handle with the two ends attached to opposite sides of an orientable surface; in the presence of a projective plane (# P), the surface is not orientable (there is no notion of side), so there is no difference between attaching a torus and attaching a Klein bottle, which explains the relation.
Proof
The classification of closed surfaces has been known since the 1860s, and today a number of proofs exist.
Topological and combinatorial proofs in general rely on the difficult result that every compact 2-manifold is homeomorphic to a simplicial complex, which is of interest in its own right. The most common proof of the classification brings every triangulated surface to a standard form. A simplified proof, which avoids a standard form, was discovered by John H. Conway circa 1992; he called it the "Zero Irrelevancy Proof" or "ZIP proof".
A geometric proof, which yields a stronger geometric result, is the uniformization theorem. This was originally proven only for Riemann surfaces in the 1880s and 1900s by Felix Klein, Paul Koebe, and Henri Poincaré.
Surfaces with boundary
Compact surfaces, possibly with boundary, are simply closed surfaces with a finite number of holes (open discs that have been removed). Thus, a connected compact surface is classified by the number of boundary components and the genus of the corresponding closed surface – equivalently, by the number of boundary components, the orientability, and Euler characteristic. The genus of a compact surface is defined as the genus of the corresponding closed surface.
This classification follows almost immediately from the classification of closed surfaces: removing an open disc from a closed surface yields a compact surface with a circle as boundary component, and removing k open discs yields a compact surface with k disjoint circles as boundary components. The precise locations of the holes are irrelevant, because the homeomorphism group acts k-transitively on any connected manifold of dimension at least 2.
Conversely, the boundary of a compact surface is a closed 1-manifold, and is therefore the disjoint union of a finite number of circles; filling these circles with disks (formally, taking the cone) yields a closed surface.
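Since removing an open disc lowers the Euler characteristic by one, a compact orientable surface of genus g with k boundary circles satisfies (a standard formula stated here for orientation, not quoted from a particular source)

    \chi = 2 - 2g - k.

For example, a three-holed sphere (g = 0, k = 3, the "pair of pants") and a one-holed torus (g = 1, k = 1) both have χ = −1; they are distinguished by genus and number of boundary components, exactly the data used in the classification above.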
The unique compact orientable surface of genus g and with k boundary components is often denoted Σg,k, for example in the study of the mapping class group.
Non-compact surfaces
Non-compact surfaces are more difficult to classify. As a simple example, a non-compact surface can be obtained by puncturing (removing a finite set of points from) a closed manifold. On the other hand, any open subset of a compact surface is itself a non-compact surface; consider, for example, the complement of a Cantor set in the sphere, otherwise known as the Cantor tree surface. However, not every non-compact surface is a subset of a compact surface; two canonical counterexamples are the Jacob's ladder and the Loch Ness monster, which are non-compact surfaces with infinite genus.
A non-compact surface M has a non-empty space of ends E(M), which informally speaking describes the ways that the surface "goes off to infinity". The space E(M) is always topologically equivalent to a closed subspace of the Cantor set. M may have a finite or countably infinite number Nh of handles, as well as a finite or countably infinite number Np of projective planes. If both Nh and Np are finite, then these two numbers, and the topological type of space of ends, classify the surface M up to topological equivalence. If either or both of Nh and Np is infinite, then the topological type of M depends not only on these two numbers but also on how the infinite one(s) approach the space of ends. In general the topological type of M is determined by the four subspaces of E(M) that are limit points of infinitely many handles and infinitely many projective planes, limit points of only handles, limit points of only projective planes, and limit points of neither.
Assumption of second-countability
If one removes the assumption of second-countability from the definition of a surface, there exist (necessarily non-compact) topological surfaces having no countable base for their topology. Perhaps the simplest example is the Cartesian product of the long line with the space of real numbers.
Another surface having no countable base for its topology, but not requiring the Axiom of Choice to prove its existence, is the Prüfer manifold, which can be described by simple equations that show it to be a real-analytic surface. The Prüfer manifold may be thought of as the upper half plane together with one additional "tongue" Tx hanging down from it directly below the point (x,0), for each real x.
In 1925, Tibor Radó proved that all Riemann surfaces (i.e., one-dimensional complex manifolds) are necessarily second-countable (Radó's theorem). By contrast, if one replaces the real numbers in the construction of the Prüfer surface by the complex numbers, one obtains a two-dimensional complex manifold (which is necessarily a 4-dimensional real manifold) with no countable base.
Surfaces in geometry
Polyhedra, such as the boundary of a cube, are among the first surfaces encountered in geometry. It is also possible to define smooth surfaces, in which each point has a neighborhood diffeomorphic to some open set in E2. This elaboration allows calculus to be applied to surfaces to prove many results.
Two smooth surfaces are diffeomorphic if and only if they are homeomorphic. (The analogous result does not hold for higher-dimensional manifolds.) Thus closed surfaces are classified up to diffeomorphism by their Euler characteristic and orientability.
Smooth surfaces equipped with Riemannian metrics are of foundational importance in differential geometry. A Riemannian metric endows a surface with notions of geodesic, distance, angle, and area. It also gives rise to Gaussian curvature, which describes how curved or bent the surface is at each point. Curvature is a rigid, geometric property, in that it is not preserved by general diffeomorphisms of the surface. However, the famous Gauss–Bonnet theorem for closed surfaces states that the integral of the Gaussian curvature K over the entire surface S is determined by the Euler characteristic:
∫S K dA = 2π χ(S).
This result exemplifies the deep relationship between the geometry and topology of surfaces (and, to a lesser extent, higher-dimensional manifolds).
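As a standard worked example (the round sphere, with the radius r introduced only for illustration): a sphere of radius r has constant Gaussian curvature K = 1/r^2 and area 4πr^2, so

    \int_S K \, dA = \frac{1}{r^2} \cdot 4\pi r^2 = 4\pi = 2\pi \cdot 2 = 2\pi \chi(S),

in agreement with the Euler characteristic 2 of the sphere, independently of the radius.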
Another way in which surfaces arise in geometry is by passing into the complex domain. A complex one-manifold is a smooth oriented surface, also called a Riemann surface. Any complex nonsingular algebraic curve viewed as a complex manifold is a Riemann surface. In fact, every compact orientable surface is realizable as a Riemann surface. Thus compact Riemann surfaces are characterized topologically by their genus: 0, 1, 2, .... On the other hand, the genus does not characterize the complex structure. For example, there are uncountably many non-isomorphic compact Riemann surfaces of genus 1 (the elliptic curves).
Complex structures on a closed oriented surface correspond to conformal equivalence classes of Riemannian metrics on the surface. One version of the uniformization theorem (due to Poincaré) states that any Riemannian metric on an oriented, closed surface is conformally equivalent to an essentially unique metric of constant curvature. This provides a starting point for one of the approaches to Teichmüller theory, which provides a finer classification of Riemann surfaces than the topological one by Euler characteristic alone.
A complex surface is a complex two-manifold and thus a real four-manifold; it is not a surface in the sense of this article. Neither are algebraic curves defined over fields other than the complex numbers, nor are algebraic surfaces defined over fields other than the real numbers.
| Mathematics | Topology | null |
27873 | https://en.wikipedia.org/wiki/Surjective%20function | Surjective function | In mathematics, a surjective function (also known as surjection, or onto function) is a function f such that, for every element y of the function's codomain, there exists at least one element x in the function's domain such that f(x) = y. In other words, for a function f : X → Y, the codomain Y is the image of the function's domain X. It is not required that x be unique; the function f may map one or more elements of X to the same element of Y.
The term surjective and the related terms injective and bijective were introduced by Nicolas Bourbaki, a group of mainly French 20th-century mathematicians who, under this pseudonym, wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. The French word sur means over or above, and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain.
Any function induces a surjection by restricting its codomain to the image of its domain. Every surjective function has a right inverse assuming the axiom of choice, and every function with a right inverse is necessarily a surjection. The composition of surjective functions is always surjective. Any function can be decomposed into a surjection and an injection.
Definition
A surjective function is a function whose image is equal to its codomain. Equivalently, a function f with domain X and codomain Y is surjective if for every y in Y there exists at least one x in X with f(x) = y. Surjections are sometimes denoted by a two-headed rightwards arrow (↠), as in f : X ↠ Y.
Symbolically,
If f : X → Y, then f is said to be surjective if
∀y ∈ Y, ∃x ∈ X, f(x) = y.
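For functions between finite sets this definition can be checked directly by computing the image. The following Python sketch is a minimal illustration (the dict-as-function convention and all names are ours, not part of the article):

    def is_surjective(f, codomain):
        """Return True if every element of codomain is the image of some input.

        Here f is a dict listing f(x) for each x in the domain, so the image of f
        is simply the set of its values.
        """
        return set(codomain) <= set(f.values())

    # f : {1, 2, 3, 4} -> {'a', 'b'} given by its table of values.
    f = {1: 'a', 2: 'b', 3: 'a', 4: 'a'}
    print(is_surjective(f, {'a', 'b'}))       # True: both 'a' and 'b' are hit
    print(is_surjective(f, {'a', 'b', 'c'}))  # False: nothing is mapped to 'c'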
Examples
For any set X, the identity function idX on X is surjective.
The function f : Z → {0, 1} defined by f(n) = n mod 2 (that is, even integers are mapped to 0 and odd integers to 1) is surjective.
The function f : R → R defined by f(x) = 2x + 1 is surjective (and even bijective), because for every real number y, we have an x such that f(x) = y: such an appropriate x is (y − 1)/2.
The function f : R → R defined by f(x) = x3 − 3x is surjective, because the pre-image of any real number y is the solution set of the cubic polynomial equation x3 − 3x − y = 0, and every cubic polynomial with real coefficients has at least one real root. However, this function is not injective (and hence not bijective), since, for example, the pre-image of y = 2 is {x = −1, x = 2}. (In fact, the pre-image of this function for every y, −2 ≤ y ≤ 2 has more than one element.)
The function g : R → R defined by g(x) = x2 is not surjective, since there is no real number x such that x2 = −1. However, the function g : R → [0, ∞) defined by g(x) = x2 (with the restricted codomain) is surjective, since for every y in the nonnegative real codomain Y, there is at least one x in the real domain X such that x2 = y.
The natural logarithm function ln : (0, +∞) → R is surjective and even bijective (it maps the set of positive real numbers onto the set of all real numbers). Its inverse, the exponential function, if defined with the set of real numbers as the domain and the codomain, is not surjective (as its range is the set of positive real numbers).
The matrix exponential is not surjective when seen as a map from the space of all n×n matrices to itself. It is, however, usually defined as a map from the space of all n×n matrices to the general linear group of degree n (that is, the group of all n×n invertible matrices). Under this definition, the matrix exponential is surjective for complex matrices, although still not surjective for real matrices.
The projection from a cartesian product to one of its factors is surjective, unless the other factor is empty.
In a 3D video game, vectors are projected onto a 2D flat screen by means of a surjective function.
Properties
A function is bijective if and only if it is both surjective and injective.
If (as is often done) a function is identified with its graph, then surjectivity is not a property of the function itself, but rather a property of the mapping, that is, the function together with its codomain. Unlike injectivity, surjectivity cannot be read off of the graph of the function alone.
Surjections as right invertible functions
The function g : Y → X is said to be a right inverse of the function f : X → Y if f(g(y)) = y for every y in Y (g can be undone by f). In other words, g is a right inverse of f if the composition f o g of g and f in that order is the identity function on the domain Y of g. The function g need not be a complete inverse of f because the composition in the other order, g o f, may not be the identity function on the domain X of f. In other words, f can undo or "reverse" g, but cannot necessarily be reversed by it.
Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a right inverse is equivalent to the axiom of choice.
If f : X → Y is surjective and B is a subset of Y, then f(f−1(B)) = B. Thus, B can be recovered from its preimage f−1(B).
For example, in the first illustration in the gallery, there is some function g such that g(C) = 4. There is also some function f such that f(4) = C. It doesn't matter that g is not unique (it would also work if g(C) equals 3); it only matters that f "reverses" g.
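For finite sets a right inverse can be produced exactly as in this discussion: for each y in the codomain, pick any one of its preimages. A small Python sketch under the same dict-as-function convention as earlier (helper name ours; which preimage gets picked is deliberately arbitrary):

    def right_inverse(f, codomain):
        """Return g with f[g[y]] == y for every y in codomain, given surjective f."""
        g = {}
        for x, y in f.items():
            g.setdefault(y, x)  # remember one (arbitrary) preimage of y
        missing = set(codomain) - set(g)
        if missing:
            raise ValueError(f"not surjective: nothing maps to {missing}")
        return {y: g[y] for y in codomain}

    f = {1: 'a', 2: 'b', 3: 'a'}
    g = right_inverse(f, {'a', 'b'})
    print(all(f[g[y]] == y for y in {'a', 'b'}))  # True: f "reverses" g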
Surjections as epimorphisms
A function f : X → Y is surjective if and only if it is right-cancellative: given any functions g, h : Y → Z, whenever g o f = h o f, then g = h. This property is formulated in terms of functions and their composition and can be generalized to the more general notion of the morphisms of a category and their composition. Right-cancellative morphisms are called epimorphisms. Specifically, surjective functions are precisely the epimorphisms in the category of sets. The prefix epi is derived from the Greek preposition ἐπί meaning over, above, on.
Any morphism with a right inverse is an epimorphism, but the converse is not true in general. A right inverse g of a morphism f is called a section of f. A morphism with a right inverse is called a split epimorphism.
Surjections as binary relations
Any function with domain X and codomain Y can be seen as a left-total and right-unique binary relation between X and Y by identifying it with its function graph. A surjective function with domain X and codomain Y is then a binary relation between X and Y that is right-unique and both left-total and right-total.
Cardinality of the domain of a surjection
The cardinality of the domain of a surjective function is greater than or equal to the cardinality of its codomain: If f : X → Y is a surjective function, then X has at least as many elements as Y, in the sense of cardinal numbers. (The proof appeals to the axiom of choice to show that a function g : Y → X satisfying f(g(y)) = y for all y in Y exists. g is easily seen to be injective, thus the formal definition of |Y| ≤ |X| is satisfied.)
Specifically, if both X and Y are finite with the same number of elements, then f : X → Y is surjective if and only if f is injective.
Given two sets X and Y, the notation X ≤* Y is used to say that either X is empty or that there is a surjection from Y onto X. Using the axiom of choice one can show that X ≤* Y and Y ≤* X together imply that |Y| = |X|, a variant of the Schröder–Bernstein theorem.
Composition and decomposition
The composition of surjective functions is always surjective: If f and g are both surjective, and the codomain of g is equal to the domain of f, then f o g is surjective. Conversely, if f o g is surjective, then f is surjective (but g, the function applied first, need not be). These properties generalize from surjections in the category of sets to any epimorphisms in any category.
Any function can be decomposed into a surjection and an injection: For any function h : X → Z there exist a surjection f : X → Y and an injection g : Y → Z such that h = g o f. To see this, define Y to be the set of preimages h−1(z) where z is in h(X). These preimages are disjoint and partition X. Then f carries each x to the element of Y which contains it, and g carries each element of Y to the point in Z to which h sends its points. Then f is surjective since it is a projection map, and g is injective by definition.
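For finite sets this construction can be carried out literally: Y is the set of non-empty preimages, f sends each x to the preimage containing it, and g sends that preimage to the common value of h on it. A Python sketch (the names and the frozenset encoding of preimages are our choices):

    def decompose(h):
        """Split a dict-function h into a surjection f and an injection g with h = g o f."""
        preimages = {}                                          # z in image of h  ->  h^{-1}(z)
        for x, z in h.items():
            preimages.setdefault(z, set()).add(x)
        f = {x: frozenset(preimages[z]) for x, z in h.items()}  # surjection onto the set of preimages
        g = {frozenset(xs): z for z, xs in preimages.items()}   # injection into the codomain of h
        return f, g

    h = {1: 'a', 2: 'a', 3: 'b'}
    f, g = decompose(h)
    print(all(g[f[x]] == h[x] for x in h))  # True: h = g o f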
Induced surjection and induced bijection
Any function induces a surjection by restricting its codomain to its range. Any surjective function induces a bijection defined on a quotient of its domain by collapsing all arguments mapping to a given fixed image. More precisely, every surjection can be factored as a projection followed by a bijection as follows. Let A/~ be the equivalence classes of A under the following equivalence relation: x ~ y if and only if f(x) = f(y). Equivalently, A/~ is the set of all preimages under f. Let P(~) : A → A/~ be the projection map which sends each x in A to its equivalence class [x]~, and let fP : A/~ → B be the well-defined function given by fP([x]~) = f(x). Then f = fP o P(~).
The set of surjections
Given fixed finite sets A and B, one can form the set of surjections from A onto B. The cardinality of this set is one of the twelve aspects of Rota's Twelvefold way, and is given by |B|! S(|A|, |B|), where S(|A|, |B|) denotes a Stirling number of the second kind.
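Equivalently, the count can be computed by inclusion-exclusion over the codomain elements that are missed, giving the sum over i of (−1)^i C(k, i) (k − i)^n surjections from an n-element set onto a k-element set; this equals k! times the Stirling number S(n, k). A short Python sketch of this standard formula (the function name is ours):

    from math import comb

    def count_surjections(n, k):
        """Number of surjections from an n-element set onto a k-element set."""
        # Inclusion-exclusion: subtract functions missing at least one codomain element.
        return sum((-1) ** i * comb(k, i) * (k - i) ** n for i in range(k + 1))

    print(count_surjections(3, 2))  # 6  (= 2! * S(3, 2), with S(3, 2) = 3)
    print(count_surjections(4, 3))  # 36 (= 3! * S(4, 3), with S(4, 3) = 6)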
Gallery
| Mathematics | Functions: General | null |
27919 | https://en.wikipedia.org/wiki/Sociobiology | Sociobiology | Sociobiology is a field of biology that aims to explain social behavior in terms of evolution. It draws from disciplines including psychology, ethology, anthropology, evolution, zoology, archaeology, and population genetics. Within the study of human societies, sociobiology is closely allied to evolutionary anthropology, human behavioral ecology, evolutionary psychology, and sociology.
Sociobiology investigates social behaviors such as mating patterns, territorial fights, pack hunting, and the hive society of social insects. It argues that just as selection pressure led to animals evolving useful ways of interacting with the natural environment, so also it led to the genetic evolution of advantageous social behavior.
While the term "sociobiology" originated at least as early as the 1940s, the concept did not gain major recognition until the publication of E. O. Wilson's book Sociobiology: The New Synthesis in 1975. The new field quickly became the subject of controversy. Critics, led by Richard Lewontin and Stephen Jay Gould, argued that genes played a role in human behavior, but that traits such as aggressiveness could be explained by social environment rather than by biology. Sociobiologists responded by pointing to the complex relationship between nature and nurture. Among sociobiologists, the controversy over the weight given to different levels of selection was settled between D.S. Wilson and E.O. Wilson in 2007.
Definition
E. O. Wilson defined sociobiology as "the extension of population biology and evolutionary theory to social organization".
Sociobiology is based on the premise that some behaviors (social and individual) are at least partly inherited and can be affected by natural selection. It begins with the idea that behaviors have evolved over time, similar to the way that physical traits are thought to have evolved. It predicts that animals will act in ways that have proven to be evolutionarily successful over time. This can, among other things, result in the formation of complex social processes conducive to evolutionary fitness.
The discipline seeks to explain behavior as a product of natural selection. Behavior is therefore seen as an effort to preserve one's genes in the population. Inherent in sociobiological reasoning is the idea that certain genes or gene combinations that influence particular behavioral traits can be inherited from generation to generation.
For example, newly dominant male lions often kill cubs in the pride that they did not sire. This behavior is adaptive because killing the cubs eliminates competition for the new male's own offspring and causes the nursing females to come into heat faster, thus allowing more of his genes to enter the population. Sociobiologists would view this instinctual cub-killing behavior as being inherited through the genes of successfully reproducing male lions, whereas non-killing behavior may have died out as those lions were less successful in reproducing.
History
The philosopher of biology Daniel Dennett suggested that the political philosopher Thomas Hobbes was the first proto-sociobiologist, arguing that in his 1651 book Leviathan Hobbes had explained the origins of morals in human society from an amoral sociobiological perspective.
The geneticist of animal behavior John Paul Scott coined the word sociobiology at a 1948 conference on genetics and social behavior, which called for a conjoint development of field and laboratory studies in animal behavior research. With John Paul Scott's organizational efforts, a "Section of Animal Behavior and Sociobiology" of the Ecological Society of America was created in 1956, which became a Division of Animal Behavior of the American Society of Zoology in 1958. In 1956, E. O. Wilson came in contact with this emerging sociobiology through his PhD student Stuart A. Altmann, who had been in close relation with the participants to the 1948 conference. Altmann developed his own brand of sociobiology to study the social behavior of rhesus macaques, using statistics, and was hired as a "sociobiologist" at the Yerkes Regional Primate Research Center in 1965.
Wilson's sociobiology is different from John Paul Scott's or Altmann's, insofar as he drew on mathematical models of social behavior centered on the maximization of genetic fitness by W. D. Hamilton, Robert Trivers, John Maynard Smith, and George R. Price. The three sociobiologies of Scott, Altmann, and Wilson have in common that they place naturalist studies at the core of research on animal social behavior and that they draw alliances with emerging research methodologies, at a time when "biology in the field" was in danger of being made old-fashioned by "modern" practices of science (laboratory studies, mathematical biology, molecular biology).
Once a specialist term, "sociobiology" became widely known in 1975 when Wilson published his book Sociobiology: The New Synthesis, which sparked an intense controversy. Since then "sociobiology" has largely been equated with Wilson's vision. The book pioneered and popularized the attempt to explain the evolutionary mechanics behind social behaviors such as altruism, aggression, and nurturance, primarily in ants (Wilson's own research specialty) and other Hymenoptera, but also in other animals. However, the influence of evolution on behavior has been of interest to biologists and philosophers since soon after the discovery of evolution itself. Peter Kropotkin's Mutual Aid: A Factor of Evolution, written in the early 1890s, is a popular example. The final chapter of the book is devoted to sociobiological explanations of human behavior, and Wilson later wrote a Pulitzer Prize winning book, On Human Nature, that addressed human behavior specifically.
Edward H. Hagen writes in The Handbook of Evolutionary Psychology that sociobiology is, despite the public controversy regarding the applications to humans, "one of the scientific triumphs of the twentieth century." "Sociobiology is now part of the core research and curriculum of virtually all biology departments, and it is a foundation of the work of almost all field biologists." Sociobiological research on nonhuman organisms has increased dramatically and continuously in the world's top scientific journals such as Nature and Science. The more general term behavioral ecology is commonly substituted for the term sociobiology in order to avoid the public controversy.
Theory
Sociobiologists maintain that human behavior, as well as nonhuman animal behavior, can be partly explained as the outcome of natural selection. They contend that in order to fully understand behavior, it must be analyzed in terms of evolutionary considerations.
Natural selection is fundamental to evolutionary theory. Variants of hereditary traits which increase an organism's ability to survive and reproduce will be more greatly represented in subsequent generations, i.e., they will be "selected for". Thus, inherited behavioral mechanisms that allowed an organism a greater chance of surviving and/or reproducing in the past are more likely to survive in present organisms. That inherited adaptive behaviors are present in nonhuman animal species has been multiply demonstrated by biologists, and it has become a foundation of evolutionary biology. However, there is continued resistance by some researchers over the application of evolutionary models to humans, particularly from within the social sciences, where culture has long been assumed to be the predominant driver of behavior.
Sociobiology is based upon two fundamental premises:
Certain behavioral traits are inherited,
Inherited behavioral traits have been honed by natural selection. Therefore, these traits were probably "adaptive" in the environment in which the species evolved.
Sociobiology uses Nikolaas Tinbergen's four categories of questions and explanations of animal behavior. Two categories are at the species level; two, at the individual level. The species-level categories (often called "ultimate explanations") are
the function (i.e., adaptation) that a behavior serves and
the evolutionary process (i.e., phylogeny) that resulted in this functionality.
The individual-level categories (often called "proximate explanations") are
the development of the individual (i.e., ontogeny) and
the proximate mechanism (e.g., brain anatomy and hormones).
Sociobiologists are interested in how behavior can be explained logically as a result of selective pressures in the history of a species. Thus, they are often interested in instinctive, or intuitive behavior, and in explaining the similarities, rather than the differences, between cultures. For example, mothers within many species of mammals – including humans – are very protective of their offspring. Sociobiologists reason that this protective behavior likely evolved over time because it helped the offspring of the individuals which had the characteristic to survive. This parental protection would increase in frequency in the population. The social behavior is believed to have evolved in a fashion similar to other types of nonbehavioral adaptations, such as a coat of fur, or the sense of smell.
Individual genetic advantage fails to explain certain social behaviors as a result of gene-centred selection. E.O. Wilson argued that evolution may also act upon groups. The mechanisms responsible for group selection employ paradigms and population statistics borrowed from evolutionary game theory. Altruism is defined as "a concern for the welfare of others". If altruism is genetically determined, then altruistic individuals must reproduce their own altruistic genetic traits for altruism to survive, but when altruists lavish their resources on non-altruists at the expense of their own kind, the altruists tend to die out and the others tend to increase. An extreme example is a soldier losing his life trying to help a fellow soldier. This example raises the question of how altruistic genes can be passed on if this soldier dies without having any children.
Within sociobiology, a social behavior is first explained as a sociobiological hypothesis by finding an evolutionarily stable strategy that matches the observed behavior. Stability of a strategy can be difficult to prove, but usually, it will predict gene frequencies. The hypothesis can be supported by establishing a correlation between the gene frequencies predicted by the strategy, and those expressed in a population.
Altruism between social insects and littermates has been explained in such a way. Altruistic behavior, behavior that increases the reproductive fitness of others at the apparent expense of the altruist, in some animals has been correlated to the degree of genome shared between altruistic individuals. A quantitative description of infanticide by male harem-mating animals when the alpha male is displaced as well as rodent female infanticide and fetal resorption are active areas of study. In general, females with more bearing opportunities may value offspring less, and may also arrange bearing opportunities to maximize the food and protection from mates.
An important concept in sociobiology is that temperament traits exist in an ecological balance. Just as an expansion of a sheep population might encourage the expansion of a wolf population, an expansion of altruistic traits within a gene pool may also encourage increasing numbers of individuals with dependent traits.
Studies of human behavior genetics have generally found behavioral traits such as creativity, extroversion, aggressiveness, and IQ have high heritability. The researchers who carry out those studies are careful to point out that heritability does not constrain the influence that environmental or cultural factors may have on those traits.
Various theorists have argued that in some environments criminal behavior might be adaptive. The evolutionary neuroandrogenic (ENA) theory, by sociologist/criminologist Lee Ellis, posits that female sexual selection has led to increased competitive behavior among men, sometimes resulting in criminality. In another theory, Mark van Vugt argues that a history of intergroup conflict over resources between men has led to differences in violence and aggression between men and women. The novelist Elias Canetti also has noted applications of sociobiological theory to cultural practices such as slavery and autocracy.
Support for premise
Genetic mouse mutants illustrate the power that genes exert on behavior. For example, the transcription factor FEV (aka Pet1), through its role in maintaining the serotonergic system in the brain, is required for normal aggressive and anxiety-like behavior. Thus, when FEV is genetically deleted from the mouse genome, male mice will instantly attack other males, whereas their wild-type counterparts take significantly longer to initiate violent behavior. In addition, FEV has been shown to be required for correct maternal behavior in mice, such that offspring of mothers without the FEV factor do not survive unless cross-fostered to other wild-type female mice.
A genetic basis for instinctive behavioral traits among non-human species, such as in the above example, is commonly accepted among many biologists; however, attempting to use a genetic basis to explain complex behaviors in human societies has remained extremely controversial.
Reception
Steven Pinker argues that critics have been overly swayed by politics and a fear of biological determinism, accusing among others Stephen Jay Gould and Richard Lewontin of being "radical scientists", whose stance on human nature is influenced by politics rather than science, while Lewontin, Steven Rose and Leon Kamin, who drew a distinction between the politics and history of an idea and its scientific validity, argue that sociobiology fails on scientific grounds. Gould grouped sociobiology with eugenics, criticizing both in his book The Mismeasure of Man. When Napoleon Chagnon scheduled sessions on sociobiology at the 1976 American Anthropological Association convention, other scholars attempted to cancel them with what Chagnon later described as "Impassioned accusations of racism, fascism and Nazism"; Margaret Mead's support caused the sessions to occur as scheduled.
Noam Chomsky has expressed views on sociobiology on several occasions. During a 1976 meeting of the Sociobiology Study Group, as reported by Ullica Segerstråle, Chomsky argued for the importance of a sociobiologically informed notion of human nature. Chomsky argued that human beings are biological organisms and ought to be studied as such, with his criticism of the "blank slate" doctrine in the social sciences (which would inspire a great deal of Steven Pinker's and others' work in evolutionary psychology), in his 1975 Reflections on Language. Chomsky further hinted at the possible reconciliation of his anarchist political views and sociobiology in a discussion of Peter Kropotkin's Mutual Aid: A Factor of Evolution, which focused more on altruism than aggression, suggesting that anarchist societies were feasible because of an innate human tendency to cooperate.
Wilson has claimed that he had never meant to imply what ought to be, only what is the case. However, some critics have argued that the language of sociobiology readily slips from "is" to "ought", an instance of the naturalistic fallacy. Pinker has argued that opposition to stances considered anti-social, such as ethnic nepotism, is based on moral assumptions, meaning that such opposition is not falsifiable by scientific advances. The history of this debate, and of others related to it, has been covered in detail by several authors.
| Biology and health sciences | Basics_4 | Biology |
27969 | https://en.wikipedia.org/wiki/Structural%20isomer | Structural isomer | In chemistry, a structural isomer (or constitutional isomer in the IUPAC nomenclature) of a compound is another compound whose molecule has the same number of atoms of each element, but with logically distinct bonds between them. The term metamer was formerly used for the same concept.
For example, butanol H3C–(CH2)3–OH, methyl propyl ether H3C–O–(CH2)2–CH3, and diethyl ether H3C–CH2–O–CH2–CH3 have the same molecular formula C4H10O but are three distinct structural isomers.
The concept applies also to polyatomic ions with the same total charge. A classical example is the cyanate ion OCN− and the fulminate ion CNO−. It is also extended to ionic compounds, so that (for example) ammonium cyanate [NH4]+[OCN]− and urea CO(NH2)2 are considered structural isomers, and so are methylammonium formate [CH3NH3]+[HCO2]− and ammonium acetate [NH4]+[CH3CO2]−.
Structural isomerism is the most radical type of isomerism. It is opposed to stereoisomerism, in which the atoms and bonding scheme are the same, but only the relative spatial arrangement of the atoms is different. Examples of the latter are the enantiomers, whose molecules are mirror images of each other, and the cis and trans versions of 2-butene.
Among the structural isomers, one can distinguish several classes including skeletal isomers, positional isomers (or regioisomers), functional isomers, tautomers, and structural isotopomers.
Skeletal isomerism
A skeletal isomer of a compound is a structural isomer that differs from it in the atoms and bonds that are considered to comprise the "skeleton" of the molecule. For organic compounds, such as alkanes, that usually means the carbon atoms and the bonds between them.
For example, there are three skeletal isomers of pentane: n-pentane (often called simply "pentane"), isopentane (2-methylbutane) and neopentane (dimethylpropane).
If the skeleton is acyclic, as in the above example, one may use the term chain isomerism.
Position isomerism (regioisomerism)
Position isomers (also positional isomers or regioisomers) are structural isomers that can be viewed as differing only on the position of a functional group, substituent, or some other feature on the same "parent" structure.
For example, replacing one of the 12 hydrogen atoms –H by a hydroxyl group –OH on the n-pentane parent molecule can give any of three different position isomers: 1-pentanol, 2-pentanol, and 3-pentanol.
Another example of regioisomers are α-linolenic and γ-linolenic acids, both octadecatrienoic acids, each of which has three double bonds, but on different positions along the chain.
Functional isomerism
Functional isomers are structural isomers which have different functional groups, resulting in significantly different chemical and physical properties.
An example is the pair propanal H3C–CH2–C(=O)–H and acetone H3C–C(=O)–CH3: the first has a –C(=O)H functional group, which makes it an aldehyde, whereas the second has a C–C(=O)–C group, which makes it a ketone.
Another example is the pair ethanol H3C–CH2–OH (an alcohol) and dimethyl ether H3C–O–CH3 (an ether). In contrast, 1-propanol and 2-propanol are structural isomers, but not functional isomers, since they have the same significant functional group (the hydroxyl –OH) and are both alcohols.
Besides the different chemistry, functional isomers typically have very different infrared spectra. The infrared spectrum is largely determined by the vibration modes of the molecule, and functional groups like hydroxyl and esters have very different vibration modes. Thus 1-propanol and 2-propanol have relatively similar infrared spectra because of the hydroxyl group, which are fairly different from that of methyl ethyl ether.
Structural isotopomers
In chemistry, one usually ignores distinctions between isotopes of the same element. However, in some situations (for instance in Raman, NMR, or microwave spectroscopy) one may treat different isotopes of the same element as different elements. In the second case, two molecules with the same number of atoms of each isotope but distinct bonding schemes are said to be structural isotopomers.
Thus, for example, ethene would have no structural isomers under the first interpretation; but replacing two of the hydrogen atoms (1H) by deuterium atoms (2H) may yield any of two structural isotopomers (1,1-dideuteroethene and 1,2-dideuteroethene), if both carbon atoms are the same isotope. If, in addition, the two carbons are different isotopes (say, 12C and 13C), there would be three distinct structural isotopomers, since 1-13C-1,1-dideuteroethene would be different from 1-13C-2,2-dideuteroethene. And, in both cases, the 1,2-dideutero structural isotopomer would occur as two stereoisotopomers, cis and trans.
Structural equivalence and symmetry
Structural equivalence
Two molecules (including polyatomic ions) A and B have the same structure if each atom of A can be paired with an atom of B of the same element, in a one-to-one way, so that for every bond in A there is a bond in B, of the same type, between corresponding atoms; and vice versa. This requirement applies also to complex bonds that involve three or more atoms, such as the delocalized bonding in the benzene molecule and other aromatic compounds.
Depending on the context, one may require that each atom be paired with an atom of the same isotope, not just of the same element.
Two molecules then can be said to be structural isomers (or, if isotopes matter, structural isotopomers) if they have the same molecular formula but do not have the same structure.
Structural symmetry and equivalent atoms
Structural symmetry of a molecule can be defined mathematically as a permutation of the atoms that exchanges at least two atoms but does not change the molecule's structure. Two atoms then can be said to be structurally equivalent if there is a structural symmetry that takes one to the other.
Thus, for example, all four hydrogen atoms of methane are structurally equivalent, because any permutation of them will preserve all the bonds of the molecule.
Likewise, all six hydrogens of ethane (H3C–CH3) are structurally equivalent to each other, as are the two carbons; because any hydrogen can be switched with any other, either by a permutation that swaps just those two atoms, or by a permutation that swaps the two carbons and each hydrogen in one methyl group with a different hydrogen on the other methyl. Either operation preserves the structure of the molecule. That is the case also for the hydrogen atoms in cyclopentane, allene, 2-butyne, hexamethylenetetramine, prismane, cubane, dodecahedrane, etc.
On the other hand, the hydrogen atoms of propane are not all structurally equivalent. The six hydrogens attached to the first and third carbons are equivalent, as in ethane, and the two attached to the middle carbon are equivalent to each other; but there is no equivalence between these two equivalence classes.
Symmetry and positional isomerism
Structural equivalences between atoms of a parent molecule reduce the number of positional isomers that can be obtained by replacing those atoms with a different element or group. Thus, for example, the structural equivalence between the six hydrogens of ethane means that there is just one structural isomer of ethanol H3C–CH2–OH, not 6. The eight hydrogens of propane are partitioned into two structural equivalence classes (the six on the methyl groups, and the two on the central carbon); therefore there are only two positional isomers of propanol (1-propanol and 2-propanol). Likewise there are only two positional isomers of butanol, and three of pentanol or hexanol.
Symmetry breaking by substitutions
Once a substitution is made on a parent molecule, its structural symmetry is usually reduced, meaning that atoms that were formerly equivalent may no longer be so. Thus substitution of two or more equivalent atoms by the same element may generate more than one positional isomer.
The classical example is the derivatives of benzene. Its six hydrogens are all structurally equivalent, and so are the six carbons; because the structure is not changed if the atoms are permuted in ways that correspond to flipping the molecule over or rotating it by multiples of 60 degrees. Therefore, replacing any hydrogen by chlorine yields only one chlorobenzene. However, with that replacement, the atom permutations that moved that hydrogen are no longer valid. Only one permutation remains, that corresponds to flipping the molecule over while keeping the chlorine fixed. The five remaining hydrogens then fall into three different equivalence classes: the one opposite to the chlorine is a class by itself (called the para position), the two closest to the chlorine form another class (ortho), and the remaining two are the third class (meta). Thus a second substitution of hydrogen by chlorine can yield three positional isomers: 1,2- or ortho-, 1,3- or meta-, and 1,4- or para-dichlorobenzene.
For the same reason, there is only one phenol (hydroxybenzene), but three benzenediols; and one toluene (methylbenzene), but three cresols (methylphenols), and three xylenes.
On the other hand, the second replacement (by the same substituent) may preserve or even increase the symmetry of the molecule, and thus may preserve or reduce the number of equivalence classes for the next replacement. Thus, the four remaining hydrogens in meta-dichlorobenzene still fall into three classes, while those of ortho- fall into two, and those of para- are all equivalent again. Still, some of these 3 + 2 + 1 = 6 substitutions end up yielding the same structure, so there are only three structurally distinct trichlorobenzenes: 1,2,3-, 1,2,4-, and 1,3,5-.
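These counts can be verified mechanically by listing substitution patterns on the six ring positions and identifying patterns related by a rotation or reflection of the hexagon. The Python sketch below is only an illustration (the modelling of the ring as positions 0 to 5 and the helper names are ours):

    from itertools import combinations

    # The 12 symmetries of a hexagon, acting on ring positions 0..5:
    ROTATIONS = [lambda p, s=s: (p + s) % 6 for s in range(6)]
    REFLECTIONS = [lambda p, s=s: (s - p) % 6 for s in range(6)]
    SYMMETRIES = ROTATIONS + REFLECTIONS

    def canonical(positions):
        """Smallest image of a substitution pattern under all hexagon symmetries."""
        return min(tuple(sorted(g(p) for p in positions)) for g in SYMMETRIES)

    def count_patterns(n_substituents):
        """Distinct ways to place n identical substituents on the benzene ring."""
        return len({canonical(c) for c in combinations(range(6), n_substituents)})

    print(count_patterns(2))  # 3: ortho-, meta- and para-dichlorobenzene
    print(count_patterns(3))  # 3: 1,2,3-, 1,2,4- and 1,3,5-trichlorobenzene

Running the same orbit count over labelled patterns (one hydroxyl plus two methyls) gives six classes, in agreement with the xylenol count discussed next.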
If the substituents at each step are different, there will usually be more structural isomers. Xylenol, which is benzene with one hydroxyl substituent and two methyl substituents, has a total of 6 isomers: 2,3-, 2,4-, 2,5-, 2,6-, 3,4-, and 3,5-xylenol.
Isomer enumeration and counting
Enumerating or counting structural isomers in general is a difficult problem, since one must take into account several bond types (including delocalized ones), cyclic structures, and structures that cannot possibly be realized due to valence or geometric constraints, and non-separable tautomers.
For example, there are nine structural isomers with molecular formula C3H6O having different bond connectivities. Seven of them are air-stable at room temperature.
Two structural isomers are the enol tautomers of the carbonyl isomers (propionaldehyde and acetone), but these are not stable.
| Physical sciences | Substance | Chemistry |
27970 | https://en.wikipedia.org/wiki/Stereoisomerism | Stereoisomerism | In stereochemistry, stereoisomerism, or spatial isomerism, is a form of isomerism in which molecules have the same molecular formula and sequence of bonded atoms (constitution), but differ in the three-dimensional orientations of their atoms in space. This contrasts with structural isomers, which share the same molecular formula, but the bond connections or their order differs. By definition, molecules that are stereoisomers of each other represent the same structural isomer.
Enantiomers
Enantiomers, also known as optical isomers, are two stereoisomers that are related to each other by a reflection: they are mirror images of each other that are non-superposable. Human hands are a macroscopic analog of this. Every stereogenic center in one has the opposite configuration in the other. Two compounds that are enantiomers of each other have the same physical properties, except for the direction in which they rotate polarized light and how they interact with different enantiomers of other compounds. As a result, different enantiomers of a compound may have substantially different biological effects. Pure enantiomers also exhibit the phenomenon of optical activity and can be separated only with the use of a chiral agent. In nature, only one enantiomer of most chiral biological compounds, such as amino acids (except glycine, which is achiral), is present. An optically active compound shows two forms: D-(+) form and L-(−) form.
Diastereomers
Diastereomers are stereoisomers not related through a reflection operation. They are not mirror images of each other. These include meso compounds, cis–trans isomers, E-Z isomers, and non-enantiomeric optical isomers. Diastereomers seldom have the same physical properties. In the example shown below, the meso form of tartaric acid forms a diastereomeric pair with both levo- and dextro-tartaric acids, which form an enantiomeric pair.
The D- and L- labeling of the isomers above is not the same as the d- and l- labeling more commonly seen, explaining why these may appear reversed to those familiar with only the latter naming convention.
A Fischer projection can be used to differentiate between L- and D- molecules. For instance, by definition, in a Fischer projection the penultimate carbon of a D-sugar is depicted with hydrogen on the left and hydroxyl on the right, whereas an L-sugar is shown with the hydrogen on the right and the hydroxyl on the left.
The other labeling refers to optical rotation: when looking toward the source of light, the rotation of the plane of polarization may be either to the right (dextrorotary, d-rotary, represented by (+), clockwise) or to the left (levorotary, l-rotary, represented by (−), counter-clockwise), depending on which stereoisomer is dominant. For instance, sucrose and camphor are d-rotary whereas cholesterol is l-rotary.
Cis–trans and E–Z isomerism
Stereoisomerism about double bonds arises because rotation about the double bond is restricted, keeping the substituents fixed relative to each other. If the two substituents on at least one end of a double bond are the same, then there is no stereoisomer and the double bond is not a stereocenter, e.g. propene, CH3CH=CH2 where the two substituents at one end are both H.
Traditionally, double bond stereochemistry was described as either cis (Latin, on this side) or trans (Latin, across), in reference to the relative position of substituents on either side of a double bond. A simple example of cis–trans isomerism is the 1,2-disubstituted ethenes, like the dichloroethene (C2H2Cl2) isomers shown below.
Molecule I is cis-1,2-dichloroethene and molecule II is trans-1,2-dichloroethene. Due to occasional ambiguity, IUPAC adopted a more rigorous system wherein the substituents at each end of the double bond are assigned priority based on their atomic number. If the high-priority substituents are on the same side of the bond, it is assigned Z (Ger. zusammen, together). If they are on opposite sides, it is E (Ger. entgegen, opposite). Since chlorine has a larger atomic number than hydrogen, it is the highest-priority group. Using this notation to name the above pictured molecules, molecule I is (Z)-1,2-dichloroethene and molecule II is (E)-1,2-dichloroethene. It is not the case that Z and cis, or E and trans, are always interchangeable. Consider the following fluoromethylpentene:
The proper name for this molecule is either trans-2-fluoro-3-methylpent-2-ene because the alkyl groups that form the backbone chain (i.e., methyl and ethyl) reside across the double bond from each other, or (Z)-2-fluoro-3-methylpent-2-ene because the highest-priority groups on each side of the double bond are on the same side of the double bond. Fluoro is the highest-priority group on the left side of the double bond, and ethyl is the highest-priority group on the right side of the molecule.
The terms cis and trans are also used to describe the relative position of two substituents on a ring; cis if on the same side, otherwise trans.
Conformers
Conformational isomerism is a form of isomerism that describes the phenomenon of molecules with the same structural formula but with different shapes due to rotations about one or more bonds. Different conformations can have different energies, can usually interconvert, and are very rarely isolatable. For example, cyclohexane (an essential intermediate in the synthesis of nylon-6,6) exists in a variety of conformations, including a chair conformation, in which four of the carbon atoms form the "seat" of the chair, one carbon atom is the "back" of the chair, and one carbon atom is the "foot rest", and a boat conformation. The boat conformation represents the energy maximum on a conformational itinerary between the two equivalent chair forms; however, it does not represent the transition state for this process, because lower-energy pathways exist. The conformational inversion of substituted cyclohexanes is a very rapid process at room temperature, with a half-life of 0.00001 seconds.
There are some molecules that can be isolated in several conformations, due to the large energy barriers between different conformations. 2,2',6,6'-Tetrasubstituted biphenyls can fit into this latter category.
Anomers
Anomerism is an identity for single-bonded ring structures in which "cis" or "Z" and "trans" or "E" (geometric isomerism) are needed to name the substituents on a carbon atom that also displays chirality; anomers therefore have one or more ring carbons that show both geometric isomerism and optical isomerism (enantiomerism). Anomers are named "alpha" (or "axial") and "beta" (or "equatorial") when substituting a cyclic ring structure that has single bonds between the carbon atoms of the ring, for example with a hydroxyl group, a hydroxymethyl group, a methoxy group, or another pyranose or furanose group; these are typical single-bond substituents, but the possibilities are not limited to them. Axial geometric isomerism is perpendicular (90 degrees) to a reference plane, while equatorial isomerism is 120 degrees away from the axial bond, deviating 30 degrees from the reference plane.
Atropisomers
Atropisomers are stereoisomers resulting from hindered rotation about single bonds where the steric strain barrier to rotation is high enough to allow for the isolation of the conformers.
More definitions
A configurational stereoisomer is a stereoisomer of a reference molecule that has the opposite configuration at a stereocenter (e.g., R- vs S- or E- vs Z-). This means that configurational isomers can be interconverted only by breaking covalent bonds to the stereocenter, for example, by inverting the configurations of some or all of the stereocenters in a compound.
An epimer is a diastereoisomer that has the opposite configuration at only one of the stereocenters.
Le Bel-van't Hoff rule
The Le Bel–van't Hoff rule states that for a structure with n asymmetric carbon atoms, there is a maximum of 2^n different stereoisomers possible. As an example, D-glucose is an aldohexose and has the formula C6H12O6. Four of its six carbon atoms are stereogenic, which means D-glucose is one of 2^4 = 16 possible stereoisomers.
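A minimal sketch of this counting rule (the function name and example molecules are illustrative, not from the text):

```python
def max_stereoisomers(n_stereocenters: int) -> int:
    """Upper bound from the Le Bel-van't Hoff rule: 2**n.
    Meso compounds and other internal symmetry can reduce the real count."""
    return 2 ** n_stereocenters

# D-glucose, an aldohexose with 4 stereogenic carbons, is one of 16 possibilities.
for name, n in [("glyceraldehyde", 1), ("aldotetrose", 2), ("aldohexose (e.g. glucose)", 4)]:
    print(f"{name}: n = {n}, max stereoisomers = {max_stereoisomers(n)}")
```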
| Physical sciences | Stereochemistry | Chemistry |
27978 | https://en.wikipedia.org/wiki/Skin | Skin | Skin is the layer of usually soft, flexible outer tissue covering the body of a vertebrate animal, with three main functions: protection, regulation, and sensation.
Other animal coverings, such as the arthropod exoskeleton, have different developmental origin, structure and chemical composition. The adjective cutaneous means "of the skin" (from Latin cutis 'skin'). In mammals, the skin is an organ of the integumentary system made up of multiple layers of ectodermal tissue and guards the underlying muscles, bones, ligaments, and internal organs. Skin of a different nature exists in amphibians, reptiles, and birds. Skin (including cutaneous and subcutaneous tissues) plays crucial roles in formation, structure, and function of extraskeletal apparatus such as horns of bovids (e.g., cattle) and rhinos, cervids' antlers, giraffids' ossicones, armadillos' osteoderm, and os penis/os clitoris.
All mammals have some hair on their skin, even marine mammals like whales, dolphins, and porpoises that appear to be hairless.
The skin interfaces with the environment and is the first line of defense from external factors. For example, the skin plays a key role in protecting the body against pathogens and excessive water loss. Its other functions are insulation, temperature regulation, sensation, and the production of vitamin D. Severely damaged skin may heal by forming scar tissue. This is sometimes discoloured and depigmented. The thickness of skin also varies from location to location on an organism. In humans, for example, the skin located under the eyes and around the eyelids is the thinnest skin on the body at 0.5 mm thick and is one of the first areas to show signs of aging such as "crow's feet" and wrinkles. The skin on the palms and the soles of the feet is the thickest skin on the body at 4 mm thick. The speed and quality of wound healing in skin is promoted by estrogen.
Fur is dense hair. Primarily, fur augments the insulation the skin provides but can also serve as a secondary sexual characteristic or as camouflage. On some animals, the skin is very hard and thick and can be processed to create leather. Reptiles and most fish have hard protective scales on their skin for protection, and birds have hard feathers, all made of tough beta-keratins. Amphibian skin is not a strong barrier, especially regarding the passage of chemicals via skin, and is often subject to osmosis and diffusive forces. For example, a frog sitting in an anesthetic solution would be sedated quickly as the chemical diffuses through its skin. Amphibian skin plays key roles in everyday survival and their ability to exploit a wide range of habitats and ecological conditions.
On 11 January 2024, biologists reported the discovery of the oldest known fossilized skin, dating to about 289 million years ago and possibly belonging to an ancient reptile.
Etymology
The word skin originally referred only to dressed and tanned animal hide, while the usual word for human skin was hide. Skin is a borrowing from Old Norse skinn "animal hide, fur", ultimately from the Proto-Indo-European root *sek-, meaning "to cut" (probably a reference to the fact that in those times animal hide was commonly cut off to be used as garment).
Structure in mammals
Mammalian skin is composed of two primary layers:
The epidermis, which provides waterproofing and serves as a barrier to infection.
The dermis, which serves as a location for the appendages of skin.
Epidermis
The epidermis is composed of the outermost layers of the skin. It forms a protective barrier over the body's surface, responsible for keeping water in the body and preventing pathogens from entering, and is a stratified squamous epithelium, composed of proliferating basal and differentiated suprabasal keratinocytes.
Keratinocytes are the major cells, constituting 95% of the epidermis, while Merkel cells, melanocytes and Langerhans cells are also present. The epidermis can be further subdivided into the following strata or layers (beginning with the outermost layer):
Stratum corneum
Stratum lucidum (only in palms and soles)
Stratum granulosum
Stratum spinosum
Stratum basale (also called the stratum germinativum)
Keratinocytes in the stratum basale proliferate through mitosis and the daughter cells move up the strata changing shape and composition as they undergo multiple stages of cell differentiation to eventually become anucleated. During that process, keratinocytes will become highly organized, forming cellular junctions (desmosomes) between each other and secreting keratin proteins and lipids which contribute to the formation of an extracellular matrix and provide mechanical strength to the skin. Keratinocytes from the stratum corneum are eventually shed from the surface (desquamation).
The epidermis contains no blood vessels, and cells in the deepest layers are nourished by diffusion from blood capillaries extending to the upper layers of the dermis.
Basement membrane
The epidermis and dermis are separated by a thin sheet of fibers called the basement membrane, which is made through the action of both tissues.
The basement membrane controls the traffic of the cells and molecules between the dermis and epidermis but also serves, through the binding of a variety of cytokines and growth factors, as a reservoir for their controlled release during physiological remodeling or repair processes.
Dermis
The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and strain. The dermis provides tensile strength and elasticity to the skin through an extracellular matrix composed of collagen fibrils, microfibrils, and elastic fibers, embedded in hyaluronan and proteoglycans. Skin proteoglycans are varied and have very specific locations. For example, hyaluronan, versican and decorin are present throughout the dermis and epidermis extracellular matrix, whereas biglycan and perlecan are only found in the epidermis.
It harbors many mechanoreceptors (nerve endings) that provide the sense of touch and heat through nociceptors and thermoreceptors. It also contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, lymphatic vessels and blood vessels. The blood vessels in the dermis provide nourishment and waste removal from its own cells as well as for the epidermis.
Dermis and subcutaneous tissues are thought to contain germinative cells involved in formation of horns, osteoderm, and other extra-skeletal apparatus in mammals.
The dermis is tightly connected to the epidermis through a basement membrane and is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a deep thicker area known as the reticular region.
Papillary region
The papillary region is composed of loose areolar connective tissue. This is named for its fingerlike projections called papillae that extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis, strengthening the connection between the two layers of skin.
Reticular region
The reticular region lies deep in the papillary region and is usually much thicker. It is composed of dense irregular connective tissue and receives its name from the dense concentration of collagenous, elastic, and reticular fibers that weave throughout it. These protein fibers give the dermis its properties of strength, extensibility, and elasticity.
Also located within the reticular region are the roots of the hair, sweat glands, sebaceous glands, receptors, nails, and blood vessels.
Subcutaneous tissue
The subcutaneous tissue (also hypodermis) is not part of the skin, and lies below the dermis. Its purpose is to attach the skin to underlying bone and muscle as well as supplying it with blood vessels and nerves. It consists of loose connective tissue and elastin. The main cell types are fibroblasts, macrophages and adipocytes (the subcutaneous tissue contains 50% of body fat). Fat serves as padding and insulation for the body.
Microorganisms like Staphylococcus epidermidis colonize the skin surface. The density of skin flora depends on region of the skin. The disinfected skin surface gets recolonized from bacteria residing in the deeper areas of the hair follicle, gut and urogenital openings.
Structure in fish, amphibians, birds, and reptiles
Fish
The epidermis of fish and of most amphibians consists entirely of live cells, with only minimal quantities of keratin in the cells of the superficial layer. It is generally permeable, and in the case of many amphibians, may actually be a major respiratory organ. The dermis of bony fish typically contains relatively little of the connective tissue found in tetrapods. Instead, in most species, it is largely replaced by solid, protective bony scales. Apart from some particularly large dermal bones that form parts of the skull, these scales are lost in tetrapods, although many reptiles do have scales of a different kind, as do pangolins. Cartilaginous fish have numerous tooth-like denticles embedded in their skin, in place of true scales.
Sweat glands and sebaceous glands are both unique to mammals, but other types of skin gland are found in other vertebrates. Fish typically have numerous individual mucus-secreting skin cells that aid in insulation and protection, but may also have poison glands, photophores, or cells that produce a more watery, serous fluid. In amphibians, the mucous cells are gathered together to form sac-like glands. Most living amphibians also possess granular glands in the skin, that secrete irritating or toxic compounds.
Although melanin is found in the skin of many species, in the reptiles, the amphibians, and fish, the epidermis is often relatively colorless. Instead, the color of the skin is largely due to chromatophores in the dermis, which, in addition to melanin, may contain guanine or carotenoid pigments. Many species, such as chameleons and flounders, may be able to change the color of their skin by adjusting the relative size of their chromatophores.
Amphibians
Overview
Amphibians possess two types of glands, mucous and granular (serous). Both of these glands are part of the integument and thus considered cutaneous. Mucous and granular glands are both divided into three different sections that connect to form the gland as a whole: the duct, the intercalary region, and the alveolar gland (sac). Structurally, the duct is derived from keratinocytes and passes through to the surface of the epidermal or outer skin layer, allowing secretions to exit the body. The gland alveolus is a sac-shaped structure found at the bottom or base of the granular gland; the cells in this sac specialize in secretion. Between the alveolar gland and the duct is the intercalary region, a transitional region connecting the duct to the gland alveolus beneath the epidermal skin layer. In general, granular glands are larger than the mucous glands, which are greater in number.
Granular glands
Granular glands can be identified as venomous and often differ in the type of toxin as well as the concentration of secretions across the various orders and species of amphibians. They are located in clusters whose concentration differs among amphibian taxa. The toxins can be fatal to most vertebrates or have no effect against others. These glands are alveolar, meaning they structurally have little sacs in which venom is produced and held before it is secreted during defensive behaviors.
Structurally, the ducts of the granular gland initially maintain a cylindrical shape. When the ducts mature and fill with fluid, the base of the ducts becomes swollen due to pressure from the inside. This causes the epidermal layer to form a pit-like opening on the surface of the duct through which the inner fluid is secreted in an upward fashion.
The intercalary region of granular glands is more developed and mature in comparison with mucous glands. This region resides as a ring of cells surrounding the basal portion of the duct which are argued to have an ectodermal muscular nature due to their influence over the lumen (space inside the tube) of the duct with dilation and constriction functions during secretions. The cells are found radially around the duct and provide a distinct attachment site for muscle fibers around the gland's body.
The gland alveolus is a sac that is divided into three specific regions/layers. The outer layer or tunica fibrosa is composed of densely packed connective-tissue which connects with fibers from the spongy intermediate layer where elastic fibers, as well as nerves, reside. The nerves send signals to the muscles as well as the epithelial layers. Lastly, the epithelium or tunica propria encloses the gland.
Mucous glands
Mucous glands are non-venomous and serve a different function for amphibians than granular glands. Mucous glands cover the entire surface of the amphibian body and specialize in keeping the body lubricated. They have many other functions, such as controlling pH, thermoregulation, adhesion to the environment, anti-predator defense (being slimy to grasp), chemical communication, and even anti-bacterial and anti-viral protection against pathogens.
The ducts of the mucous gland appear as cylindrical vertical tubes that break through the epidermal layer to the surface of the skin. The cells lining the inside of the ducts are oriented with their longitudinal axis forming 90-degree angles surrounding the duct in a helical fashion.
Intercalary cells react identically to those of granular glands but on a smaller scale. Among the amphibians, there are taxa which contain a modified intercalary region (depending on the function of the glands), yet the majority share the same structure.
The alveolus of a mucous gland is much simpler and consists only of an epithelial layer and connective tissue that forms a cover over the gland. This gland lacks a tunica propria and appears to have delicate and intricate fibers that pass over the gland's muscle and epithelial layers.
Birds and reptiles
The epidermis of birds and reptiles is closer to that of mammals, with a layer of dead keratin-filled cells at the surface, to help reduce water loss. A similar pattern is also seen in some of the more terrestrial amphibians such as toads. In these animals, there is no clear differentiation of the epidermis into distinct layers, as occurs in humans, with the change in cell type being relatively gradual. The mammalian epidermis always possesses at least a stratum germinativum and stratum corneum, but the other intermediate layers found in humans are not always distinguishable.
Hair is a distinctive feature of mammalian skin, while feathers are (at least among living species) similarly unique to birds.
Birds and reptiles have relatively few skin glands, although there may be a few structures for specific purposes, such as pheromone-secreting cells in some reptiles, or the uropygial gland of most birds.
Development
Cutaneous structures arise from the epidermis and include a variety of features such as hair, feathers, claws and nails. During embryogenesis, the epidermis splits into two layers: the periderm (which is lost) and the basal layer. The basal layer is a stem cell layer and through asymmetrical divisions, becomes the source of skin cells throughout life. It is maintained as a stem cell layer through an autocrine signal, TGF alpha, and through paracrine signaling from FGF7 (keratinocyte growth factor) produced by the dermis below the basal cells. In mice, over-expression of these factors leads to an overproduction of granular cells and thick skin.
It is believed that the mesoderm defines the pattern. The epidermis instructs the mesodermal cells to condense and then the mesoderm instructs the epidermis of what structure to make through a series of reciprocal inductions. Transplantation experiments involving frog and newt epidermis indicated that the mesodermal signals are conserved between species but the epidermal response is species-specific meaning that the mesoderm instructs the epidermis of its position and the epidermis uses this information to make a specific structure.
Functions
Skin performs the following functions:
Protection: an anatomical barrier from pathogens and damage between the internal and external environment in bodily defense. (See Skin absorption.) Langerhans cells in the skin are part of the adaptive immune system.
Sensation: contains a variety of nerve endings that respond to heat and cold, touch, pressure, vibration, and tissue injury (see somatosensory system and haptic perception).
Thermoregulation: Eccrine (sweat) glands and dilated blood vessels (increased superficial perfusion) aid heat loss, while constricted vessels greatly reduce cutaneous blood flow and conserve heat. Erector pili muscles in mammals adjust the angle of hair shafts to change the degree of insulation provided by hair or fur.
Control of evaporation: the skin provides a relatively dry and semi-impermeable barrier to reduce fluid loss.
Storage and synthesis: acts as a storage center for lipids and water
Absorption through the skin: Oxygen, nitrogen and carbon dioxide can diffuse into the epidermis in small amounts; some animals use their skin as their sole respiration organ (in humans, the cells comprising the outermost 0.25–0.40 mm of the skin are "almost exclusively supplied by external oxygen", although the "contribution to total respiration is negligible"). Some medications are absorbed through the skin.
Water resistance: The skin acts as a water-resistant barrier so essential nutrients are not washed out of the body. The nutrients and oils that help hydrate the skin are covered by the outermost skin layer, the epidermis. This is helped in part by the sebaceous glands, which release sebum, an oily liquid. Water alone does not strip these oils from the skin, because they reside in the dermis and are shielded by the epidermis; without the epidermis they would be washed away.
Camouflage: whether the skin is naked or covered in fur, scales, or feathers, skin structures provide protective coloration and patterns that help to conceal animals from predators or prey.
Mechanics
Skin is a soft tissue and exhibits the key mechanical behaviors of such tissues. The most pronounced feature is the J-shaped stress–strain response, in which a region of large strain and minimal stress exists and corresponds to the microstructural straightening and reorientation of collagen fibrils. In some cases the intact skin is prestretched, like a wetsuit around the diver's body, and in other cases the intact skin is under compression. Small circular holes punched in the skin may widen or close into ellipses, or shrink and remain circular, depending on preexisting stresses.
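The J-shaped response can be illustrated with a simple exponential (Fung-type) stress–strain relation commonly used for soft tissues; the functional form and parameter values below are illustrative assumptions, not measurements from this article:

```python
import numpy as np

# Illustrative Fung-type model: stress = A * (exp(B * strain) - 1)
# A (kPa) and B (dimensionless) are assumed example values, not skin data.
A, B = 2.0, 20.0

strain = np.linspace(0.0, 0.4, 9)          # engineering strain
stress = A * (np.exp(B * strain) - 1.0)    # kPa

# Tangent stiffness grows with strain, giving the characteristic J-curve:
# low stiffness while collagen fibrils straighten, rapid stiffening afterwards.
for e, s in zip(strain, stress):
    print(f"strain = {e:4.2f}  stress = {s:10.1f} kPa  tangent stiffness = {A * B * np.exp(B * e):10.1f} kPa")
```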
Aging
Tissue homeostasis generally declines with age, in part because stem/progenitor cells fail to self-renew or differentiate. Skin aging is caused in part by TGF-β, which blocks the conversion of dermal fibroblasts into the fat cells that provide support. Common changes in the skin as a result of aging include wrinkles, discoloration, and skin laxity, but can also manifest in more severe forms such as skin malignancies. Moreover, these factors may be worsened by sun exposure in a process known as photoaging.
| Biology and health sciences | Biology | null |
27979 | https://en.wikipedia.org/wiki/Sunlight | Sunlight | Sunlight is a portion of the electromagnetic radiation given off by the Sun, in particular infrared, visible, and ultraviolet light. On Earth, sunlight is scattered and filtered through Earth's atmosphere as daylight when the Sun is above the horizon. When direct solar radiation is not blocked by clouds, it is experienced as sunshine, a combination of bright light and radiant heat (atmospheric). When blocked by clouds or reflected off other objects, sunlight is diffused. Sources estimate a global average of between 164 and 340 watts per square meter over a 24-hour day; this figure is estimated by NASA to be about a quarter of Earth's average total solar irradiance.
The ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a requisite for vitamin D3 synthesis and a mutagen.
Sunlight takes about 8.3 minutes to reach Earth from the surface of the Sun. A photon starting at the center of the Sun and changing direction every time it encounters a charged particle would take between 10,000 and 170,000 years to get to the surface.
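The quoted range of 10,000 to 170,000 years can be motivated by a simple random-walk estimate; the photon mean free paths used below are assumed order-of-magnitude values, not figures from this article:

```python
# Order-of-magnitude random walk: a photon taking steps of mean free path l
# inside a sphere of radius R needs about N = (R/l)**2 steps, i.e. a total
# path length of about R**2 / l, traversed at the speed of light c.
c = 3.0e8            # m/s
R_sun = 6.96e8       # m, solar radius
year = 3.15e7        # s

for mfp in (1e-4, 1e-3, 1e-2):   # assumed mean free paths: 0.1 mm, 1 mm, 1 cm
    t = R_sun**2 / (mfp * c)     # escape time in seconds
    print(f"mean free path {mfp * 1e3:6.1f} mm -> escape time ~ {t / year:,.0f} years")
```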
Sunlight is a key factor in photosynthesis, the process used by plants and other autotrophic organisms to convert light energy, normally from the Sun, into chemical energy that can be used to synthesize carbohydrates and fuel the organisms' activities.
Daylighting is the natural lighting of interior spaces by admitting sunlight.
Solar irradiance is the solar energy available from sunlight.
Measurement
Researchers can measure the intensity of sunlight using a sunshine recorder, pyranometer, or pyrheliometer. To calculate the amount of sunlight reaching the ground, both the eccentricity of Earth's elliptic orbit and the attenuation by Earth's atmosphere have to be taken into account. The extraterrestrial solar illuminance (E_ext), corrected for the elliptic orbit by using the day number of the year (dn), is given to a good approximation by

E_ext = E_sc · (1 + 0.033412 · cos(2π(dn − 3)/365.25))
where dn = 1 on January 1; dn = 32 on February 1; dn = 59 on March 1 (except in leap years, where dn = 60), etc. In this formula dn − 3 is used because in modern times Earth's perihelion, the closest approach to the Sun and therefore the maximum of E_ext, occurs around January 3 each year. The value of 0.033412 is determined knowing that the ratio between the perihelion distance (0.98328989 AU) squared and the aphelion distance (1.01671033 AU) squared should be approximately 0.935338.
The solar illuminance constant (E_sc) is equal to 128 × 10^3 lux. The direct normal illuminance (E_dn), corrected for the attenuating effects of the atmosphere, is given by

E_dn = E_ext · e^(−c·m)

where c is the atmospheric extinction coefficient and m is the relative optical airmass. The atmospheric extinction brings the number of lux down to around 100,000 lux.
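A minimal numerical sketch of the two formulas above, keeping the symbols E_sc, E_ext, E_dn, c, and m; the extinction coefficient and airmass used in the example call are assumed values, not taken from the article:

```python
import math

E_sc = 128e3  # lux, solar illuminance constant

def extraterrestrial_illuminance(dn: int) -> float:
    """E_ext corrected for Earth's elliptical orbit (dn = day number of the year)."""
    return E_sc * (1 + 0.033412 * math.cos(2 * math.pi * (dn - 3) / 365.25))

def direct_normal_illuminance(dn: int, c: float = 0.21, m: float = 1.0) -> float:
    """E_dn = E_ext * exp(-c*m); the values of c and m here are assumed examples."""
    return extraterrestrial_illuminance(dn) * math.exp(-c * m)

print(extraterrestrial_illuminance(3))    # near perihelion: about 1.033 * E_sc
print(extraterrestrial_illuminance(185))  # near aphelion:   about 0.967 * E_sc
print(direct_normal_illuminance(80))      # roughly 1e5 lux for the assumed c and m
```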
The total amount of energy received at ground level from the Sun at the zenith depends on the distance to the Sun and thus on the time of year. It is about 3.3% higher than average in January and 3.3% lower in July (see below). If the extraterrestrial solar radiation is 1,367 watts per square meter (the value when the Earth–Sun distance is 1 astronomical unit), then the direct sunlight at Earth's surface when the Sun is at the zenith is about 1,050 W/m2, but the total amount (direct and indirect from the atmosphere) hitting the ground is around 1,120 W/m2. In terms of energy, sunlight at Earth's surface is around 52 to 55 percent infrared (above 700 nm), 42 to 43 percent visible (400 to 700 nm), and 3 to 5 percent ultraviolet (below 400 nm). At the top of the atmosphere, sunlight is about 30% more intense, having about 8% ultraviolet (UV), with most of the extra UV consisting of biologically damaging short-wave ultraviolet.
Sunlight has a luminous efficacy of about 93 lumens per watt of radiant flux. This is higher than the efficacy (of source) of artificial lighting other than LEDs, which means using sunlight for illumination heats up a room less than fluorescent or incandescent lighting. Multiplying the figure of 1,050 watts per square meter by 93 lumens per watt indicates that bright sunlight provides an illuminance of approximately 98,000 lux (lumens per square meter) on a perpendicular surface at sea level. The illumination of a horizontal surface will be considerably less than this if the Sun is not very high in the sky. Averaged over a day, the highest amount of sunlight on a horizontal surface occurs in January at the South Pole (see insolation).
Dividing the irradiance of 1,050 W/m2 by the size of the Sun's disk in steradians gives an average radiance of 15.4 MW per square metre per steradian. (However, the radiance at the center of the sun's disk is somewhat higher than the average over the whole disk due to limb darkening.) Multiplying this by π gives an upper limit to the irradiance which can be focused on a surface using mirrors: 48.5 MW/m2.
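A quick check of the arithmetic in the two preceding paragraphs; the Sun's angular radius of about 0.266 degrees is a standard value assumed here, not stated in the text:

```python
import math

irradiance = 1050.0                      # W/m^2, direct sunlight at the zenith
efficacy = 93.0                          # lm/W, luminous efficacy of sunlight
print(irradiance * efficacy)             # ~97,650 lux, i.e. roughly 98,000 lux

# Average radiance of the solar disk: irradiance divided by its solid angle.
angular_radius = math.radians(0.266)             # assumed standard value for the Sun
solid_angle = math.pi * angular_radius**2        # ~6.8e-5 sr (small-angle approximation)
radiance = irradiance / solid_angle
print(radiance / 1e6)                    # ~15.5 MW/(m^2 sr), close to the quoted 15.4

# Multiplying by pi gives the upper limit on irradiance focusable with mirrors.
print(math.pi * radiance / 1e6)          # ~48.7 MW/m^2, close to the quoted 48.5
```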
Composition and power
The spectrum of the Sun's solar radiation can be compared to that of a black body with a temperature of about 5,800 K (see graph). The Sun emits EM radiation across most of the electromagnetic spectrum. Although the radiation created in the solar core consists mostly of x rays, internal absorption and thermalization convert these super-high-energy photons to lower-energy photons before they reach the Sun's surface and are emitted out into space. As a result, the photosphere of the Sun does not emit much X radiation (solar X-rays), although it does emit such "hard radiations" as X-rays and even gamma rays during solar flares. The quiet (non-flaring) Sun, including its corona, emits a broad range of wavelengths: X-rays, ultraviolet, visible light, infrared, and radio waves. Different depths in the photosphere have different temperatures, and this partially explains the deviations from a black-body spectrum.
There is also a flux of gamma rays from the quiescent sun, obeying a power law between 0.5 and 2.6 TeV. Some gamma rays are caused by cosmic rays interacting with the solar atmosphere, but this does not explain these findings.
The only direct signature of the nuclear processes in the core of the Sun is via the very weakly interacting neutrinos.
Although the solar corona is a source of extreme ultraviolet and X-ray radiation, these rays make up only a very small amount of the power output of the Sun (see spectrum at right). The spectrum of nearly all solar electromagnetic radiation striking the Earth's atmosphere spans a range of 100 nm to about 1 mm (1,000,000 nm). This band of significant radiation power can be divided into five regions in increasing order of wavelengths:
Ultraviolet C or (UVC) range, which spans a range of 100 to 280 nm. The term ultraviolet refers to the fact that the radiation is at higher frequency than violet light (and, hence, also invisible to the human eye). Due to absorption by the atmosphere very little reaches Earth's surface. This spectrum of radiation has germicidal properties, as used in germicidal lamps.
Ultraviolet B or (UVB) range spans 280 to 315 nm. It is also greatly absorbed by the Earth's atmosphere, and along with UVC causes the photochemical reaction leading to the production of the ozone layer. It directly damages DNA and causes sunburn. In addition to this short-term effect it enhances skin ageing and significantly promotes the development of skin cancer, but is also required for vitamin D synthesis in the skin of mammals.
Ultraviolet A or (UVA) spans 315 to 400 nm. This band was once held to be less damaging to DNA, and hence is used in cosmetic artificial sun tanning (tanning booths and tanning beds) and PUVA therapy for psoriasis. However, UVA is now known to cause significant damage to DNA via indirect routes (formation of free radicals and reactive oxygen species), and can cause cancer.
Visible range or light spans 380 to 700 nm. As the name suggests, this range is visible to the naked eye. It is also the strongest output range of the Sun's total irradiance spectrum.
Infrared range that spans 700 nm to 1,000,000 nm (1 mm). It comprises an important part of the electromagnetic radiation that reaches Earth. Scientists divide the infrared range into three types on the basis of wavelength:
Infrared-A: 700 nm to 1,400 nm
Infrared-B: 1,400 nm to 3,000 nm
Infrared-C: 3,000 nm to 1 mm.
Published tables
Tables of direct solar radiation on various slopes from 0 to 60 degrees north latitude, in calories per square centimetre, issued in 1972 and published by Pacific Northwest Forest and Range Experiment Station, Forest Service, U.S. Department of Agriculture, Portland, Oregon, USA, appear on the web.
Intensity in the Solar System
Different bodies of the Solar System receive light of an intensity inversely proportional to the square of their distance from Sun.
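As a rough numerical sketch of this inverse-square scaling, assuming a solar constant of about 1,361 W/m2 at 1 AU and approximate round values for the mean orbital distances (illustrative assumptions, not values taken from this article):

```python
S0 = 1361.0  # W/m^2 at 1 AU (approximate solar constant)

# Approximate mean distances from the Sun in astronomical units (illustrative).
distances_au = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Jupiter": 5.20, "Saturn": 9.58, "Neptune": 30.1, "Pluto": 39.5,
}

for body, d in distances_au.items():
    irradiance = S0 / d**2   # inverse-square law
    print(f"{body:8s} {d:6.2f} AU  ~{irradiance:8.1f} W/m^2")
```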
A table comparing the amount of solar radiation received by each planet in the Solar System at the top of its atmosphere:
The actual brightness of sunlight that would be observed at the surface also depends on the presence and composition of an atmosphere. For example, Venus's thick atmosphere reflects more than 60% of the solar light it receives. The actual illumination of the surface is about 14,000 lux, comparable to that on Earth "in the daytime with overcast clouds".
Sunlight on Mars would be more or less like daylight on Earth during a slightly overcast day, and, as can be seen in the pictures taken by the rovers, there is enough diffuse sky radiation that shadows would not seem particularly dark. Thus, it would give perceptions and "feel" very much like Earth daylight. The spectrum on the surface is slightly redder than that on Earth, due to scattering by reddish dust in the Martian atmosphere.
For comparison, sunlight on Saturn is slightly brighter than Earth sunlight at the average sunset or sunrise. Even on Pluto, the sunlight would still be bright enough to almost match the average living room. To see sunlight as dim as full moonlight on Earth, a distance of about 500 AU (~69 light-hours) is needed; only a handful of objects in the Solar System have been discovered that are known to orbit farther than such a distance, among them 90377 Sedna.
Variations in solar irradiance
Seasonal and orbital variation
On Earth, the solar radiation varies with the angle of the Sun above the horizon, with longer sunlight duration at high latitudes during summer, varying to no sunlight at all in winter near the pertinent pole. When the direct radiation is not blocked by clouds, it is experienced as sunshine. The warming of the ground (and other objects) depends on the absorption of the electromagnetic radiation in the form of heat.
The amount of radiation intercepted by a planetary body varies inversely with the square of the distance between the star and the planet. Earth's orbit and obliquity change with time (over thousands of years), sometimes forming a nearly perfect circle, and at other times stretching out to an orbital eccentricity of 5% (currently 1.67%). As the orbital eccentricity changes, the average distance from the Sun (the semimajor axis) does not significantly vary, and so the total insolation over a year remains almost constant due to Kepler's second law,

dA/dt = (1/2) r^2 (dθ/dt) = constant,

where dA/dt is the "areal velocity" invariant; its integral over the orbital period (also invariant) is a constant equal to the area of the orbital ellipse. Because the instantaneous flux falls as 1/r^2 while, at constant areal velocity, the time spent per unit of swept angle grows as r^2, the energy received per unit of swept angle is independent of r, so the yearly total depends only weakly on eccentricity.
If the Sun's radiative output is assumed constant over time and the irradiance follows the inverse-square law, the orbit-averaged insolation is likewise essentially constant. However, the seasonal and latitudinal distribution and intensity of solar radiation received at Earth's surface does vary. The effect of Sun angle on climate results in the change in solar energy in summer and winter. For example, at latitudes of 65 degrees, this can vary by more than 25% as a result of Earth's orbital variation. Because changes in winter and summer tend to offset, the change in the annual average insolation at any given location is near zero, but the redistribution of energy between summer and winter does strongly affect the intensity of seasonal cycles. Such changes associated with the redistribution of solar energy are considered a likely cause for the coming and going of recent ice ages (see: Milankovitch cycles).
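A brief numerical sketch of this near-constancy, integrating the inverse-square flux over one orbit parametrized by the eccentric anomaly (the orbital-mechanics relations noted in the comments are standard results assumed here, not taken from the article):

```python
import numpy as np

def relative_annual_insolation(e: float, n: int = 200_000) -> float:
    """Annual insolation for orbital eccentricity e, relative to a circular orbit.

    With r = a(1 - e*cos(E)) and dt = (T/2pi)(1 - e*cos(E)) dE (E = eccentric anomaly),
    the yearly integral of the 1/r^2 flux reduces to the average of 1/(1 - e*cos(E)),
    which analytically equals 1/sqrt(1 - e^2).
    """
    E = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.mean(1.0 / (1.0 - e * np.cos(E))))

# Current eccentricity (0.0167) raises the yearly total by only ~0.014% over a
# circular orbit; even e = 0.05 raises it by just ~0.13%.
for e in (0.0, 0.0167, 0.05):
    print(f"e = {e:6.4f}: yearly insolation = {relative_annual_insolation(e):.5f} x circular")
```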
Solar intensity variation
Space-based observations of solar irradiance started in 1978. These measurements show that the solar constant is not constant. It varies on many time scales, including the 11-year sunspot solar cycle. When going further back in time, one has to rely on irradiance reconstructions, using sunspots for the past 400 years or cosmogenic radionuclides for going back 10,000 years.
Such reconstructions have been done. These studies show that in addition to the solar irradiance variation with the solar cycle (the Schwabe cycle), solar activity varies with longer cycles, such as the proposed 88-year Gleissberg cycle, 208-year DeVries cycle, and 1,000-year Eddy cycle.
Solar irradiance
Solar constant
The solar constant is a measure of flux density: it is the amount of incoming solar electromagnetic radiation per unit area that would be incident on a plane perpendicular to the rays, at a distance of one astronomical unit (AU) (roughly the mean distance from the Sun to Earth). The "solar constant" includes all types of solar radiation, not just the visible light. Its average value was thought to be approximately 1,366 W/m2, varying slightly with solar activity, but recent recalibrations of the relevant satellite observations indicate that a value closer to 1,361 W/m2 is more realistic.
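As a consistency check on this value, the solar constant follows from spreading the Sun's luminosity over a sphere of radius one astronomical unit; the nominal luminosity and AU used below are standard reference values assumed here:

```python
import math

L_sun = 3.828e26   # W, nominal solar luminosity (assumed standard value)
AU = 1.496e11      # m, astronomical unit

solar_constant = L_sun / (4 * math.pi * AU**2)
print(f"{solar_constant:.0f} W/m^2")   # ~1361 W/m^2, matching the recalibrated value
```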
Total solar irradiance (TSI) and spectral solar irradiance (SSI) upon Earth
Since 1978, a series of overlapping NASA and ESA satellite experiments have measured total solar irradiance (TSI) – the amount of solar radiation received at the top of Earth's atmosphere – as 1.365 kilowatts per square meter (kW/m2). TSI observations continue with the ACRIMSAT/ACRIM3, SOHO/VIRGO and SORCE/TIM satellite experiments. Observations have revealed variation of TSI on many timescales, including the solar magnetic cycle and many shorter periodic cycles. TSI provides the energy that drives Earth's climate, so continuation of the TSI time-series database is critical to understanding the role of solar variability in climate change.
Since 2003, the SORCE Spectral Irradiance Monitor (SIM) has monitored Spectral solar irradiance (SSI) – the spectral distribution of the TSI. Data indicate that SSI at UV (ultraviolet) wavelength corresponds in a less clear, and probably more complicated fashion, with Earth's climate responses than earlier assumed, fueling broad avenues of new research in "the connection of the Sun and stratosphere, troposphere, biosphere, ocean, and Earth's climate".
Surface illumination and spectrum
The spectrum of surface illumination depends upon solar elevation due to atmospheric effects, with the blue spectral component dominating during twilight before and after sunrise and sunset, respectively, and red dominating during sunrise and sunset. These effects are apparent in natural light photography where the principal source of illumination is sunlight as mediated by the atmosphere.
While the color of the sky is usually determined by Rayleigh scattering, an exception occurs at sunset and twilight. "Preferential absorption of sunlight by ozone over long horizon paths gives the zenith sky its blueness when the sun is near the horizon".
Spectral composition of sunlight at Earth's surface
The Sun may be said to illuminate; illuminance is a measure of the light within a specific sensitivity range. Many animals (including humans) have a sensitivity range of approximately 400–700 nm, and given optimal conditions the absorption and scattering by Earth's atmosphere produces illumination that approximates an equal-energy illuminant for most of this range. The useful range for color vision in humans, for example, is approximately 450–650 nm. Aside from effects that arise at sunset and sunrise, the spectral composition changes primarily in respect to how directly sunlight is able to illuminate. When illumination is indirect, Rayleigh scattering in the upper atmosphere will lead blue wavelengths to dominate. Water vapour in the lower atmosphere produces further scattering, and ozone, dust and water particles will also absorb particular wavelengths.
Life on Earth
The existence of nearly all life on Earth is fueled by light from the Sun. Most autotrophs, such as plants, use the energy of sunlight, combined with carbon dioxide and water, to produce simple sugars—a process known as photosynthesis. These sugars are then used as building-blocks and in other synthetic pathways that allow the organism to grow.
Heterotrophs, such as animals, use light from the Sun indirectly by consuming the products of autotrophs, either by consuming autotrophs, by consuming their products, or by consuming other heterotrophs. The sugars and other molecular components produced by the autotrophs are then broken down, releasing stored solar energy, and giving the heterotroph the energy required for survival. This process is known as cellular respiration.
In prehistory, humans began to further extend this process by putting plant and animal materials to other uses. They used animal skins for warmth, for example, or wooden weapons to hunt. These skills allowed humans to harvest more of the sunlight than was possible through glycolysis alone, and human population began to grow.
During the Neolithic Revolution, the domestication of plants and animals further increased human access to solar energy. Fields devoted to crops were enriched by inedible plant matter, providing sugars and nutrients for future harvests. Animals that had previously provided humans with only meat and tools once they were killed were now used for labour throughout their lives, fueled by grasses inedible to humans. Fossil fuels are the remnants of ancient plant and animal matter, formed using energy from sunlight and then trapped within Earth for millions of years.
Cultural aspects
The effect of sunlight is relevant to painting, evidenced for instance in works of Édouard Manet and Claude Monet on outdoor scenes and landscapes.
Many people find direct sunlight to be too bright for comfort; indeed, looking directly at the Sun can cause long-term vision damage. To compensate for the brightness of sunlight, many people wear sunglasses. Cars, many helmets and caps are equipped with visors to block the Sun from direct vision when the Sun is at a low angle. Sunshine is often blocked from entering buildings through the use of walls, window blinds, awnings, shutters, curtains, or nearby shade trees. Sunshine exposure is needed biologically for the production of Vitamin D in the skin, a vital compound needed to make strong bone and muscle in the body.
In many world religions, such as Hinduism, the Sun is considered to be a god, as it is the source of life and energy on Earth. The Sun was also considered to be a god in Ancient Egypt.
Sunbathing
Sunbathing is a popular leisure activity in which a person sits or lies in direct sunshine. People often sunbathe in comfortable places where there is ample sunlight. Some common places for sunbathing include beaches, open air swimming pools, parks, gardens, and sidewalk cafes. Sunbathers typically wear limited amounts of clothing or some simply go nude. For some, an alternative to sunbathing is the use of a sunbed that generates ultraviolet light and can be used indoors regardless of weather conditions. Tanning beds have been banned in a number of states in the world.
For many people with light skin, one purpose for sunbathing is to darken one's skin color (get a sun tan), as this is considered in some cultures to be attractive, associated with outdoor activity, vacations/holidays, and health. Some people prefer naked sunbathing so that an "all-over" or "even" tan can be obtained, sometimes as part of a specific lifestyle.
Controlled heliotherapy, or sunbathing, has been used as a treatment for psoriasis and other maladies.
Skin tanning is achieved by an increase in the dark pigment inside skin cells called melanocytes, and is an automatic response mechanism of the body to sufficient exposure to ultraviolet radiation from the Sun or from artificial sunlamps. Thus, the tan gradually disappears with time, when one is no longer exposed to these sources.
Effects on human health
The ultraviolet radiation in sunlight has both positive and negative health effects, as it is both a principal source of vitamin D3 and a mutagen. A dietary supplement can supply vitamin D without this mutagenic effect, but bypasses natural mechanisms that would prevent overdoses of vitamin D generated internally from sunlight. Vitamin D has a wide range of positive health effects, which include strengthening bones and possibly inhibiting the growth of some cancers. Sun exposure has also been associated with the timing of melatonin synthesis, maintenance of normal circadian rhythms, and reduced risk of seasonal affective disorder.
Long-term sunlight exposure is known to be associated with the development of skin cancer, skin aging, immune suppression, and eye diseases such as cataracts and macular degeneration. Short-term overexposure is the cause of sunburn, snow blindness, and solar retinopathy.
UV rays, and therefore sunlight and sunlamps, are the only listed carcinogens that are known to have health benefits, and a number of public health organizations state that there needs to be a balance between the risks of having too much sunlight or too little. There is a general consensus that sunburn should always be avoided.
Epidemiological data show that people who have more exposure to sunlight have lower rates of high blood pressure and cardiovascular-related mortality. While sunlight (and its UV rays) is a risk factor for skin cancer, "sun avoidance may carry more of a cost than benefit for over-all good health". A study found that there is no evidence that UV reduces lifespan, in contrast to other risk factors like smoking, alcohol and high blood pressure.
Effect on plant genomes
Elevated solar UV-B doses increase the frequency of DNA recombination in Arabidopsis thaliana and tobacco (Nicotiana tabacum) plants. These increases are accompanied by strong induction of an enzyme with a key role in recombinational repair of DNA damage. Thus the level of terrestrial solar UV-B radiation likely affects genome stability in plants.
| Physical sciences | Atmospheric optics | Earth science |
27980 | https://en.wikipedia.org/wiki/Stellar%20evolution | Stellar evolution | Stellar evolution is the process by which a star changes over the course of its lifetime and how it can lead to the creation of a new star. Depending on the mass of the star, its lifetime can range from a few million years for the most massive to trillions of years for the least massive, which is considerably longer than the current age of the universe. The table shows the lifetimes of stars as a function of their masses. All stars are formed from collapsing clouds of gas and dust, often called nebulae or molecular clouds. Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is known as a main-sequence star.
Nuclear fusion powers a star for most of its existence. Initially the energy is generated by the fusion of hydrogen atoms at the core of the main-sequence star. Later, as the preponderance of atoms at the core becomes helium, stars like the Sun begin to fuse hydrogen along a spherical shell surrounding the core. This process causes the star to gradually grow in size, passing through the subgiant stage until it reaches the red-giant phase. Stars with at least half the mass of the Sun can also begin to generate energy through the fusion of helium at their core, whereas more massive stars can fuse heavier elements along a series of concentric shells. Once a star like the Sun has exhausted its nuclear fuel, its core collapses into a dense white dwarf and the outer layers are expelled as a planetary nebula. Stars with around ten or more times the mass of the Sun can explode in a supernova as their inert iron cores collapse into an extremely dense neutron star or black hole. Although the universe is not old enough for any of the smallest red dwarfs to have reached the end of their existence, stellar models suggest they will slowly become brighter and hotter before running out of hydrogen fuel and becoming low-mass white dwarfs.
Stellar evolution is not studied by observing the life of a single star, as most stellar changes occur too slowly to be detected, even over many centuries. Instead, astrophysicists come to understand how stars evolve by observing numerous stars at various points in their lifetime, and by simulating stellar structure using computer models.
Star formation
Protostar
Stellar evolution starts with the gravitational collapse of a giant molecular cloud. Typical giant molecular clouds are roughly across and contain up to . As it collapses, a giant molecular cloud breaks into smaller and smaller pieces. In each of these fragments, the collapsing gas releases gravitational potential energy as heat. As its temperature and pressure increase, a fragment condenses into a rotating ball of superhot gas known as a protostar. Filamentary structures are truly ubiquitous in the molecular cloud. Dense molecular filaments will fragment into gravitationally bound cores, which are the precursors of stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed fragmentation manner of the filaments. In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with spacing comparable to the filament inner width, and embedded two protostars with gas outflows.
A protostar continues to grow by accretion of gas and dust from the molecular cloud, becoming a pre-main-sequence star as it reaches its final mass. Further development is determined by its mass. Mass is typically compared to the mass of the Sun: 1 M☉ means 1 solar mass.
Protostars are encompassed in dust, and are thus more readily visible at infrared wavelengths.
Observations from the Wide-field Infrared Survey Explorer (WISE) have been especially important for unveiling numerous galactic protostars and their parent star clusters.
Brown dwarfs and sub-stellar objects
Protostars whose masses are too low never reach temperatures high enough for nuclear fusion of hydrogen to begin. These are known as brown dwarfs. The International Astronomical Union defines brown dwarfs as stars massive enough to fuse deuterium at some point in their lives (about 13 Jupiter masses, or 2.5 × 10^28 kg). Objects below this mass are classified as sub-brown dwarfs (but if they orbit around another stellar object they are classified as planets). Both types, deuterium-burning and not, shine dimly and fade away slowly, cooling gradually over hundreds of millions of years.
Main sequence stellar mass objects
For a more-massive protostar, the core temperature will eventually reach 10 million kelvin, initiating the proton–proton chain reaction and allowing hydrogen to fuse, first to deuterium and then to helium. In stars slightly more massive than the Sun, the carbon–nitrogen–oxygen fusion reaction (CNO cycle) contributes a large portion of the energy generation. The onset of nuclear fusion leads relatively quickly to a hydrostatic equilibrium in which energy released by the core maintains a high gas pressure, balancing the weight of the star's matter and preventing further gravitational collapse. The star thus evolves rapidly to a stable state, beginning the main-sequence phase of its evolution.
A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main sequence lifespan.
Planetary system
A star may gain a protoplanetary disk, which furthermore can develop into a planetary system.
Mature stars
Eventually the star's core exhausts its supply of hydrogen and the star begins to evolve off the main sequence. Without the outward radiation pressure generated by the fusion of hydrogen to counteract the force of gravity, the core contracts until either electron degeneracy pressure becomes sufficient to oppose gravity or the core becomes hot enough (around 100 MK) for helium fusion to begin. Which of these happens first depends upon the star's mass.
Low-mass stars
What happens after a low-mass star ceases to produce energy through fusion has not been directly observed; the universe is around 13.8 billion years old, which is less time (by several orders of magnitude, in some cases) than it takes for fusion to cease in such stars.
Recent astrophysical models suggest that low-mass red dwarfs may stay on the main sequence for some six to twelve trillion years, gradually increasing in both temperature and luminosity, and take several hundred billion years more to collapse, slowly, into a white dwarf. Such stars will not become red giants as the whole star is a convection zone and it will not develop a degenerate helium core with a shell burning hydrogen. Instead, hydrogen fusion will proceed until almost the whole star is helium.
Slightly more massive stars do expand into red giants, but their helium cores are not massive enough to reach the temperatures required for helium fusion so they never reach the tip of the red-giant branch. When hydrogen shell burning finishes, these stars move directly off the red-giant branch like a post-asymptotic-giant-branch (AGB) star, but at lower luminosity, to become a white dwarf. A star with an initial mass of about half a solar mass will be able to reach temperatures high enough to fuse helium, and these "mid-sized" stars go on to further stages of evolution beyond the red-giant branch.
Mid-sized stars
Mid-sized stars become red giants, which are large non-main-sequence stars of stellar classification K or M. Red giants lie along the right edge of the Hertzsprung–Russell diagram due to their red color and large luminosity. Examples include Aldebaran in the constellation Taurus and Arcturus in the constellation of Boötes.
Mid-sized stars are red giants during two different phases of their post-main-sequence evolution: red-giant-branch stars, with inert cores made of helium and hydrogen-burning shells, and asymptotic-giant-branch stars, with inert cores made of carbon and helium-burning shells inside the hydrogen-burning shells. Between these two phases, stars spend a period on the horizontal branch with a helium-fusing core. Many of these helium-fusing stars cluster towards the cool end of the horizontal branch as K-type giants and are referred to as red clump giants.
Subgiant phase
When a star exhausts the hydrogen in its core, it leaves the main sequence and begins to fuse hydrogen in a shell outside the core. The core increases in mass as the shell produces more helium. Depending on the mass of the helium core, this continues for several million to one or two billion years, with the star expanding and cooling at a similar or slightly lower luminosity than in its main-sequence state. Eventually either the core becomes degenerate, in stars around the mass of the sun, or the outer layers cool sufficiently to become opaque, in more massive stars. Either of these changes causes the hydrogen shell to increase in temperature and the luminosity of the star to increase, at which point the star expands onto the red-giant branch.
Red-giant-branch phase
The expanding outer layers of the star are convective, with the material being mixed by turbulence from near the fusing regions up to the surface of the star. For all but the lowest-mass stars, the fused material has remained deep in the stellar interior prior to this point, so the convecting envelope makes fusion products visible at the star's surface for the first time. At this stage of evolution, the results are subtle, with the largest effects, alterations to the isotopes of hydrogen and helium, being unobservable. The effects of the CNO cycle appear at the surface during the first dredge-up, with lower 12C/13C ratios and altered proportions of carbon and nitrogen. These are detectable with spectroscopy and have been measured for many evolved stars.
The helium core continues to grow on the red-giant branch. It is no longer in thermal equilibrium, either degenerate or above the Schönberg–Chandrasekhar limit, so it increases in temperature which causes the rate of fusion in the hydrogen shell to increase. The star increases in luminosity towards the tip of the red-giant branch. Red-giant-branch stars with a degenerate helium core all reach the tip with very similar core masses and very similar luminosities, although the more massive of the red giants become hot enough to ignite helium fusion before that point.
Horizontal branch
In the helium cores of stars in the 0.6 to 2.0 solar mass range, which are largely supported by electron degeneracy pressure, helium fusion will ignite on a timescale of days in a helium flash. In the nondegenerate cores of more massive stars, the ignition of helium fusion occurs relatively slowly with no flash. The nuclear power released during the helium flash is very large, on the order of 10^8 times the luminosity of the Sun for a few days and 10^11 times the luminosity of the Sun (roughly the luminosity of the Milky Way Galaxy) for a few seconds. However, the energy is consumed by the thermal expansion of the initially degenerate core and thus cannot be seen from outside the star. Due to the expansion of the core, the hydrogen fusion in the overlying layers slows and total energy generation decreases. The star contracts, although not all the way to the main sequence, and it migrates to the horizontal branch on the Hertzsprung–Russell diagram, gradually shrinking in radius and increasing its surface temperature.
Core helium flash stars evolve to the red end of the horizontal branch but do not migrate to higher temperatures before they gain a degenerate carbon-oxygen core and start helium shell burning. These stars are often observed as a red clump of stars in the colour-magnitude diagram of a cluster, hotter and less luminous than the red giants. Higher-mass stars with larger helium cores move along the horizontal branch to higher temperatures, some becoming unstable pulsating stars in the yellow instability strip (RR Lyrae variables), whereas some become even hotter and can form a blue tail or blue hook to the horizontal branch. The morphology of the horizontal branch depends on parameters such as metallicity, age, and helium content, but the exact details are still being modelled.
Asymptotic-giant-branch phase
After a star has consumed the helium at the core, hydrogen and helium fusion continues in shells around a hot core of carbon and oxygen. The star follows the asymptotic giant branch on the Hertzsprung–Russell diagram, paralleling the original red-giant evolution, but with even faster energy generation (which lasts for a shorter time). Although helium is being burnt in a shell, the majority of the energy is produced by hydrogen burning in a shell further from the core of the star. Helium from these hydrogen burning shells drops towards the center of the star and periodically the energy output from the helium shell increases dramatically. This is known as a thermal pulse and they occur towards the end of the asymptotic-giant-branch phase, sometimes even into the post-asymptotic-giant-branch phase. Depending on mass and composition, there may be several to hundreds of thermal pulses.
There is a phase on the ascent of the asymptotic-giant-branch where a deep convective zone forms and can bring carbon from the core to the surface. This is known as the second dredge-up, and in some stars there may even be a third dredge-up. In this way a carbon star is formed: a very cool and strongly reddened star showing strong carbon lines in its spectrum. A process known as hot bottom burning may convert carbon into oxygen and nitrogen before it can be dredged to the surface, and the interaction between these processes determines the observed luminosities and spectra of carbon stars in particular clusters.
Another well-known class of asymptotic-giant-branch stars is the Mira variables, which pulsate with well-defined periods of tens to hundreds of days and large amplitudes up to about 10 magnitudes (in the visual; total luminosity changes by a much smaller amount). In more massive stars the pulsation periods are longer and the stars become more luminous, leading to enhanced mass loss, and the stars become heavily obscured at visual wavelengths. These stars can be observed as OH/IR stars, pulsating in the infrared and showing OH maser activity. These stars are clearly oxygen-rich, in contrast to the carbon stars, but both must be produced by dredge-ups.
Post-AGB
These mid-range stars ultimately reach the tip of the asymptotic-giant-branch and run out of fuel for shell burning. They are not sufficiently massive to start full-scale carbon fusion, so they contract again, going through a period of post-asymptotic-giant-branch superwind to produce a planetary nebula with an extremely hot central star. The central star then cools to a white dwarf. The expelled gas is relatively rich in heavy elements created within the star and may be particularly oxygen or carbon enriched, depending on the type of the star. The gas builds up in an expanding shell called a circumstellar envelope and cools as it moves away from the star, allowing dust particles and molecules to form. With the high infrared energy input from the central star, ideal conditions are formed in these circumstellar envelopes for maser excitation.
It is possible for thermal pulses to be produced once post-asymptotic-giant-branch evolution has begun, producing a variety of unusual and poorly understood stars known as born-again asymptotic-giant-branch stars. These may result in extreme horizontal-branch stars (subdwarf B stars), hydrogen deficient post-asymptotic-giant-branch stars, variable planetary nebula central stars, and R Coronae Borealis variables.
Massive stars
In massive stars, the core is already large enough at the onset of the hydrogen burning shell that helium ignition will occur before electron degeneracy pressure has a chance to become prevalent. Thus, when these stars expand and cool, they do not brighten as dramatically as lower-mass stars; however, they were more luminous on the main sequence and they evolve to highly luminous supergiants. Their cores become massive enough that they cannot support themselves by electron degeneracy and will eventually collapse to produce a neutron star or black hole.
Supergiant evolution
Extremely massive stars (more than approximately ), which are very luminous and thus have very rapid stellar winds, lose mass so rapidly due to radiation pressure that they tend to strip off their own envelopes before they can expand to become red supergiants, and thus retain extremely high surface temperatures (and blue-white color) from their main-sequence time onwards. The largest stars of the current generation are about because the outer layers would be expelled by the extreme radiation. Although lower-mass stars normally do not burn off their outer layers so rapidly, they can likewise avoid becoming red giants or red supergiants if they are in binary systems close enough so that the companion star strips off the envelope as it expands, or if they rotate rapidly enough so that convection extends all the way from the core to the surface, resulting in the absence of a separate core and envelope due to thorough mixing.
The core of a massive star, defined as the region depleted of hydrogen, grows hotter and denser as it accretes material from the fusion of hydrogen outside the core. In sufficiently massive stars, the core reaches temperatures and densities high enough to fuse carbon and heavier elements via the alpha process. At the end of helium fusion, the core of a star consists primarily of carbon and oxygen. In stars heavier than about , the carbon ignites and fuses to form neon, sodium, and magnesium. Stars somewhat less massive may partially ignite carbon, but they are unable to fully fuse the carbon before electron degeneracy sets in, and these stars will eventually leave an oxygen-neon-magnesium white dwarf.
The exact mass limit for full carbon burning depends on several factors such as metallicity and the detailed mass lost on the asymptotic giant branch, but is approximately . After carbon burning is complete, the core of these stars reaches about and becomes hot enough for heavier elements to fuse. Before oxygen starts to fuse, neon begins to capture electrons which triggers neon burning. For a range of stars of approximately , this process is unstable and creates runaway fusion resulting in an electron capture supernova.
In more massive stars, the fusion of neon proceeds without a runaway deflagration. This is followed in turn by complete oxygen burning and silicon burning, producing a core consisting largely of iron-peak elements. Surrounding the core are shells of lighter elements still undergoing fusion. The timescale for complete fusion of a carbon core to an iron core is so short, just a few hundred years, that the outer layers of the star are unable to react and the appearance of the star is largely unchanged. The iron core grows until it reaches an effective Chandrasekhar mass, higher than the formal Chandrasekhar mass due to various corrections for the relativistic effects, entropy, charge, and the surrounding envelope. The effective Chandrasekhar mass for an iron core varies from about in the least massive red supergiants to more than in more massive stars. Once this mass is reached, electrons begin to be captured into the iron-peak nuclei and the core becomes unable to support itself. The core collapses and the star is destroyed, either in a supernova or direct collapse to a black hole.
Supernova
When the core of a massive star collapses, it will form a neutron star, or in the case of cores that exceed the Tolman–Oppenheimer–Volkoff limit, a black hole. Through a process that is not completely understood, some of the gravitational potential energy released by this core collapse is converted into a Type Ib, Type Ic, or Type II supernova. It is known that the core collapse produces a massive surge of neutrinos, as observed with supernova SN 1987A. The extremely energetic neutrinos fragment some nuclei; some of their energy is consumed in releasing nucleons, including neutrons, and some of their energy is transformed into heat and kinetic energy, thus augmenting the shock wave started by rebound of some of the infalling material from the collapse of the core. Electron capture in very dense parts of the infalling matter may produce additional neutrons. Because some of the rebounding matter is bombarded by the neutrons, some of its nuclei capture them, creating a spectrum of heavier-than-iron material including the radioactive elements up to (and likely beyond) uranium. Although non-exploding red giants can produce significant quantities of elements heavier than iron using neutrons released in side reactions of earlier nuclear reactions, the abundance of elements heavier than iron (and in particular, of certain isotopes of elements that have multiple stable or long-lived isotopes) produced in such reactions is quite different from that produced in a supernova. Neither abundance alone matches that found in the Solar System, so both supernovae and ejection of elements from red giants are required to explain the observed abundance of heavy elements and isotopes thereof.
The energy transferred from collapse of the core to rebounding material not only generates heavy elements, but provides for their acceleration well beyond escape velocity, thus causing a Type Ib, Type Ic, or Type II supernova. Current understanding of this energy transfer is still not satisfactory; although current computer models of Type Ib, Type Ic, and Type II supernovae account for part of the energy transfer, they are not able to account for enough energy transfer to produce the observed ejection of material. However, neutrino oscillations may play an important role in the energy transfer problem as they not only affect the energy available in a particular flavour of neutrinos but also through other general-relativistic effects on neutrinos.
Some evidence gained from analysis of the mass and orbital parameters of binary neutron stars (which require two such supernovae) hints that the collapse of an oxygen-neon-magnesium core may produce a supernova that differs observably (in ways other than size) from a supernova produced by the collapse of an iron core.
The most massive stars that exist today may be completely destroyed by a supernova with an energy greatly exceeding its gravitational binding energy. This rare event, caused by pair-instability, leaves behind no black hole remnant. In the past history of the universe, some stars were even larger than the largest that exists today, and they would immediately collapse into a black hole at the end of their lives, due to photodisintegration.
Stellar remnants
After a star has burned out its fuel supply, its remnants can take one of three forms, depending on the mass during its lifetime.
White and black dwarfs
For a star of , the resulting white dwarf is of about , compressed into approximately the volume of the Earth. White dwarfs are stable because the inward pull of gravity is balanced by the degeneracy pressure of the star's electrons, a consequence of the Pauli exclusion principle. Electron degeneracy pressure provides a rather soft limit against further compression; therefore, for a given chemical composition, white dwarfs of higher mass have a smaller volume. With no fuel left to burn, the star radiates its remaining heat into space for billions of years.
A white dwarf is very hot when it first forms, more than 100,000 K at the surface and even hotter in its interior. It is so hot that a lot of its energy is lost in the form of neutrinos for the first 10 million years of its existence, and it will have lost most of its energy after a billion years.
The chemical composition of the white dwarf depends upon its mass. A star that has a mass of about 8–12 solar masses will ignite carbon fusion to form magnesium, neon, and smaller amounts of other elements, resulting in a white dwarf composed chiefly of oxygen, neon, and magnesium, provided that it can lose enough mass to get below the Chandrasekhar limit (see below), and provided that the ignition of carbon is not so violent as to blow the star apart in a supernova. A star of mass on the order of magnitude of the Sun will be unable to ignite carbon fusion, and will produce a white dwarf composed chiefly of carbon and oxygen, and of mass too low to collapse unless matter is added to it later (see below). A star of less than about half the mass of the Sun will be unable to ignite helium fusion (as noted earlier), and will produce a white dwarf composed chiefly of helium.
In the end, all that remains is a cold dark mass sometimes called a black dwarf. However, the universe is not old enough for any black dwarfs to exist yet.
If the white dwarf's mass increases above the Chandrasekhar limit, which is for a white dwarf composed chiefly of carbon, oxygen, neon, and/or magnesium, then electron degeneracy pressure fails due to electron capture and the star collapses. Depending upon the chemical composition and pre-collapse temperature in the center, this will lead either to collapse into a neutron star or runaway ignition of carbon and oxygen. Heavier elements favor continued core collapse, because they require a higher temperature to ignite and because electron capture onto these elements and their fusion products is easier; higher core temperatures favor runaway nuclear reaction, which halts core collapse and leads to a Type Ia supernova. These supernovae may be many times brighter than the Type II supernova marking the death of a massive star, even though the latter has the greater total energy release. This instability to collapse means that no white dwarf more massive than approximately can exist (with a possible minor exception for very rapidly spinning white dwarfs, whose centrifugal force due to rotation partially counteracts the weight of their matter). Mass transfer in a binary system may cause an initially stable white dwarf to surpass the Chandrasekhar limit.
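For reference, the limit invoked here has a standard textbook expression (quoted for context, not taken from this article); a commonly cited form is

\[
M_{\mathrm{Ch}} \;=\; \frac{\omega_3^0 \sqrt{3\pi}}{2}\left(\frac{\hbar c}{G}\right)^{3/2}\frac{1}{(\mu_e m_{\mathrm{H}})^2}\;\approx\; 1.4\left(\frac{2}{\mu_e}\right)^{2} M_\odot ,
\]

where μe is the mean molecular weight per electron (μe ≈ 2 for carbon–oxygen or oxygen–neon–magnesium compositions), giving the familiar figure of roughly 1.4 solar masses.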
If a white dwarf forms a close binary system with another star, hydrogen from the larger companion may accrete around and onto a white dwarf until it gets hot enough to fuse in a runaway reaction at its surface, although the white dwarf remains below the Chandrasekhar limit. Such an explosion is termed a nova.
Neutron stars
Ordinarily, atoms are mostly electron clouds by volume, with very compact nuclei at the center (proportionally, if atoms were the size of a football stadium, their nuclei would be the size of dust mites). When a stellar core collapses, the pressure causes electrons and protons to fuse by electron capture. Without electrons, which keep nuclei apart, the neutrons collapse into a dense ball (in some ways like a giant atomic nucleus), with a thin overlying layer of degenerate matter (chiefly iron unless matter of different composition is added later). The neutrons resist further compression by the Pauli exclusion principle, in a way analogous to electron degeneracy pressure, but stronger.
These stars, known as neutron stars, are extremely small—on the order of radius 10 km, no bigger than the size of a large city—and are phenomenally dense. Their period of rotation shortens dramatically as the stars shrink (due to conservation of angular momentum); observed rotational periods of neutron stars range from about 1.5 milliseconds (over 600 revolutions per second) to several seconds. When these rapidly rotating stars' magnetic poles are aligned with the Earth, we detect a pulse of radiation each revolution. Such neutron stars are called pulsars, and were the first neutron stars to be discovered. Though electromagnetic radiation detected from pulsars is most often in the form of radio waves, pulsars have also been detected at visible, X-ray, and gamma ray wavelengths.
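The shortening of the rotation period mentioned above follows from conservation of angular momentum; as an order-of-magnitude sketch with assumed round numbers (not figures from this article),

\[
I\omega \approx \text{const} \;\Rightarrow\; \frac{P_{\mathrm{NS}}}{P_{\mathrm{core}}} \sim \left(\frac{R_{\mathrm{NS}}}{R_{\mathrm{core}}}\right)^{2},
\]

so a core a few thousand kilometres in radius that collapses to about 10 km spins up by a factor of order 10⁵, turning a rotation period of hours into one of tens of milliseconds to seconds, consistent with the observed pulsar range.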
Black holes
If the mass of the stellar remnant is high enough, the neutron degeneracy pressure will be insufficient to prevent collapse below the Schwarzschild radius. The stellar remnant thus becomes a black hole. The mass at which this occurs is not known with certainty, but is currently estimated at between 2 and .
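The radius referred to here is given by the standard general-relativistic expression (quoted for context, not from this article):

\[
r_s = \frac{2GM}{c^2} \approx 2.95\ \mathrm{km}\times\frac{M}{M_\odot},
\]

so a remnant of about 3 solar masses has a Schwarzschild radius of roughly 9 km, comparable to the radius of a neutron star, which is why remnants above this mass range cannot be supported against collapse.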
Black holes are predicted by the theory of general relativity. According to classical general relativity, no matter or information can flow from the interior of a black hole to an outside observer, although quantum effects may allow deviations from this strict rule. The existence of black holes in the universe is well supported, both theoretically and by astronomical observation.
Because the core-collapse mechanism of a supernova is, at present, only partially understood, it is still not known whether it is possible for a star to collapse directly to a black hole without producing a visible supernova, or whether some supernovae initially form unstable neutron stars which then collapse into black holes; the exact relation between the initial mass of the star and the final remnant is also not completely certain. Resolution of these uncertainties requires the analysis of more supernovae and supernova remnants.
Models
A stellar evolutionary model is a mathematical model that can be used to compute the evolutionary phases of a star from its formation until it becomes a remnant. The mass and chemical composition of the star are used as the inputs, and the luminosity and surface temperature are the only constraints. The model formulae are based upon the physical understanding of the star, usually under the assumption of hydrostatic equilibrium. Extensive computer calculations are then run to determine the changing state of the star over time, yielding a table of data that can be used to determine the evolutionary track of the star across the Hertzsprung–Russell diagram, along with other evolving properties. Accurate models can be used to estimate the current age of a star by comparing its physical properties with those of stars along a matching evolutionary track.
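The comparison of observed properties with model tracks described above can be illustrated with a minimal sketch; the track values, column layout, and function below are illustrative assumptions, not the output of any real stellar-evolution code.

# Minimal sketch: estimate a star's age by comparing its observed effective
# temperature and luminosity with a pre-computed evolutionary track for a star
# of the same mass and composition. The track numbers are purely illustrative.
import numpy as np

# Hypothetical track for a 1.0 solar-mass, solar-composition star:
# columns are age [Gyr], effective temperature [K], luminosity [L_sun].
track = np.array([
    [0.1, 5660, 0.70],
    [2.0, 5720, 0.85],
    [4.6, 5770, 1.00],
    [7.0, 5820, 1.25],
    [9.0, 5850, 1.60],
])

def estimate_age(teff_obs, lum_obs, teff_err=50.0, lum_err=0.05):
    """Return the track age that minimises a simple chi-square misfit."""
    chi2 = ((track[:, 1] - teff_obs) / teff_err) ** 2 \
         + ((track[:, 2] - lum_obs) / lum_err) ** 2
    return track[np.argmin(chi2), 0]

print(estimate_age(5770, 1.0))  # prints 4.6, the Sun-like point on this toy track

In practice a dense grid of tracks spanning many masses and compositions is interpolated, but the principle, matching observed surface properties to the nearest point on a model track, is the same.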
Strong interaction
In nuclear physics and particle physics, the strong interaction, also called the strong force or strong nuclear force, is a fundamental interaction that confines quarks into protons, neutrons, and other hadron particles. The strong interaction also binds neutrons and protons to create atomic nuclei, where it is called the nuclear force.
Most of the mass of a proton or neutron is the result of the strong interaction energy; the individual quarks provide only about 1% of the mass of a proton. At the range of 10⁻¹⁵ m (1 femtometer, slightly more than the radius of a nucleon), the strong force is approximately 100 times as strong as electromagnetism, 10⁶ times as strong as the weak interaction, and 10³⁸ times as strong as gravitation.
In the context of atomic nuclei, the force binds protons and neutrons together to form a nucleus and is called the nuclear force (or residual strong force). Because the force is mediated by massive, short-lived mesons on this scale, the residual strong interaction obeys a distance-dependent behavior between nucleons that is quite different from when it is acting to bind quarks within hadrons. There are also differences in the binding energies of the nuclear force with regard to nuclear fusion versus nuclear fission. Nuclear fusion accounts for most energy production in the Sun and other stars. Nuclear fission allows for decay of radioactive elements and isotopes, although it is often mediated by the weak interaction. Artificially, the energy associated with the nuclear force is partially released in nuclear power and nuclear weapons, both in uranium- or plutonium-based fission weapons and in fusion weapons like the hydrogen bomb.
History
Before 1971, physicists were uncertain as to how the atomic nucleus was bound together. It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge, while neutrons were electrically neutral. By the understanding of physics at that time, positive charges would repel one another and the positively charged protons should cause the nucleus to fly apart. However, this was never observed. New physics was needed to explain this phenomenon.
A stronger attractive force was postulated to explain how the atomic nucleus was bound despite the protons' mutual electromagnetic repulsion. This hypothesized force was called the strong force, which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus.
In 1964, Murray Gell-Mann, and separately George Zweig, proposed that baryons, which include protons and neutrons, and mesons were composed of elementary particles. Zweig called the elementary particles "aces" while Gell-Mann called them "quarks"; the theory came to be called the quark model. The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together into protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge, although it has no relation to visible color. Quarks with unlike color charge attract one another as a result of the strong interaction, and the particle that mediates this was called the gluon.
Behavior of the strong interaction
The strong interaction is observable at two ranges, and mediated by different force carriers in each one. On a scale less than about 0.8 fm (roughly the radius of a nucleon), the force is carried by gluons and holds quarks together to form protons, neutrons, and other hadrons. On a larger scale, up to about 3 fm, the force is carried by mesons and binds nucleons (protons and neutrons) together to form the nucleus of an atom. In the former context, it is often known as the color force, and is so strong that if hadrons are struck by high-energy particles, they produce jets of massive particles instead of emitting their constituents (quarks and gluons) as freely moving particles. This property of the strong force is called color confinement.
Within hadrons
The word strong is used since the strong interaction is the "strongest" of the four fundamental forces. At a distance of 10⁻¹⁵ m, its strength is around 100 times that of the electromagnetic force, some 10⁶ times as great as that of the weak force, and about 10³⁸ times that of gravitation.
The strong force is described by quantum chromodynamics (QCD), a part of the Standard Model of particle physics. Mathematically, QCD is a non-abelian gauge theory based on a local (gauge) symmetry group called SU(3).
The force carrier particle of the strong interaction is the gluon, a massless gauge boson. Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge. Color charge is analogous to electromagnetic charge, but it comes in three types (±red, ±green, and ±blue) rather than one, which results in different rules of behavior. These rules are described by quantum chromodynamics (QCD), the theory of quark–gluon interactions.
Unlike the photon in electromagnetism, which is neutral, the gluon carries a color charge. Quarks and gluons are the only fundamental particles that carry non-vanishing color charge, and hence they participate in strong interactions only with each other. The strong force is the expression of the gluon interaction with other quark and gluon particles.
All quarks and gluons in QCD interact with each other through the strong force. The strength of interaction is parameterized by the strong coupling constant. This strength is modified by the gauge color charge of the particle, a group-theoretical property.
The strong force acts between quarks. Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance between pairs of quarks. After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about , no matter how much farther the distance between the quarks. As the separation between the quarks grows, the energy added to the pair creates new pairs of matching quarks between the original two; hence it is impossible to isolate quarks. The explanation is that the amount of work done against a force of is enough to create particle–antiparticle pairs within a very short distance. The energy added to the system by pulling two quarks apart would create a pair of new quarks that will pair up with the original ones. In QCD, this phenomenon is called color confinement; as a result only hadrons, not individual free quarks, can be observed. The failure of all experiments that have searched for free quarks is considered to be evidence of this phenomenon.
The elementary quark and gluon particles involved in a high energy collision are not directly observable. The interaction produces jets of newly created hadrons that are observable. Those hadrons are created, as a manifestation of mass–energy equivalence, when sufficient energy is deposited into a quark–quark bond, as when a quark in one proton is struck by a very fast quark of another impacting proton during a particle accelerator experiment. However, quark–gluon plasmas have been observed.
Between hadrons
While color confinement implies that the strong force acts without distance-diminishment between pairs of quarks in compact collections of bound quarks (hadrons), at distances approaching or greater than the radius of a proton, a residual force (described below) remains. It manifests as a force between the "colorless" hadrons, and is known as the nuclear force or residual strong force (and historically as the strong nuclear force).
The nuclear force acts between hadrons, known as mesons and baryons. This "residual strong force", acting indirectly, transmits gluons that form part of the virtual π and ρ mesons, which, in turn, transmit the force between nucleons that holds the nucleus (beyond hydrogen-1 nucleus) together.
The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms.
Unlike the strong force, the residual strong force diminishes with distance, and does so rapidly. The decrease is approximately as a negative exponential power of distance, though there is no simple expression known for this; see Yukawa potential. The rapid decrease with distance of the attractive residual force and the less rapid decrease of the repulsive electromagnetic force acting between protons within a nucleus, causes the instability of larger atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead).
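The exponential fall-off mentioned here is conventionally modelled by the Yukawa potential for meson exchange; a standard form (quoted for context, not from this article) is

\[
V(r) = -g^{2}\,\frac{e^{-r/r_0}}{r}, \qquad r_0 = \frac{\hbar}{m_\pi c} \approx 1.4\ \mathrm{fm},
\]

so the attraction is already strongly suppressed at separations of a few femtometres, in contrast to the electrostatic 1/r potential between protons, which has no such cutoff.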
Although the nuclear force is weaker than the strong interaction itself, it is still highly energetic: transitions produce gamma rays. The mass of a nucleus is significantly different from the summed masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission.
Unification
The so-called Grand Unified Theories (GUT) aim to describe the strong interaction and the electroweak interaction as aspects of a single force, similarly to how the electromagnetic and weak interactions were unified by the Glashow–Weinberg–Salam model into electroweak interaction. The strong interaction has a property called asymptotic freedom, wherein the strength of the strong force diminishes at higher energies (or temperatures). The theorized energy where its strength becomes equal to the electroweak interaction is the grand unification energy. However, no Grand Unified Theory has yet been successfully formulated to describe this process, and Grand Unification remains an unsolved problem in physics.
If GUT is correct, after the Big Bang and during the electroweak epoch of the universe, the electroweak force separated from the strong force. Accordingly, a grand unification epoch is hypothesized to have existed prior to this.
Simple machine
A simple machine is a mechanical device that changes the direction or magnitude of a force. In general, they can be defined as the simplest mechanisms that use mechanical advantage (also called leverage) to multiply force. Usually the term refers to the six classical simple machines that were defined by Renaissance scientists:
Lever
Wheel and axle
Pulley
Inclined plane
Wedge
Screw
A simple machine uses a single applied force to do work against a single load force. Ignoring friction losses, the work done on the load is equal to the work done by the applied force. The machine can increase the amount of the output force, at the cost of a proportional decrease in the distance moved by the load. The ratio of the output to the applied force is called the mechanical advantage.
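As a concrete illustration of this trade-off (with numbers chosen for the example, not taken from the text): for an ideal, frictionless machine

\[
\mathrm{MA} = \frac{F_{\mathrm{out}}}{F_{\mathrm{in}}}, \qquad F_{\mathrm{in}}\,d_{\mathrm{in}} = F_{\mathrm{out}}\,d_{\mathrm{out}},
\]

so a lever with MA = 3 lets a 50 N effort hold a 150 N load, but the effort must move three times as far as the load: moving the input 0.3 m raises the load only 0.1 m, and the work on both sides is the same 15 J.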
Simple machines can be regarded as the elementary "building blocks" of which all more complicated machines (sometimes called "compound machines") are composed. For example, wheels, levers, and pulleys are all used in the mechanism of a bicycle. The mechanical advantage of a compound machine is just the product of the mechanical advantages of the simple machines of which it is composed.
Although they continue to be of great importance in mechanics and applied science, modern mechanics has moved beyond the view of the simple machines as the ultimate building blocks of which all machines are composed, which arose in the Renaissance as a neoclassical amplification of ancient Greek texts. The great variety and sophistication of modern machine linkages, which arose during the Industrial Revolution, is inadequately described by these six simple categories. Various post-Renaissance authors have compiled expanded lists of "simple machines", often using terms like basic machines, compound machines, or machine elements to distinguish them from the classical simple machines above. By the late 1800s, Franz Reuleaux had identified hundreds of machine elements, calling them simple machines. Modern machine theory analyzes machines as kinematic chains composed of elementary linkages called kinematic pairs.
History
The idea of a simple machine originated with the Greek philosopher Archimedes around the 3rd century BC, who studied the Archimedean simple machines: lever, pulley, and screw. He discovered the principle of mechanical advantage in the lever. Archimedes' famous remark with regard to the lever, "Give me a place to stand on, and I will move the Earth," expresses his realization that there was no limit to the amount of force amplification that could be achieved by using mechanical advantage. Later Greek philosophers defined the classic five simple machines (excluding the inclined plane) and were able to calculate their (ideal) mechanical advantage. For example, Heron of Alexandria (–75 AD) in his work Mechanics lists five mechanisms that can "set a load in motion": lever, windlass, pulley, wedge, and screw, and describes their fabrication and uses. However the Greeks' understanding was limited to the statics of simple machines (the balance of forces), and did not include dynamics, the tradeoff between force and distance, or the concept of work.
During the Renaissance the dynamics of the mechanical powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. In 1586 Flemish engineer Simon Stevin derived the mechanical advantage of the inclined plane, and it was included with the other simple machines. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in On Mechanics, in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it.
The classic rules of sliding friction in machines were discovered by Leonardo da Vinci (1452–1519), but were unpublished and merely documented in his notebooks, and were based on pre-Newtonian science such as believing friction was an ethereal fluid. They were rediscovered by Guillaume Amontons (1699) and were further developed by Charles-Augustin de Coulomb (1785).
Ideal simple machine
If a simple machine does not dissipate energy through friction, wear or deformation, then energy is conserved and it is called an ideal simple machine. In this case, the power into the machine equals the power out, and the mechanical advantage can be calculated from its geometric dimensions.
Although each machine works differently mechanically, the way they function is similar mathematically. In each machine, a force is applied to the device at one point, and it does work moving a load at another point. Although some machines only change the direction of the force, such as a stationary pulley, most machines multiply the magnitude of the force by a factor, the mechanical advantage
that can be calculated from the machine's geometry and friction.
Simple machines do not contain a source of energy, so they cannot do more work than they receive from the input force. A simple machine with no friction or elasticity is called an ideal machine. Due to conservation of energy, in an ideal simple machine, the power output (rate of energy output) at any time is equal to the power input
The power output equals the velocity of the load multiplied by the load force . Similarly the power input from the applied force is equal to the velocity of the input point multiplied by the applied force . Therefore,
So the mechanical advantage of an ideal machine is equal to the velocity ratio, the ratio of input velocity to output velocity
The velocity ratio is also equal to the ratio of the distances covered in any given period of time
Therefore, the mechanical advantage of an ideal machine is also equal to the distance ratio, the ratio of input distance moved to output distance moved
This can be calculated from the geometry of the machine. For example, the mechanical advantage and distance ratio of the lever is equal to the ratio of its lever arms.
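The display equations in this passage have not survived extraction; a hedged reconstruction of the relations the surrounding sentences describe, with symbols chosen here for clarity, is

\[
P_{\mathrm{out}} = P_{\mathrm{in}}, \qquad F_B v_B = F_A v_A, \qquad
\mathrm{MA} = \frac{F_B}{F_A} = \frac{v_A}{v_B} = \frac{d_A}{d_B} = \frac{a}{b},
\]

where F_A, v_A and d_A are the applied force, its velocity and the distance it moves, F_B, v_B and d_B are the corresponding quantities for the load, and a/b is the ratio of the lever's input and output arm lengths.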
The mechanical advantage can be greater or less than one:
If , the output force is greater than the input, the machine acts as a force amplifier, but the distance moved by the load is less than the distance moved by the input force .
If , the output force is less than the input, but the distance moved by the load is greater than the distance moved by the input force.
In the screw, which uses rotational motion, the input force should be replaced by the torque, and the velocity by the angular velocity the shaft is turned.
Friction and efficiency
All real machines have friction, which causes some of the input power to be dissipated as heat. If is the power lost to friction, from conservation of energy
The mechanical efficiency of a machine (where ) is defined as the ratio of power out to the power in, and is a measure of the frictional energy losses
As above, the power is equal to the product of force and velocity, so
Therefore,
So in non-ideal machines, the mechanical advantage is always less than the velocity ratio by the product with the efficiency . So a machine that includes friction will not be able to move as large a load as a corresponding ideal machine using the same input force.
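A hedged reconstruction of the friction relations sketched in this subsection, using the same illustrative symbols as above, is

\[
P_{\mathrm{in}} = P_{\mathrm{out}} + P_{\mathrm{fric}}, \qquad
\eta = \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}, \qquad
\mathrm{MA} = \frac{F_B}{F_A} = \eta\,\frac{v_A}{v_B},
\]

so a real machine delivers only the fraction η of the mechanical advantage implied by its velocity (or distance) ratio.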
Compound machines
A compound machine is a machine formed from a set of simple machines connected in series with the output force of one providing the input force to the next. For example, a bench vise consists of a lever (the vise's handle) in series with a screw, and a simple gear train consists of a number of gears (wheels and axles) connected in series.
The mechanical advantage of a compound machine is the ratio of the output force exerted by the last machine in the series divided by the input force applied to the first machine, that is
Because the output force of each machine is the input of the next, , this mechanical advantage is also given by
Thus, the mechanical advantage of the compound machine is equal to the product of the mechanical advantages of the series of simple machines that form it
Similarly, the efficiency of a compound machine is also the product of the efficiencies of the series of simple machines that form it
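The product rules stated here are easy to verify numerically; the following sketch uses made-up stage values purely for illustration.

# Compound machine: overall mechanical advantage and efficiency are the
# products of the corresponding values of the simple machines in series.
stages = [          # (ideal mechanical advantage, efficiency) for each stage
    (4.0, 0.90),    # e.g. a lever
    (8.0, 0.75),    # e.g. a screw
]

ideal_ma = 1.0
efficiency = 1.0
for ma, eta in stages:
    ideal_ma *= ma
    efficiency *= eta

actual_ma = ideal_ma * efficiency
print(ideal_ma, efficiency, actual_ma)  # 32.0 0.675 21.6

So an input force of 100 N would ideally yield 3,200 N at the output, but with these stage efficiencies the actual output force is about 2,160 N.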
Self-locking machines
In many simple machines, if the load force on the machine is high enough in relation to the input force , the machine will move backwards, with the load force doing work on the input force. So these machines can be used in either direction, with the driving force applied to either input point. For example, if the load force on a lever is high enough, the lever will move backwards, moving the input arm backwards against the input force. These are called reversible, non-locking or overhauling machines, and the backward motion is called overhauling.
However, in some machines, if the frictional forces are high enough, no amount of load force can move it backwards, even if the input force is zero. This is called a self-locking, nonreversible, or non-overhauling machine. These machines can only be set in motion by a force at the input, and when the input force is removed will remain motionless, "locked" by friction at whatever position they were left.
Self-locking occurs mainly in those machines with large areas of sliding contact between moving parts: the screw, inclined plane, and wedge:
The most common example is a screw. In most screws, one can move the screw forward or backward by turning it, and one can move the nut along the shaft by turning it, but no amount of pushing the screw or the nut will cause either of them to turn.
On an inclined plane, a load can be pulled up the plane by a sideways input force, but if the plane is not too steep and there is enough friction between load and plane, when the input force is removed the load will remain motionless and will not slide down the plane, regardless of its weight.
A wedge can be driven into a block of wood by force on the end, such as from hitting it with a sledge hammer, forcing the sides apart, but no amount of compression force from the wood walls will cause it to pop back out of the block.
A machine will be self-locking if and only if its efficiency is below 50%:
Whether a machine is self-locking depends on both the friction forces (coefficient of static friction) between its parts, and the distance ratio (ideal mechanical advantage). If both the friction and ideal mechanical advantage are high enough, it will self-lock.
Proof
When a machine moves in the forward direction from point 1 to point 2, with the input force doing work on a load force, from conservation of energy the input work is equal to the sum of the work done on the load force and the work lost to friction
If the efficiency is below 50%
From
When the machine moves backward from point 2 to point 1 with the load force doing work on the input force, the work lost to friction is the same
So the output work is
Thus the machine self-locks, because the work dissipated in friction is greater than the work done by the load force moving it backwards even with no input force.
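The display equations of this proof are missing above; a hedged reconstruction of the standard argument, with symbols as in the earlier sketches, is

\[
\text{forward: } F_A d_A = F_B d_B + W_{\mathrm{fric}}, \qquad
\eta = \frac{F_B d_B}{F_A d_A} < \tfrac12 \;\Rightarrow\; W_{\mathrm{fric}} = F_A d_A - F_B d_B > F_B d_B,
\]

so on the return stroke the work available from the load, F_B d_B, is smaller than the friction loss W_fric (assumed equal to the forward loss), the net output work F_B d_B − W_fric is negative, and the machine cannot be driven backwards.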
Modern machine theory
Machines are studied as mechanical systems consisting of actuators and mechanisms that transmit forces and movement, monitored by sensors and controllers. The components of actuators and mechanisms consist of links and joints that form kinematic chains.
Kinematic chains
Simple machines are elementary examples of kinematic chains that are used to model mechanical systems ranging from the steam engine to robot manipulators. The bearings that form the fulcrum of a lever and that allow the wheel and axle and pulleys to rotate are examples of a kinematic pair called a hinged joint. Similarly, the flat surface of an inclined plane and wedge are examples of the kinematic pair called a sliding joint. The screw is usually identified as its own kinematic pair called a helical joint.
Two levers, or cranks, are combined into a planar four-bar linkage by attaching a link that connects the output of one crank to the input of another. Additional links can be attached to form a six-bar linkage or in series to form a robot.
Classification of machines
The identification of simple machines arises from a desire for a systematic method to invent new machines. Therefore, an important concern is how simple machines are combined to make more complex machines. One approach is to attach simple machines in series to obtain compound machines.
However, a more successful strategy was identified by Franz Reuleaux, who collected and studied over 800 elementary machines. He realized that a lever, pulley, and wheel and axle are in essence the same device: a body rotating about a hinge. Similarly, an inclined plane, wedge, and screw are a block sliding on a flat surface.
This realization shows that it is the joints, or the connections that provide movement, that are the primary elements of a machine. Starting with four types of joints, the revolute joint, sliding joint, cam joint and gear joint, and related connections such as cables and belts, it is possible to understand a machine as an assembly of solid parts that connect these joints.
Kinematic synthesis
The design of mechanisms to perform required movement and force transmission is known as kinematic synthesis. This is a collection of geometric techniques for the mechanical design of linkages, cam and follower mechanisms and gears and gear trains.
Sirius
Sirius is the brightest star in the night sky. Its name is derived from a Greek word meaning 'glowing' or 'scorching'. The star is designated α Canis Majoris, Latinized to Alpha Canis Majoris, and abbreviated α CMa or Alpha CMa. With a visual apparent magnitude of −1.46, Sirius is almost twice as bright as Canopus, the next brightest star. Sirius is a binary star consisting of a main-sequence star of spectral type A0 or A1, termed Sirius A, and a faint white dwarf companion of spectral type DA2, termed Sirius B. The distance between the two varies between 8.2 and 31.5 astronomical units as they orbit every 50 years.
Sirius appears bright because of its intrinsic luminosity and its proximity to the Solar System. At a distance of , the Sirius system is one of Earth's nearest neighbours. Sirius is gradually moving closer to the Solar System and it is expected to increase in brightness slightly over the next 60,000 years to reach a peak magnitude of −1.68.
Coincidentally, at about the same time, Sirius will take its turn as the southern Pole Star, around the year 66,270 AD. In that year, Sirius will come to within 1.6 degrees of the south celestial pole. This is due to axial precession and the proper motion of Sirius itself, which moves slowly in the SSW direction, so it will be visible only from the southern hemisphere.
After that time, its distance will begin to increase, and it will become fainter, but it will continue to be the brightest star in the Earth's night sky for approximately the next 210,000 years, at which point Vega, another A-type star that is intrinsically more luminous than Sirius, becomes the brightest star.
Sirius A is about twice as massive as the Sun and has an absolute visual magnitude of +1.43. It is 25 times as luminous as the Sun, but has a significantly lower luminosity than other bright stars such as Canopus, Betelgeuse, or Rigel. The system is between 200 and 300 million years old. It was originally composed of two bright bluish stars. The initially more massive of these, Sirius B, consumed its hydrogen fuel and became a red giant before shedding its outer layers and collapsing into its current state as a white dwarf around 120 million years ago.
Sirius is colloquially known as the "Dog Star", reflecting its prominence in its constellation, Canis Major (the Greater Dog). The heliacal rising of Sirius marked the flooding of the Nile in Ancient Egypt and the "dog days" of summer for the ancient Greeks, while to the Polynesians, mostly in the Southern Hemisphere, the star marked winter and was an important reference for their navigation around the Pacific Ocean.
Observational history
As the brightest star in the night sky, Sirius appears in some of the earliest astronomical records. Its displacement from the ecliptic causes its heliacal rising to be remarkably regular compared to other stars, with a period of almost exactly 365.25 days holding it constant relative to the solar year. This rising occurs at Cairo on 19 July (Julian), placing it just before the onset of the annual flooding of the Nile during antiquity. Owing to the flood's own irregularity, the extreme precision of the star's return made it important to the ancient Egyptians, who worshipped it as the goddess Sopdet ("Triangle"; known to the Greeks as Sō̂this), guarantor of the fertility of their land.
The ancient Greeks observed that the appearance of Sirius as the morning star heralded the hot and dry summer and feared that the star caused plants to wilt, men to weaken, and women to become aroused. Owing to its brightness, Sirius would have been seen to twinkle more in the unsettled weather conditions of early summer. To Greek observers, this signified emanations that caused its malignant influence. Anyone suffering its effects was said to be "star-struck" (, astrobólētos). It was described as "burning" or "flaming" in literature. The season following the star's reappearance came to be known as the "dog days". The inhabitants of the island of Ceos in the Aegean Sea would offer sacrifices to Sirius and Zeus to bring cooling breezes and would await the reappearance of the star in summer. If it rose clear, it would portend good fortune; if it was misty or faint then it foretold (or emanated) pestilence. Coins retrieved from the island from the 3rd century BC feature dogs or stars with emanating rays, highlighting Sirius's importance.
The Romans celebrated the heliacal setting of Sirius around 25 April, sacrificing a dog, along with incense, wine, and a sheep, to the goddess Robigo so that the star's emanations would not cause wheat rust on wheat crops that year.
Bright stars were important to the ancient Polynesians for navigation of the Pacific Ocean. They also served as latitude markers; the declination of Sirius matches the latitude of the archipelago of Fiji at 17°S and thus passes directly over the islands each sidereal day. Sirius served as the body of a "Great Bird" constellation called Manu, with Canopus as the southern wingtip and Procyon the northern wingtip, which divided the Polynesian night sky into two hemispheres. Just as the appearance of Sirius in the morning sky marked summer in Greece, it marked the onset of winter for the Māori, whose name Takurua described both the star and the season. Its culmination at the winter solstice was marked by celebration in Hawaii, where it was known as Ka'ulua, "Queen of Heaven". Many other Polynesian names have been recorded, including Tau-ua in the Marquesas Islands, Rehua in New Zealand, and Ta'urua-fau-papa "Festivity of original high chiefs" and Ta'urua-e-hiti-i-te-tara-te-feiai "Festivity who rises with prayers and religious ceremonies" in Tahiti.
Kinematics
In 1717, Edmond Halley discovered the proper motion of the hitherto presumed fixed stars after comparing contemporary astrometric measurements with those from the second century AD given in Ptolemy's Almagest. The bright stars Aldebaran, Arcturus and Sirius were noted to have moved significantly; Sirius had progressed about 30 arcminutes (about the diameter of the Moon) to the southwest.
In 1868, Sirius became the first star to have its velocity measured, the beginning of the study of celestial radial velocities. Sir William Huggins examined the spectrum of the star and observed a red shift. He concluded that Sirius was receding from the Solar System at about 40 km/s. Compared to the modern value of −5.5 km/s, this was an overestimate and had the wrong sign; the minus sign (−) means that it is approaching the Sun.
Distance
In his 1698 book, Cosmotheoros, Christiaan Huygens estimated the distance to Sirius at 27,664 times the distance from the Earth to the Sun (about 0.437 light-year, translating to a parallax of roughly 7.5 arcseconds). There were several unsuccessful attempts to measure the parallax of Sirius: by Jacques Cassini (6 seconds); by some astronomers (including Nevil Maskelyne) using Lacaille's observations made at the Cape of Good Hope (4 seconds); by Piazzi (the same amount); using Lacaille's observations made at Paris, more numerous and certain than those made at the Cape (no sensible parallax); by Bessel (no sensible parallax).
Scottish astronomer Thomas Henderson used his observations made in 1832–1833 and South African astronomer Thomas Maclear's observations made in 1836–1837 to determine that the value of the parallax was 0.23 arcsecond, and the error of the parallax was estimated not to exceed a quarter of a second, or as Henderson wrote in 1839, "On the whole we may conclude that the parallax of Sirius is not greater than half a second in space; and that it is probably much less." Astronomers adopted a value of 0.25 arcsecond for much of the 19th century. It is now known to have a parallax of nearly .
The Hipparcos parallax for Sirius is only accurate to about light years, giving a distance of . Sirius B is generally assumed to be at the same distance. Sirius B has a Gaia Data Release 3 parallax with a much smaller statistical margin of error, giving a distance of , but it is flagged as having a very large value for astrometric excess noise, which indicates that the parallax value may be unreliable.
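The elided distances in this section follow from the standard parallax relation (the modern parallax figure below is quoted from memory as an assumption, not from this text):

\[
d[\mathrm{pc}] = \frac{1}{p[\mathrm{arcsec}]},
\]

so a parallax of about 0.379 arcseconds corresponds to roughly 2.64 pc, or 8.6 light-years, matching the distance quoted later in the article; conversely, Huygens's estimate of 27,664 au corresponds to a parallax of about 206,265/27,664 ≈ 7.5 arcseconds, as noted above.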
Discovery of Sirius B
In a letter dated 10 August 1844, the German astronomer Friedrich Wilhelm Bessel deduced from changes in the proper motion of Sirius that it had an unseen companion. On 31 January 1862, American telescope-maker and astronomer Alvan Graham Clark first observed the faint companion, which is now called Sirius B. This happened during testing of a great refractor telescope for Dearborn Observatory, which was one of the largest refracting telescope lenses in existence at the time, and the largest telescope in the United States. Sirius B's sighting was confirmed on 8 March with smaller telescopes.
The visible star is now sometimes known as Sirius A. Since 1894, some apparent orbital irregularities in the Sirius system have been observed, suggesting a third very small companion star, but this has never been confirmed. The best fit to the data indicates a six-year orbit around Sirius A and a mass of . This star would be five to ten magnitudes fainter than the white dwarf Sirius B, which would make it difficult to observe. Observations published in 2008 were unable to detect either a third star or a planet. An apparent "third star" observed in the 1920s is now believed to be a background object.
In 1915, Walter Sydney Adams, using a reflector at Mount Wilson Observatory, observed the spectrum of Sirius B and determined that it was a faint whitish star. This led astronomers to conclude that it was a white dwarf—the second to be discovered. The diameter of Sirius A was first measured by Robert Hanbury Brown and Richard Q. Twiss in 1959 at Jodrell Bank using their stellar intensity interferometer. In 2005, using the Hubble Space Telescope, astronomers determined that Sirius B has nearly the diameter of the Earth, , with a mass 102% of the Sun's.
Colour controversy
Around the year 150 AD, Claudius Ptolemy of Alexandria, an ethnic Greek Egyptian astronomer of the Roman period, mapped the stars in Books VII and VIII of his Almagest, in which he used Sirius as the location for the globe's central meridian. He described Sirius as reddish, along with five other stars, Betelgeuse, Antares, Aldebaran, Arcturus, and Pollux, all of which are at present observed to be of orange or red hue. The discrepancy was first noted by amateur astronomer Thomas Barker, squire of Lyndon Hall in Rutland, who prepared a paper and spoke at a meeting of the Royal Society in London in 1760. The existence of other stars changing in brightness gave credibility to the idea that some may change in colour too; Sir John Herschel noted this in 1839, possibly influenced by witnessing Eta Carinae two years earlier. Thomas J.J. See resurrected discussion on red Sirius with the publication of several papers in 1892, and a final summary in 1926. He cited not only Ptolemy but also the poet Aratus, the orator Cicero, and general Germanicus all calling the star red, though acknowledging that none of the latter three authors were astronomers, the last two merely translating Aratus's poem Phaenomena. Seneca had described Sirius as being of a deeper red than Mars. It is therefore possible that the description as red is a poetic metaphor for ill fortune. In 1985, German astronomers Wolfhard Schlosser and Werner Bergmann published an account of an 8th-century Lombardic manuscript, which contains De cursu stellarum ratio by St. Gregory of Tours. The Latin text taught readers how to determine the times of nighttime prayers from positions of the stars, and a bright star described as rubeola ("reddish") was claimed to be Sirius. The authors proposed this as evidence that Sirius B had been a red giant at the time of observation. Other scholars replied that it was likely St. Gregory had been referring to Arcturus.
It is notable that not all ancient observers saw Sirius as red. The 1st-century poet Marcus Manilius described it as "sea-blue", as did the 4th-century Avienius. Furthermore, Sirius was consistently reported as a white star in ancient China: a detailed re-evaluation of Chinese texts from the 2nd century BC up to the 7th century AD concluded that all such reliable sources are consistent with Sirius being white.
Nevertheless, historical accounts referring to Sirius as red are sufficiently extensive to lead researchers to seek possible physical explanations. Proposed theories fall into two categories: intrinsic and extrinsic. Intrinsic theories postulate a real change in the Sirius system over the past two millennia, of which the most widely discussed is the proposal that the white dwarf Sirius B was a red giant as recently as 2000 years ago. Extrinsic theories are concerned with the possibility of transient reddening in an intervening medium through which the star is observed, such as might be caused by dust in the interstellar medium, or by particles in the terrestrial atmosphere.
The possibility that stellar evolution of either Sirius A or Sirius B could be responsible for the discrepancy has been rejected on the grounds that the timescale of thousands of years is orders of magnitude too short and that there is no sign of the nebulosity in the system that would be expected had such a change taken place. Similarly, the presence of a third star sufficiently luminous to affect the visible colour of the system in recent millennia is inconsistent with observational evidence. Intrinsic theories may therefore be disregarded. Extrinsic theories based on reddening by interstellar dust are similarly implausible. A transient dust cloud passing between the Sirius system and an observer on Earth would indeed redden the appearance of the star to some degree, but reddening sufficient to cause it to appear similar in colour to intrinsically red bright stars such as Betelgeuse and Arcturus would also dim the star by several magnitudes, inconsistent with historical accounts: indeed, the dimming would be sufficient to render the colour of the star imperceptible to the human eye without the aid of a telescope.
Extrinsic theories based on optical effects in the Earth's atmosphere are better supported by available evidence. Scintillations caused by atmospheric turbulence result in rapid, transient changes in the apparent colour of the star, especially when observed near the horizon, although with no particular preference for red. However, systematic reddening of the star's light results from absorption and scattering by particles in the atmosphere, exactly analogous to the redness of the Sun at sunrise and sunset. Because the particles that cause reddening in the Earth's atmosphere are different (typically much smaller) than those that cause reddening in the interstellar medium, there is far less dimming of the starlight, and in the case of Sirius the change in colour can be seen without the aid of a telescope. There may be cultural reasons to explain why some ancient observers might have reported the colour of Sirius preferentially when it was situated low in the sky (and therefore apparently red). In several Mediterranean cultures, the local visibility of Sirius at heliacal rising and setting (whether it appeared bright and clear or dimmed) was thought to have astrological significance and was thus subject to systematic observation and intense interest. Thus Sirius, more than any other star, was observed and recorded while close to the horizon. Other contemporary cultures, such as Chinese, lacking this tradition, recorded Sirius only as white.
Observation
With an apparent magnitude of −1.46, Sirius is the brightest star in the night sky, almost twice as bright as the second-brightest star, Canopus. From Earth, Sirius always appears dimmer than Jupiter and Venus, and at certain times also dimmer than Mercury and Mars. Sirius is visible from almost everywhere on Earth, except latitudes north of 73° N, and it does not rise very high when viewed from some northern cities (reaching only 13° above the horizon from Saint Petersburg). Because of its declination of roughly −17°, Sirius is a circumpolar star from latitudes south of 73° S. From the Southern Hemisphere in early July, Sirius can be seen in both the evening where it sets after the Sun and in the morning where it rises before the Sun. Along with Procyon and Betelgeuse, Sirius forms one of the three vertices of the Winter Triangle to observers in the Northern Hemisphere.
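The statement that Sirius is almost twice as bright as Canopus follows from the logarithmic magnitude scale; taking Canopus at about magnitude −0.74 (a commonly quoted value, assumed here rather than taken from this text),

\[
\frac{F_{\mathrm{Sirius}}}{F_{\mathrm{Canopus}}} = 10^{\,0.4\,\left(m_{\mathrm{Canopus}} - m_{\mathrm{Sirius}}\right)} = 10^{\,0.4\,(-0.74 - (-1.46))} = 10^{\,0.288} \approx 1.9 .
\]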
Sirius can be observed in daylight with the naked eye under the right conditions. Ideally, the sky should be very clear, with the observer at a high altitude, the star passing overhead, and the Sun low on the horizon. These conditions are most easily met around sunset in March and April, and around sunrise in September and October. These observing conditions are more easily met in the Southern Hemisphere, owing to the southerly declination of Sirius.
The orbital motion of the Sirius binary system brings the two stars to a minimum angular separation of 3 arcseconds and a maximum of 11 arcseconds. At the closest approach, it is an observational challenge to distinguish the white dwarf from its more luminous companion, requiring a telescope with a sufficiently large aperture and excellent seeing conditions. After a periastron occurred in 1994, the pair moved apart, making them easier to separate with a telescope. Apoastron occurred in 2019, but from the Earth's vantage point, the greatest observational separation occurred in 2023, with an angular separation of 11.333″.
At a distance of 2.6 parsecs (8.6 ly), the Sirius system contains two of the eight nearest stars to the Sun, and it is the fifth closest stellar system to the Sun. This proximity is the main reason for its brightness, as with other nearby stars such as Alpha Centauri, Procyon and Vega, and in contrast to distant, highly luminous supergiants such as Canopus, Rigel or Betelgeuse (although Canopus may be a bright giant). It is still around 25 times more luminous than the Sun. The closest large neighbouring star to Sirius is Procyon, 1.61 parsecs (5.24 ly) away. The Voyager 2 spacecraft, launched in 1977 to study the four giant planets in the Solar System, is expected to pass close to Sirius in approximately 296,000 years.
Stellar system
Sirius is a binary star system consisting of two white stars orbiting each other with a separation of about 20 AU (roughly the distance between the Sun and Uranus) and a period of 50.1 years. The brighter component, termed Sirius A, is a main-sequence star of spectral type early A, with an estimated surface temperature of 9,940 K. Its companion, Sirius B, is a star that has already evolved off the main sequence and become a white dwarf. Currently 10,000 times less luminous in the visual spectrum, Sirius B was once the more massive of the two. The age of the system has been estimated at 230 million years. Early in its life, the system is thought to have consisted of two bluish-white stars orbiting each other in an elliptical orbit every 9.1 years. The system emits a higher than expected level of infrared radiation, as measured by the IRAS space-based observatory. This might be an indication of dust in the system, which is considered somewhat unusual for a binary star. The Chandra X-ray Observatory image shows Sirius B outshining its partner as an X-ray source.
In 2015, Vigan and colleagues used the VLT Survey Telescope to search for evidence of substellar companions, and were able to rule out the presence of giant planets 11 times more massive than Jupiter at 0.5 AU distance from Sirius A, 6–7 times the mass of Jupiter at 1–2 AU distance, and down to around 4 times the mass of Jupiter at 10 AU distance. Similarly, Lucas and colleagues did not detect any companions around Sirius B.
Sirius A
Sirius A, also known as the Dog Star, has a mass about twice that of the Sun. The radius of this star has been measured by an astronomical interferometer, giving an estimated angular diameter of 5.936±0.016 mas. The projected rotational velocity is a relatively low 16 km/s, which does not produce any significant flattening of its disk. This is at marked variance with the similar-sized Vega, which rotates at a much faster 274 km/s and bulges prominently around its equator. A weak magnetic field has been detected on the surface of Sirius A.
Stellar models suggest that the star formed during the collapse of a molecular cloud and that, after 10 million years, its internal energy generation was derived entirely from nuclear reactions. The core became convective and used the CNO cycle for energy generation. It is calculated that Sirius A will have completely exhausted the store of hydrogen at its core within a billion years of its formation, and will then evolve away from the main sequence. It will pass through a red giant stage and eventually become a white dwarf.
Sirius A is classed as an Am star because the spectrum shows deep metallic absorption lines, indicating an enhancement of its surface layers in elements heavier than helium, such as iron. The reported spectral type indicates that it would be classified as A1 from its hydrogen and helium lines, but A0 from the metallic lines that cause it to be grouped with the Am stars. Compared to the Sun, the proportion of iron relative to hydrogen in the atmosphere of Sirius A is about three times higher (316% of the solar value). The high surface content of metallic elements is unlikely to be true of the entire star; rather, the iron-peak and heavy metals are radiatively levitated towards the surface.
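The 316% figure can be read as a consequence of the logarithmic convention used for stellar metallicities. The following Python sketch shows the conversion from a logarithmic [Fe/H] index to a linear abundance ratio; the index value of 0.5 is an assumption chosen only because it reproduces the percentage quoted above.

def linear_abundance_ratio(fe_h_index):
    # [Fe/H] is defined as log10 of the star's iron-to-hydrogen number ratio
    # divided by the same ratio for the Sun, so the linear ratio is 10 ** [Fe/H].
    return 10 ** fe_h_index

# An [Fe/H] of 0.5 (assumed here, consistent with the 316% figure in the text)
print(linear_abundance_ratio(0.5))  # ~3.16, i.e. about 316% of the solar value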
Sirius B
Sirius B, also known as the Pup Star, is one of the most massive white dwarfs known. With a mass of roughly one solar mass, it is almost double the average for a white dwarf. This mass is packed into a volume roughly equal to the Earth's. The current surface temperature is 25,200 K. Because there is no internal heat source, Sirius B will steadily cool as the remaining heat is radiated into space over the next two billion years or so.
A white dwarf forms after a star has evolved from the main sequence and then passed through a red giant stage. This occurred when Sirius B was less than half its current age, around 120 million years ago. The original star was substantially more massive and was a B-type star (most likely B5V) when it was still on the main sequence, probably around 600–1200 times more luminous than the Sun. While it passed through the red giant stage, Sirius B may have enriched the metallicity of its companion, explaining the very high metallicity of Sirius A.
This star is primarily composed of a carbon–oxygen mixture that was generated by helium fusion in the progenitor star. This is overlaid by an envelope of lighter elements, with the materials segregated by mass because of the high surface gravity. The outer atmosphere of Sirius B is now almost pure hydrogen—the element with the lowest mass—and no other elements are seen in its spectrum.
Although Sirius A and B compose a binary system reminiscent of those that can undergo a Type Ia supernova, the two stars are believed to be too far apart for this to occur, even if Sirius A swells into a red giant. Novae, however, may be possible.
Apparent third star
Since 1894, irregularities have been tentatively observed in the orbits of Sirius A and B with an apparent periodicity of 6–6.4 years. A 1995 study concluded that such a companion likely exists, with a mass of roughly 0.05 solar mass—a small red dwarf or large brown dwarf, with an apparent magnitude of more than 15, and less than 3 arcseconds from Sirius A.
In 2017, more accurate astrometric observations by the Hubble Space Telescope ruled out the existence of a stellar-mass Sirius C, while still allowing a substellar candidate such as a low-mass brown dwarf. The 1995 study predicted an astrometric movement of roughly 90 mas (0.09 arcsecond), but Hubble was unable to detect any location anomaly to an accuracy of 5 mas (0.005 arcsec). This ruled out any objects orbiting Sirius A with more than 0.033 solar mass (35 Jupiter masses) in a 0.5-year orbit, and more than 0.014 solar mass (15 Jupiter masses) in a 2-year orbit. The study was also able to rule out any companions to Sirius B with more than 0.024 solar mass (25 Jupiter masses) in a 0.5-year orbit, and more than 0.0095 solar mass (10 Jupiter masses) in a 1.8-year orbit. Effectively, there are almost certainly no additional bodies in the Sirius system larger than a small brown dwarf or large exoplanet.
Star cluster membership
In 1909, Ejnar Hertzsprung was the first to suggest that Sirius was a member of the Ursa Major Moving Group, based on his observations of the system's movements across the sky. The Ursa Major Group is a set of 220 stars that share a common motion through space. The group was once an open cluster, but has since become gravitationally unbound. Analyses in 2003 and 2005 found Sirius's membership in the group to be questionable: the Ursa Major Group has an estimated age of 500 ± 100 million years, whereas Sirius, with a metallicity similar to the Sun's, has an age only half this, making it too young to belong to the group. Sirius may instead be a member of the proposed Sirius Supercluster, along with other scattered stars such as Beta Aurigae, Alpha Coronae Borealis, Beta Crateris, Beta Eridani and Beta Serpentis. This would be one of three large clusters located relatively close to the Sun. The other two are the Hyades and the Pleiades, and each of these clusters consists of hundreds of stars.
Distant star cluster
In 2017, a massive star cluster was discovered only 10 arcminutes from Sirius, making the two appear visually close to one another when viewed from Earth. It was discovered during a statistical analysis of Gaia data. The cluster is over a thousand times farther away from us than the star system, but given its size it still appears at magnitude 8.3.
Etymology
The proper name "Sirius" comes from the Latin Sīrius, from the Ancient Greek (Seirios, "glowing" or "scorcher"). The Greek word itself may have been imported from elsewhere before the Archaic period, one authority suggesting a link with the Egyptian god Osiris. The name's earliest recorded use dates from the 7th century BC in Hesiod's poetic work Works and Days. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Sirius for the star α Canis Majoris A. It is now so entered in the IAU Catalog of Star Names.
Sirius has over 50 other designations and names attached to it. In Geoffrey Chaucer's essay Treatise on the Astrolabe, it bears the name Alhabor and is depicted by a hound's head. This name is widely used on medieval astrolabes from Western Europe. In Sanskrit it is known as Mrgavyadha "deer hunter", or Lubdhaka "hunter". As Mrgavyadha, the star represents Rudra (Shiva). The star is referred to as Makarajyoti in Malayalam and has religious significance to the pilgrim center Sabarimala. In Scandinavia, the star has been known as Lokabrenna ("burning done by Loki", or "Loki's torch"). In the astrology of the Middle Ages, Sirius was a Behenian fixed star, associated with beryl and juniper. Its astrological symbol was listed by Heinrich Cornelius Agrippa.
Cultural significance
Many cultures have historically attached special significance to Sirius, particularly in relation to dogs. It is often colloquially called the "Dog Star" as the brightest star of Canis Major, the "Great Dog" constellation. Canis Major was classically depicted as Orion's dog. The Ancient Greeks thought that Sirius's emanations could affect dogs adversely, making them behave abnormally during the "dog days", the hottest days of the summer. The Romans knew these days as dies caniculares, and the star Sirius was called Canicula, "little dog". The excessive panting of dogs in hot weather was thought to place them at risk of desiccation and disease. In extreme cases, a foaming dog might have rabies, which could infect and kill humans it had bitten. Homer, in the Iliad, likens the approach of Achilles toward Troy to the baleful brightness of Sirius.
In a little-attested Greek myth, the star-god who personified Sirius fell in love with a fertility goddess named Opora, but he was unable to have her. He thus began to burn hot, making humans suffer, and they prayed to the gods. The god of the north wind, Boreas, solved the problem by ordering his sons to deliver Opora to Sirius, while he cooled the earth with blasts of his own cold wind.
In Iranian mythology, especially in Persian mythology and in Zoroastrianism, the ancient religion of Persia, Sirius appears as Tishtrya and is revered as the rain-maker divinity (Tishtar of New Persian poetry). Besides passages in the sacred texts of the Avesta, the Avestan Tishtrya (known as Tir in Middle and New Persian) is also depicted in the Persian epic Shahnameh of Ferdowsi. Owing to the concept of the yazatas, powers which are "worthy of worship", Tishtrya is a divinity of rain and fertility and an antagonist of Apaosha, the demon of drought. In this struggle, Tishtrya is depicted as a white horse.
In Chinese astronomy Sirius is known as the star of the "celestial wolf" (Chinese: 天狼; romanization: Tiānláng; Japanese romanization: Tenrō; Korean: 천랑, romanized as Cheonrang) in the Mansion of Jǐng (井宿). Many nations among the indigenous peoples of North America also associated Sirius with canines; the Seri and Tohono Oʼodham of the southwest note the star as a dog that follows mountain sheep, while the Blackfoot called it "Dog-face". The Cherokee paired Sirius with Antares as a dog-star guardian of either end of the "Path of Souls". The Pawnee of Nebraska had several associations; the Wolf (Skidi) tribe knew it as the "Wolf Star", while other branches knew it as the "Coyote Star". Further north, the Alaskan Inuit of the Bering Strait called it "Moon Dog".
Several cultures also associated the star with a bow and arrows. The ancient Chinese visualized a large bow and arrow across the southern sky, formed by the constellations of Puppis and Canis Major. In this, the arrow tip is pointed at the wolf Sirius. A similar association is depicted at the Temple of Hathor in Dendera, where the goddess Satet has drawn her arrow at Hathor (Sirius). Known as "Tir", the star was portrayed as the arrow itself in later Persian culture.
Sirius is mentioned in Surah An-Najm ("The Star") of the Qur'an, where it is given the name aš-ši'rā or ash-shira ("the leader"). The verse is translated: "That He is the Lord of Sirius (the Mighty Star)." (An-Najm:49) Ibn Kathir said in his commentary "that it is the bright star, named Mirzam Al-Jawza' (Sirius), which a group of Arabs used to worship". The alternate name Aschere, used by Johann Bayer, is derived from this.
In theosophy, it is believed the Seven Stars of the Pleiades transmit the spiritual energy of the Seven Rays from the Galactic Logos to the Seven Stars of the Great Bear, then to Sirius. From there it is sent via the Sun to the god of Earth (Sanat Kumara), and finally through the seven Masters of the Seven Rays to the human race.
The midnight culmination of Sirius in the northern hemisphere coincides with the beginning of the New Year of the Gregorian calendar during the decades around the year 2000. Over the years, its midnight culmination moves slowly, owing to the combination of the star's proper motion and the precession of the equinoxes. At the time of the introduction of the Gregorian calendar in 1582, assuming a constant motion, its culmination would have occurred about 17 minutes before midnight at the start of the new year. According to Richard Hinckley Allen, its midnight culmination was celebrated at the Temple of Demeter at Eleusis.
Dogon
The Dogon people are an ethnic group in Mali, West Africa, reported by some researchers to have traditional astronomical knowledge about Sirius that would normally be considered impossible without the use of telescopes. According to Marcel Griaule, they knew about the fifty-year orbital period of Sirius and its companion before Western astronomers did.
Doubts have been raised about the validity of Griaule and Dieterlen's work. In 1991, anthropologist Walter van Beek concluded about the Dogon, "Though they do speak about sigu tolo [which is what Griaule claimed the Dogon called Sirius] they disagree completely with each other as to which star is meant; for some it is an invisible star that should rise to announce the sigu [festival], for another it is Venus that, through a different position, appears as sigu tolo. All agree, however, that they learned about the star from Griaule." According to Noah Brosch, cultural transfer of relatively modern astronomical information could have taken place in 1893, when a French expedition arrived in Central West Africa to observe the total solar eclipse of 16 April.
Serer religion
In the religion of the Serer people of Senegal, the Gambia and Mauritania, Sirius is called Yoonir from the Serer language (and some of the Cangin language speakers, who are all ethnically Serers). The star Sirius is one of the most important and sacred stars in Serer religious cosmology and symbolism. The Serer high priests and priestesses (Saltigues, the hereditary "rain priests") chart Yoonir to forecast rainfall and enable Serer farmers to start planting seeds. In Serer religious cosmology, it is the symbol of the universe.
Modern significance
Sirius features on the coat of arms of Macquarie University, and is the name of its alumnae journal. Seven ships of the Royal Navy have been named HMS Sirius since the 18th century, with the first being the flagship of the First Fleet to Australia in 1788. The Royal Australian Navy subsequently named a vessel HMAS Sirius in honor of the flagship. American vessels have also carried the name Sirius, as has a monoplane model, the Lockheed Sirius, the first of which was flown by Charles Lindbergh. The name was also adopted by Mitsubishi Motors as the Mitsubishi Sirius engine in 1980. The name of the North American satellite radio company CD Radio was changed to Sirius Satellite Radio in November 1999, after "the brightest star in the night sky". Sirius is one of the 27 stars on the flag of Brazil, where it represents the state of Mato Grosso.
Composer Karlheinz Stockhausen, who wrote a piece called Sirius, is claimed to have said on several occasions that he came from a planet in the Sirius system. To Stockhausen, Sirius stood for "the place where music is the highest of vibrations" and where music had been developed in the most perfect way.
Sirius has been the subject of poetry. Dante and John Milton reference the star, and it is the "powerful western fallen star" of Walt Whitman's "When Lilacs Last in the Dooryard Bloom'd", while Tennyson's poem The Princess describes the star's scintillation.
Throughout the 1990s, several members of the occult group the Order of the Solar Temple committed mass murder-suicide with the goal of leaving their bodies and spiritually "transiting" to Sirius. In total, 74 people died in all of the suicides and murders.
| Physical sciences | Notable stars | null |
28034 | https://en.wikipedia.org/wiki/Scanning%20electron%20microscope | Scanning electron microscope | A scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning the surface with a focused beam of electrons. The electrons interact with atoms in the sample, producing various signals that contain information about the surface topography and composition of the sample. The electron beam is scanned in a raster scan pattern, and the position of the beam is combined with the intensity of the detected signal to produce an image. In the most common SEM mode, secondary electrons emitted by atoms excited by the electron beam are detected using a secondary electron detector (Everhart–Thornley detector). The number of secondary electrons that can be detected, and thus the signal intensity, depends, among other things, on specimen topography. Some SEMs can achieve resolutions better than 1 nanometer.
Specimens are observed in high vacuum in a conventional SEM, or in low vacuum or wet conditions in a variable pressure or environmental SEM, and at a wide range of cryogenic or elevated temperatures with specialized instruments.
History
An account of the early history of scanning electron microscopy has been presented by McMullan. Although Max Knoll produced a photo with a 50 mm object-field-width showing channeling contrast by the use of an electron beam scanner, it was Manfred von Ardenne who in 1937 invented a microscope with high resolution by scanning a very small raster with a demagnified and finely focused electron beam. In the same year, Cecil E. Hall also completed the construction of the first emission microscope in North America, just two years after being tasked by his supervisor, E. F. Burton at the University of Toronto. Ardenne applied scanning of the electron beam in an attempt to surpass the resolution of the transmission electron microscope (TEM), as well as to mitigate substantial problems with chromatic aberration inherent to real imaging in the TEM. He further discussed the various detection modes, possibilities and theory of SEM, together with the construction of the first high resolution SEM. Further work was reported by Zworykin's group, followed by the Cambridge groups in the 1950s and early 1960s headed by Charles Oatley, all of which finally led to the marketing of the first commercial instrument by Cambridge Scientific Instrument Company as the "Stereoscan" in 1965, which was delivered to DuPont.
Principles and capacities
The signals used by a SEM to produce an image result from interactions of the electron beam with atoms at various depths within the sample. Various types of signals are produced including secondary electrons (SE), reflected or back-scattered electrons (BSE), characteristic X-rays and light (cathodoluminescence) (CL), absorbed current (specimen current) and transmitted electrons. Secondary electron detectors are standard equipment in all SEMs, but it is rare for a single machine to have detectors for all other possible signals.
Secondary electrons have very low energies on the order of 50 eV, which limits their mean free path in solid matter. Consequently, SEs can only escape from the top few nanometers of the surface of a sample. The signal from secondary electrons tends to be highly localized at the point of impact of the primary electron beam, making it possible to collect images of the sample surface with a resolution of below 1 nm. Back-scattered electrons (BSE) are beam electrons that are reflected from the sample by elastic scattering. Since they have much higher energy than SEs, they emerge from deeper locations within the specimen and, consequently, the resolution of BSE images is less than SE images. However, BSE are often used in analytical SEM, along with the spectra made from the characteristic X-rays, because the intensity of the BSE signal is strongly related to the atomic number (Z) of the specimen. BSE images can provide information about the distribution, but not the identity, of different elements in the sample. In samples predominantly composed of light elements, such as biological specimens, BSE imaging can image colloidal gold immuno-labels of 5 or 10 nm diameter, which would otherwise be difficult or impossible to detect in secondary electron images. Characteristic X-rays are emitted when the electron beam removes an inner shell electron from the sample, causing a higher-energy electron to fill the shell and release energy. The energy or wavelength of these characteristic X-rays can be measured by Energy-dispersive X-ray spectroscopy or Wavelength-dispersive X-ray spectroscopy and used to identify and measure the abundance of elements in the sample and map their distribution.
Due to the very narrow electron beam, SEM micrographs have a large depth of field yielding a characteristic three-dimensional appearance useful for understanding the surface structure of a sample. This is exemplified by the micrograph of pollen shown above. A wide range of magnifications is possible, from about 10 times (about equivalent to that of a powerful hand-lens) to more than 500,000 times, about 250 times the magnification limit of the best light microscopes.
Sample preparation
SEM samples have to be small enough to fit on the specimen stage, and may need special preparation to increase their electrical conductivity and to stabilize them, so that they can withstand the high vacuum conditions and the high energy beam of electrons. Samples are generally mounted rigidly on a specimen holder or stub using a conductive adhesive. SEM is used extensively for defect analysis of semiconductor wafers, and manufacturers make instruments that can examine any part of a 300 mm semiconductor wafer. Many instruments have chambers that can tilt an object of that size to 45° and provide continuous 360° rotation.
Nonconductive specimens collect charge when scanned by the electron beam, and especially in secondary electron imaging mode, this causes scanning faults and other image artifacts. For conventional imaging in the SEM, specimens must be electrically conductive, at least at the surface, and electrically grounded to prevent the accumulation of electrostatic charge. Metal objects require little special preparation for SEM except for cleaning and conductively mounting to a specimen stub. Non-conducting materials are usually coated with an ultrathin coating of electrically conducting material, deposited on the sample either by low-vacuum sputter coating, electroless deposition or by high-vacuum evaporation. Conductive materials in current use for specimen coating include gold, gold/palladium alloy, platinum, iridium, tungsten, chromium, osmium, and graphite. Coating with heavy metals may increase signal/noise ratio for samples of low atomic number (Z). The improvement arises because secondary electron emission for high-Z materials is enhanced.
An alternative to coating for some biological samples is to increase the bulk conductivity of the material by impregnation with osmium using variants of the OTO staining method (O-osmium tetroxide, T-thiocarbohydrazide, O-osmium).
Nonconducting specimens may be imaged without coating using an environmental SEM (ESEM) or low-voltage mode of SEM operation. In ESEM instruments the specimen is placed in a relatively high-pressure chamber and the electron optical column is differentially pumped to keep the pressure adequately low at the electron gun. The high-pressure region around the sample in the ESEM neutralizes charge and provides an amplification of the secondary electron signal. Low-voltage SEM is typically conducted in an instrument with a field emission gun (FEG), which is capable of producing high primary electron brightness and small spot size even at low accelerating potentials. To prevent charging of non-conductive specimens, operating conditions must be adjusted such that the incoming beam current is equal to the sum of the outgoing secondary and backscattered electron currents, a condition most often met at accelerating voltages of 0.3–4 kV.
Embedding in a resin with further polishing to a mirror-like finish can be used for both biological and materials specimens when imaging in backscattered electrons or when doing quantitative X-ray microanalysis.
The main preparation techniques are not required in the environmental SEM outlined below, but some biological specimens can benefit from fixation.
Biological samples
Since the SEM specimen chamber is under high vacuum, a SEM specimen must be completely dry or cryogenically cooled. Hard, dry materials such as wood, bone, feathers, dried insects, or shells (including egg shells) can be examined with little further treatment, but living cells and tissues and whole, soft-bodied organisms require chemical fixation to preserve and stabilize their structure.
Fixation is usually performed by incubation in a solution of a buffered chemical fixative, such as glutaraldehyde, sometimes in combination with formaldehyde and other fixatives, and optionally followed by postfixation with osmium tetroxide. The fixed tissue is then dehydrated. Because air-drying causes collapse and shrinkage, this is commonly achieved by replacement of water in the cells with organic solvents such as ethanol or acetone, and replacement of these solvents in turn with a transitional fluid such as liquid carbon dioxide by critical point drying. The carbon dioxide is finally removed while in a supercritical state, so that no gas–liquid interface is present within the sample during drying.
The dry specimen is usually mounted on a specimen stub using an adhesive such as epoxy resin or electrically conductive double-sided adhesive tape, and sputter-coated with gold or gold/palladium alloy before examination in the microscope. Samples may be sectioned (with a microtome) if information about the organism's internal ultrastructure is to be exposed for imaging.
If the SEM is equipped with a cold stage for cryo microscopy, cryofixation may be used and low-temperature scanning electron microscopy performed on the cryogenically fixed specimens. Cryo-fixed specimens may be cryo-fractured under vacuum in a special apparatus to reveal internal structure, sputter-coated and transferred onto the SEM cryo-stage while still frozen. Low-temperature scanning electron microscopy (LT-SEM) is also applicable to the imaging of temperature-sensitive materials such as ice and fats.
Freeze-fracturing, freeze-etch or freeze-and-break is a preparation method particularly useful for examining lipid membranes and their incorporated proteins in "face on" view. The preparation method reveals the proteins embedded in the lipid bilayer.
Materials
Back-scattered electron imaging, quantitative X-ray analysis, and X-ray mapping of specimens often require grinding and polishing the surfaces to an ultra-smooth finish. Specimens that undergo WDS or EDS analysis are often carbon-coated. In general, metals are not coated prior to imaging in the SEM because they are conductive and provide their own pathway to ground. Fractography is the study of fractured surfaces that can be done on a light microscope or, commonly, on an SEM. The fractured surface is cut to a suitable size, cleaned of any organic residues, and mounted on a specimen holder for viewing in the SEM. Integrated circuits may be cut with a focused ion beam (FIB) or other ion beam milling instrument for viewing in the SEM. The SEM in the first case may be incorporated into the FIB, enabling high-resolution imaging of the result of the process. Metals, geological specimens, and integrated circuits all may also be chemically polished for viewing in the SEM. Special high-resolution coating techniques are required for high-magnification imaging of inorganic thin films.
Scanning process and image formation
In a typical SEM, an electron beam is thermionically emitted from an electron gun fitted with a tungsten filament cathode. Tungsten is normally used in thermionic electron guns because it has the highest melting point and lowest vapor pressure of all metals, thereby allowing it to be electrically heated for electron emission, and because of its low cost. Other types of electron emitters include lanthanum hexaboride (LaB6) cathodes, which can be used in a standard tungsten filament SEM if the vacuum system is upgraded, or field emission guns (FEG), which may be of the cold-cathode type using tungsten single crystal emitters or the thermally assisted Schottky type, which uses emitters of tungsten single crystals coated in zirconium oxide.
The electron beam, which typically has an energy ranging from 0.2 keV to 40 keV, is focused by one or two condenser lenses to a spot about 0.4 nm to 5 nm in diameter. The beam passes through pairs of scanning coils or pairs of deflector plates in the electron column, typically in the final lens, which deflect the beam in the x and y axes so that it scans in a raster fashion over a rectangular area of the sample surface.
When the primary electron beam interacts with the sample, the electrons lose energy by repeated random scattering and absorption within a teardrop-shaped volume of the specimen known as the interaction volume, which extends from less than 100 nm to approximately 5 μm into the surface. The size of the interaction volume depends on the electron's landing energy, the atomic number of the specimen, and the specimen's density. The energy exchange between the electron beam and the sample results in the reflection of high-energy electrons by elastic scattering, the emission of secondary electrons by inelastic scattering, and the emission of electromagnetic radiation, each of which can be detected by specialized detectors. The beam current absorbed by the specimen can also be detected and used to create images of the distribution of specimen current. Electronic amplifiers of various types are used to amplify the signals, which are displayed as variations in brightness on a computer monitor (or, for vintage models, on a cathode-ray tube). Each pixel of computer video memory is synchronized with the position of the beam on the specimen in the microscope, and the resulting image is, therefore, a distribution map of the intensity of the signal being emitted from the scanned area of the specimen. Older microscopes captured images on film, but most modern instruments collect digital images.
Magnification
Magnification in an SEM can be controlled over a range of about 6 orders of magnitude from about 10 to 3,000,000 times. Unlike optical and transmission electron microscopes, image magnification in an SEM is not a function of the power of the objective lens. SEMs may have condenser and objective lenses, but their function is to focus the beam to a spot, and not to image the specimen. Provided the electron gun can generate a beam with a sufficiently small diameter, an SEM could in principle work entirely without condenser or objective lenses. However, it might not be very versatile or achieve very high resolution. In an SEM, as in scanning probe microscopy, magnification results from the ratio of the raster on the display device and dimensions of the raster on the specimen. Assuming that the display screen has a fixed size, higher magnification results from reducing the size of the raster on the specimen, and vice versa. Magnification is therefore controlled by the current supplied to the x, y scanning coils, or the voltage supplied to the x, y deflector plates, and not by objective lens power.
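A minimal sketch of the raster-ratio definition of magnification described above; the display and scan widths used are illustrative assumptions, not values taken from the text.

def sem_magnification(display_width_mm, scan_width_um):
    # Magnification is the ratio of the display raster width to the width of
    # the raster scanned on the specimen; shrinking the scanned area raises
    # the magnification without changing any lens setting.
    return (display_width_mm * 1000.0) / scan_width_um

# Assumed values: a 300 mm wide display and a 10 micrometre wide scanned area
print(sem_magnification(300.0, 10.0))  # 30,000x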
Detection of secondary electrons
The most common imaging mode collects low-energy (<50 eV) secondary electrons that are ejected from conduction or valence bands of the specimen atoms by inelastic scattering interactions with beam electrons. Due to their low energy, these electrons originate from within a few nanometers below the sample surface. The electrons are detected by an Everhart–Thornley detector, which is a type of collector-scintillator-photomultiplier system. The secondary electrons are first collected by attracting them towards an electrically biased grid at about +400 V, and then further accelerated towards a phosphor or scintillator positively biased to about +2,000 V. The accelerated secondary electrons are now sufficiently energetic to cause the scintillator to emit flashes of light (cathodoluminescence), which are conducted to a photomultiplier outside the SEM column via a light pipe and a window in the wall of the specimen chamber. The amplified electrical signal output by the photomultiplier is displayed as a two-dimensional intensity distribution that can be viewed and photographed on an analogue video display, or subjected to analog-to-digital conversion and displayed and saved as a digital image. This process relies on a raster-scanned primary beam. The brightness of the signal depends on the number of secondary electrons reaching the detector. If the beam enters the sample perpendicular to the surface, then the activated region is uniform about the axis of the beam and a certain number of electrons "escape" from within the sample. As the angle of incidence increases, the interaction volume increases and the "escape" distance of one side of the beam decreases, resulting in more secondary electrons being emitted from the sample. Thus steep surfaces and edges tend to be brighter than flat surfaces, which results in images with a well-defined, three-dimensional appearance. Using the secondary electron signal, an image resolution better than 0.5 nm is possible.
Detection of backscattered electrons
Backscattered electrons (BSE) consist of high-energy electrons originating in the electron beam, that are reflected or back-scattered out of the specimen interaction volume by elastic scattering interactions with specimen atoms. Since heavy elements (high atomic number) backscatter electrons more strongly than light elements (low atomic number), and thus appear brighter in the image, BSEs are used to detect contrast between areas with different chemical compositions. The Everhart–Thornley detector, which is normally positioned to one side of the specimen, is inefficient for the detection of backscattered electrons because few such electrons are emitted in the solid angle subtended by the detector, and because the positively biased detection grid has little ability to attract the higher energy BSE. Dedicated backscattered electron detectors are positioned above the sample in a "doughnut" type arrangement, concentric with the electron beam, maximizing the solid angle of collection. BSE detectors are usually either of scintillator or of semiconductor types. When all parts of the detector are used to collect electrons symmetrically about the beam, atomic number contrast is produced. However, strong topographic contrast is produced by collecting back-scattered electrons from one side above the specimen using an asymmetrical, directional BSE detector; the resulting contrast appears as illumination of the topography from that side. Semiconductor detectors can be made in radial segments that can be switched in or out to control the type of contrast produced and its directionality.
Backscattered electrons can also be used to form an electron backscatter diffraction (EBSD) image that can be used to determine the crystallographic structure of the specimen.
Beam-injection analysis of semiconductors
The nature of the SEM's probe, energetic electrons, makes it uniquely suited to examining the optical and electronic properties of semiconductor materials. The high-energy electrons from the SEM beam will inject charge carriers into the semiconductor. Thus, beam electrons lose energy by promoting electrons from the valence band into the conduction band, leaving behind holes.
In a direct bandgap material, recombination of these electron-hole pairs will result in cathodoluminescence; if the sample contains an internal electric field, such as is present at a p-n junction, the SEM beam injection of carriers will cause electron beam induced current (EBIC) to flow. Cathodoluminescence and EBIC are referred to as "beam-injection" techniques, and are very powerful probes of the optoelectronic behavior of semiconductors, in particular for studying nanoscale features and defects.
Cathodoluminescence
Cathodoluminescence, the emission of light when atoms excited by high-energy electrons return to their ground state, is analogous to UV-induced fluorescence, and some materials such as zinc sulfide and some fluorescent dyes, exhibit both phenomena. Over the last decades, cathodoluminescence was most commonly experienced as the light emission from the inner surface of the cathode-ray tube in television sets and computer CRT monitors. In the SEM, CL detectors either collect all light emitted by the specimen or can analyse the wavelengths emitted by the specimen and display an emission spectrum or an image of the distribution of cathodoluminescence emitted by the specimen in real color.
X-ray microanalysis
Characteristic X-rays that are produced by the interaction of electrons with the sample may also be detected in an SEM equipped for energy-dispersive X-ray spectroscopy or wavelength dispersive X-ray spectroscopy. Analysis of the x-ray signals may be used to map the distribution and estimate the abundance of elements in the sample.
Resolution of the SEM
An SEM is not a camera and the detector is not continuously image-forming like a CCD array or film. Unlike in an optical system, the resolution is not limited by the diffraction limit, fineness of lenses or mirrors or detector array resolution. The focusing optics can be large and coarse, and the SE detector is fist-sized and simply detects current. Instead, the spatial resolution of the SEM depends on the size of the electron spot, which in turn depends on both the wavelength of the electrons and the electron-optical system that produces the scanning beam. The resolution is also limited by the size of the interaction volume, the volume of specimen material that interacts with the electron beam. The spot size and the interaction volume are both large compared to the distances between atoms, so the resolution of the SEM is not high enough to image individual atoms, as is possible with a transmission electron microscope (TEM). The SEM has compensating advantages, though, including the ability to image a comparatively large area of the specimen; the ability to image bulk materials (not just thin films or foils); and the variety of analytical modes available for measuring the composition and properties of the specimen. Depending on the instrument, the resolution can fall somewhere between less than 1 nm and 20 nm. As of 2009, the world's highest-resolution conventional (≤30 kV) SEM can reach a point resolution of 0.4 nm using a secondary electron detector.
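Since the spot size is ultimately bounded by the electron wavelength, it is instructive to see how small that wavelength is at typical SEM energies. The sketch below evaluates the relativistically corrected de Broglie wavelength; this is a generic textbook relation rather than a description of any particular instrument, and the 30 kV value is only an illustrative assumption.

import math

H = 6.62607015e-34      # Planck constant, J*s
M0 = 9.1093837015e-31   # electron rest mass, kg
E = 1.602176634e-19     # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength(accelerating_voltage):
    # Relativistically corrected de Broglie wavelength, in metres:
    # lambda = h / sqrt(2*m0*e*V*(1 + e*V / (2*m0*c^2)))
    ev = E * accelerating_voltage
    return H / math.sqrt(2 * M0 * ev * (1 + ev / (2 * M0 * C ** 2)))

# Assumed accelerating voltage of 30 kV
print(electron_wavelength(30e3))  # ~7e-12 m, i.e. about 7 picometres

The wavelength is far smaller than the achievable spot size, which is why the electron optics and the interaction volume, rather than diffraction, dominate SEM resolution.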
Environmental SEM
Conventional SEM requires samples to be imaged under vacuum, because a gas atmosphere rapidly spreads and attenuates electron beams. As a consequence, samples that produce a significant amount of vapour, e.g. wet biological samples or oil-bearing rock, must be either dried or cryogenically frozen. Processes involving phase transitions, such as the drying of adhesives or melting of alloys, liquid transport, chemical reactions, and solid-air-gas systems, in general cannot be observed with conventional high-vacuum SEM. In environmental SEM (ESEM), the chamber is evacuated of air, but water vapor is retained near its saturation pressure, and the residual pressure remains relatively high. This allows the analysis of samples containing water or other volatile substances. With ESEM, observations of living insects have been possible.
The first commercial development of the ESEM in the late 1980s allowed samples to be observed in low-pressure gaseous environments (e.g. 1–50 Torr or 0.1–6.7 kPa) and high relative humidity (up to 100%). This was made possible by the development of a secondary-electron detector capable of operating in the presence of water vapour and by the use of pressure-limiting apertures with differential pumping in the path of the electron beam to separate the vacuum region (around the gun and lenses) from the sample chamber. The first commercial ESEMs were produced by the ElectroScan Corporation in the USA in 1988. ElectroScan was taken over by Philips (who later sold their electron-optics division to FEI Company) in 1996.
ESEM is especially useful for non-metallic and biological materials because coating with carbon or gold is unnecessary. Uncoated plastics and elastomers can be routinely examined, as can uncoated biological samples. This is useful because coating can be difficult to reverse, may conceal small features on the surface of the sample and may reduce the value of the results obtained. X-ray analysis is difficult with a coating of a heavy metal, so carbon coatings are routinely used in conventional SEMs, but ESEM makes it possible to perform X-ray microanalysis on uncoated non-conductive specimens; however, some artifacts specific to ESEM are introduced in X-ray analysis. ESEM may be preferred for electron microscopy of unique samples from criminal or civil actions, where forensic analysis may need to be repeated by several different experts. It is possible to study specimens in liquid with ESEM or with other liquid-phase electron microscopy methods.
Transmission SEM
The SEM can also be used in transmission mode by simply incorporating an appropriate detector below a thin specimen section. Detectors are available for bright field, dark field, as well as segmented detectors for mid-field to high angle annular dark-field. Despite the difference in instrumentation, this technique is still commonly referred to as scanning transmission electron microscopy (STEM).
SEM in Forensic Science
The SEM is often used in forensic science for magnified analysis of microscopic evidence such as diatoms and gunshot residue. Because SEM analysis is essentially nondestructive, it can be used to examine evidence without damaging it. The SEM directs a beam of high-energy electrons at the sample, which scatter from it without changing or destroying it; this is particularly valuable when analyzing diatoms. When a person dies by drowning, they inhale water, which carries whatever is in the water (including diatoms) into the bloodstream, brain, kidneys, and other organs. These diatoms in the body can be imaged with the SEM to determine the type of diatom, which aids in understanding how and where the person died. By comparing diatom types in the images produced by the SEM, forensic scientists can help confirm the body of water in which a person died.
Gunshot residue (GSR) analysis can be done with many different analytical instruments, but SEM is a common way to analyze inorganic compounds because it can closely identify the elements present (mostly metals) through its three detectors: the backscattered electron detector, the secondary electron detector, and the X-ray detector. GSR can be collected from the crime scene, victim, or shooter and analyzed with the SEM. This can help scientists determine proximity to, or contact with, the discharged firearm.
Color in SEM
Electron microscopes do not naturally produce color images. A secondary electron detector produces a single value per pixel that corresponds to the number of electrons received by the detector during the short period of time when the beam is targeted to the (x, y) pixel position. For each pixel, this single value is represented by a grey level, forming a monochrome image. However, several methods can be used to obtain color electron microscopy images.
False color using a single detector
On compositional images of flat surfaces (typically BSE):
The easiest way to get color is to replace each grey level with an arbitrary color, using a color look-up table. This method is known as false color imaging and can help to distinguish phases of the sample with similar properties or composition.
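A minimal sketch of such a look-up-table replacement using NumPy and Matplotlib; the colormap name and the randomly generated stand-in image are assumptions used only for illustration.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colormaps

def false_color(gray_image, colormap="viridis"):
    # Normalise grey levels to [0, 1] and pass them through a colour
    # look-up table; identical grey values receive identical colours,
    # so only the presentation changes, not the image content.
    span = max(float(np.ptp(gray_image)), 1e-12)
    normalized = (gray_image - gray_image.min()) / span
    return colormaps[colormap](normalized)[..., :3]  # drop the alpha channel

# Stand-in for a real compositional (BSE) micrograph
gray = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(float)
plt.imsave("false_color.png", false_color(gray))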
On textured-surface images:
As an alternative to simply replacing each grey level with a color, a sample observed with an oblique beam allows researchers to create an approximate topography image (see the later section "Photometric 3D rendering from a single SEM image"). Such a topography map can then be processed by 3D-rendering algorithms for a more natural rendering of the surface texture.
SEM image coloring
Very often, published SEM images are artificially colored. This may be done for aesthetic effect, to clarify structure or to add a realistic appearance to the sample and generally does not add information about the specimen.
Coloring may be performed manually with photo-editing software, or semi-automatically with dedicated software using feature-detection or object-oriented segmentation.
Color built using multiple electron detectors
In some configurations more information is gathered per pixel, often by the use of multiple detectors.
As a common example, secondary electron and backscattered electron detectors are superimposed and a color is assigned to each of the images captured by each detector, resulting in a combined color image where the colors are related to the density of the components. This method is known as density-dependent color SEM (DDC-SEM). Micrographs produced by DDC-SEM retain topographical information, which is better captured by the secondary electron detector, and combine it with the information about density obtained by the backscattered electron detector.
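A toy sketch of this kind of channel combination in Python/NumPy; the particular assignment of detectors to colour channels is an assumption made for illustration and is not necessarily the mapping used in published DDC-SEM work.

import numpy as np

def combine_detectors(se_image, bse_image):
    # Merge a secondary-electron image (topography) and a backscattered-
    # electron image (density/composition), both 2-D arrays scaled to [0, 1],
    # into a single RGB image, one signal per colour channel.
    rgb = np.zeros(se_image.shape + (3,))
    rgb[..., 0] = bse_image                      # red: density-related signal
    rgb[..., 1] = se_image                       # green: topography-related signal
    rgb[..., 2] = 0.5 * (se_image + bse_image)   # blue: blend of both
    return np.clip(rgb, 0.0, 1.0)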
Analytical signals based on generated photons
Measurement of the energy of photons emitted from the specimen is a common method to get analytical capabilities. Examples are the energy-dispersive X-ray spectroscopy (EDS) detectors used in elemental analysis and cathodoluminescence microscope (CL) systems that analyse the intensity and spectrum of electron-induced luminescence in (for example) geological specimens. In SEM systems using these detectors it is common to color code these extra signals and superimpose them in a single color image, so that differences in the distribution of the various components of the specimen can be seen clearly and compared. Optionally, the standard secondary electron image can be merged with one or more compositional channels, so that the specimen's structure and composition can be compared. Such images can be made while maintaining the full integrity of the original signal data, which is not modified in any way.
3D in SEM
Unlike SPMs, SEMs do not naturally provide 3D images. However, 3D data can be obtained using an SEM with different methods, as follows.
3D SEM reconstruction from a stereo pair
Photogrammetry is the most metrologically accurate method of bringing the third dimension to SEM images. Unlike photometric methods (next paragraph), photogrammetry calculates absolute heights using triangulation. The drawbacks are that it works only if there is a minimum of texture, and it requires two images to be acquired from two different angles, which implies the use of a tilt stage. (Photogrammetry is a software operation that calculates the shift, or "disparity", for each pixel between the left image and the right image of the same pair. Such disparity reflects the local height.)
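A sketch of the triangulation step for the common case of a symmetric (eucentric) tilt between the two images of the stereo pair. The relation z = p / (2 sin(α/2)), with p the measured parallax at the specimen scale and α the tilt between the images, is the usual simplified textbook form and is stated here as an assumption, not as the formula used by any particular software.

import math

def height_from_parallax(parallax_um, tilt_angle_deg):
    # Height difference (micrometres) corresponding to a parallax measured
    # between two images acquired with a symmetric tilt of tilt_angle_deg.
    alpha = math.radians(tilt_angle_deg)
    return parallax_um / (2.0 * math.sin(alpha / 2.0))

# Assumed example: 0.8 um of parallax measured with a 10 degree tilt
print(height_from_parallax(0.8, 10.0))  # ~4.6 um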
Photometric 3D SEM reconstruction from a four-quadrant detector by "shape from shading"
This method typically uses a four-quadrant BSE detector (or, for one manufacturer, a 3-segment detector). The microscope produces four images of the same specimen at the same time, so no tilt of the sample is required. The method gives metrological 3D dimensions as long as the slope of the specimen remains reasonable. Most SEM manufacturers now (2018) offer such a built-in or optional four-quadrant BSE detector, together with proprietary software to calculate a 3D image in real time.
Other approaches use more sophisticated (and sometimes GPU-intensive) methods like the optimal estimation algorithm and offer much better results at the cost of high demands on computing power.
In all instances, this approach works by integration of the slope, so vertical slopes and overhangs are ignored; for instance, if an entire sphere lies on a flat surface, little more than the upper hemisphere is seen emerging above that surface, resulting in an incorrect altitude for the sphere's apex. The prominence of this effect depends on the angle of the BSE detectors with respect to the sample, but these detectors are usually situated around (and close to) the electron beam, so this effect is very common.
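A toy sketch of the slope-integration idea, assuming that the normalised difference between opposing detector segments is proportional to the local surface gradient; the proportionality constant would need calibration in practice, and, as noted above, vertical walls and overhangs cannot be recovered.

import numpy as np

def reconstruct_height(a, b, c, d, pixel_size=1.0, gain=1.0):
    # a/b are the opposing four-quadrant BSE segments along x, c/d along y
    # (all 2-D arrays of the same shape). The difference signals are taken
    # as proportional to the local slopes (gain is an assumed calibration
    # factor), and the slopes are integrated cumulatively into a height map.
    eps = 1e-12
    slope_x = gain * (a - b) / (a + b + eps)
    slope_y = gain * (c - d) / (c + d + eps)
    hx = np.cumsum(slope_x, axis=1) * pixel_size   # integrate along x
    hy = np.cumsum(slope_y, axis=0) * pixel_size   # integrate along y
    return 0.5 * (hx + hy)                         # average the two estimates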
Photometric 3D rendering from a single SEM image
This method requires an SEM image obtained under oblique, low-angle illumination. The grey level is then interpreted as the slope, and the slope is integrated to restore the specimen topography. This method is useful for visual enhancement and for detecting the shape and position of objects; however, the vertical heights cannot usually be calibrated, unlike with other methods such as photogrammetry.
Other types of 3D SEM reconstruction
Inverse reconstruction using electron-material interaction models
Multi-resolution reconstruction using a single 2D file: High-quality 3D imaging may be an ultimate solution for revealing the complexities of any porous medium, but acquiring such images is costly and time-consuming. High-quality 2D SEM images, on the other hand, are widely available. Recently, a three-step, multiscale, multiresolution reconstruction method was presented that directly uses 2D images to develop 3D models. This method, based on Shannon entropy and conditional simulation, can be used for most available stationary materials and can build various stochastic 3D models using just a few thin sections.
Ion-abrasion SEM (IA-SEM) is a method of nanoscale 3D imaging that uses a focused beam of gallium ions to repeatedly abrade the specimen surface, 20 nanometres at a time. Each exposed surface is then scanned to compile a 3D image.
Applications of 3D SEM
One possible application is measuring the roughness of ice crystals. This method can combine variable-pressure environmental SEM and the 3D capabilities of the SEM to measure roughness on individual ice crystal facets, convert it into a computer model and run further statistical analysis on the model. Other measurements include fractal dimension, examining fracture surface of metals, characterization of materials, corrosion measurement, and dimensional measurements at the nano scale (step height, volume, angle, flatness, bearing ratio, coplanarity, etc.).
SEM is also used by art conservationists to discern threats to paintings' surface stability due to aging, such as the formations of complexes of zinc ions with fatty acids. Forensic scientists use SEM to detect art forgeries.
Gallery of SEM images
The following are examples of images taken using an SEM.
| Technology | Optical instruments | null |
28142 | https://en.wikipedia.org/wiki/Supercluster | Supercluster | A supercluster is a large group of smaller galaxy clusters or galaxy groups; they are among the largest known structures in the universe. The Milky Way is part of the Local Group galaxy group (which contains more than 54 galaxies), which in turn is part of the Virgo Supercluster, which is part of the Laniakea Supercluster, which is part of the Pisces–Cetus Supercluster Complex. The large size and low density of superclusters means that they, unlike clusters, expand with the Hubble expansion. The number of superclusters in the observable universe is estimated to be 10 million.
Existence
The existence of superclusters indicates that the galaxies in the Universe are not uniformly distributed; most of them are drawn together in groups and clusters, with groups containing up to some dozens of galaxies and clusters up to several thousand galaxies. Those groups and clusters and additional isolated galaxies in turn form even larger structures called superclusters.
Their existence was first postulated by George Abell in his 1958 Abell catalogue of galaxy clusters. He called them "second-order clusters", or clusters of clusters.
Superclusters form massive structures of galaxies, called "filaments", "supercluster complexes", "walls" or "sheets", that may span between several hundred million light-years to 10 billion light-years, covering more than 5% of the observable universe. These are the largest structures known to date. Observations of superclusters can give information about the initial condition of the universe, when these superclusters were created. The directions of the rotational axes of galaxies within superclusters are studied by those who believe that they may give insight and information into the early formation process of galaxies in the history of the Universe.
Interspersed among superclusters are large voids of space where few galaxies exist. Superclusters are frequently subdivided into groups of clusters called galaxy groups and clusters.
Although superclusters are supposed to be the largest structures in the universe according to the Cosmological principle, larger structures have been observed in surveys, including the Sloan Great Wall.
List of superclusters
Nearby superclusters
Distant superclusters
Extremely distant superclusters
Diagram
| Physical sciences | Basics_3 | null |
28143 | https://en.wikipedia.org/wiki/Salicylic%20acid | Salicylic acid | Salicylic acid is an organic compound with the formula HOC6H4COOH. A colorless (or white), bitter-tasting solid, it is a precursor to and a metabolite of acetylsalicylic acid (aspirin). It is a plant hormone, and has been listed by the EPA Toxic Substances Control Act (TSCA) Chemical Substance Inventory as an experimental teratogen. The name is from Latin for willow tree, from which it was initially identified and derived. It is an ingredient in some anti-acne products. Salts and esters of salicylic acid are known as salicylates.
Uses
Medicine
Salicylic acid as a medication is commonly used to remove the outermost layer of the skin. As such, it is used to treat warts, psoriasis, acne vulgaris, ringworm, dandruff, and ichthyosis.
Similar to other hydroxy acids, salicylic acid is an ingredient in many skincare products for the treatment of seborrhoeic dermatitis, acne, psoriasis, calluses, corns, keratosis pilaris, acanthosis nigricans, ichthyosis, and warts.
Uses in manufacturing
Salicylic acid is used as a food preservative, a bactericide, and an antiseptic.
Salicylic acid is used in the production of other pharmaceuticals, including 4-aminosalicylic acid, sandulpiride, and landetimide (via salethamide). It is also used in picric acid production.
Salicylic acid has long been a key starting material for making acetylsalicylic acid (ASA or aspirin). ASA is prepared by the acetylation of salicylic acid with the acetyl group from acetic anhydride or acetyl chloride. ASA is the standard to which all the other non-steroidal anti-inflammatory drugs (NSAIDs) are compared. In veterinary medicine, this group of drugs is mainly used for treatment of inflammatory musculoskeletal disorders.
Bismuth subsalicylate, a salt of bismuth and salicylic acid, "displays anti-inflammatory action (due to salicylic acid) and also acts as an antacid and mild antibiotic". It is an active ingredient in stomach-relief aids such as Pepto-Bismol and some formulations of Kaopectate.
Other derivatives include methyl salicylate, used as a liniment to soothe joint and muscle pain, and choline salicylate, which is used topically to relieve the pain of mouth ulcers. Aminosalicylic acid is used to induce remission in ulcerative colitis, and has been used as an antitubercular agent often administered in association with isoniazid.
Sodium salicylate is a useful phosphor in the vacuum ultraviolet spectral range, with nearly flat quantum efficiency for wavelengths between 10 and 100 nm. It fluoresces in the blue at 420 nm. It is easily prepared on a clean surface by spraying a saturated solution of the salt in methanol followed by evaporation.
Mechanism of action
Salicylic acid modulates COX-1 enzymatic activity to decrease the formation of pro-inflammatory prostaglandins. Salicylate may competitively inhibit prostaglandin formation.
Salicylic acid, when applied to the skin surface, works by causing the cells of the epidermis to slough off more readily, preventing pores from clogging up, and allowing room for new cell growth. Salicylic acid inhibits the oxidation of uridine-5-diphosphoglucose (UDPG) competitively with NADH and noncompetitively with UDPG. It also competitively inhibits the transfer of the glucuronyl group of uridine-5-phosphoglucuronic acid to the phenolic acceptor.
The wound-healing retardation action of salicylates is probably due mainly to their inhibitory action on mucopolysaccharide synthesis.
Safety
If high concentrations of salicylic acid ointment are used topically, high levels of salicylic acid can enter the blood, requiring hemodialysis to avoid further complications.
Cosmetic applications of the drug pose no significant risk. Even in a worst-case scenario in which multiple salicylic acid-containing topical products were used together, the aggregate plasma concentration of salicylic acid was well below what is permissible for acetylsalicylic acid (aspirin). Since oral aspirin (which produces much higher salicylic acid plasma concentrations than dermal salicylic acid applications) is not associated with a significant increase in adverse pregnancy outcomes such as stillbirth, birth defects or developmental delay, the use of salicylic acid-containing cosmetics is considered safe for pregnant women. Salicylic acid is also naturally present in most fruits and vegetables, in greatest quantities in berries, and in beverages such as tea.
Production and chemical reactions
Biosynthesis
Salicylic acid is biosynthesized from the amino acid phenylalanine. In Arabidopsis thaliana, it can be synthesized via a phenylalanine-independent pathway.
Chemical synthesis
Commercial vendors prepare sodium salicylate by treating sodium phenolate (the sodium salt of phenol) with carbon dioxide at high pressure (100 atm) and high temperature (115 °C) – a method known as the Kolbe–Schmitt reaction. Acidifying the product with sulfuric acid gives salicylic acid:
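C6H5ONa + CO2 → C6H4(OH)COONa
C6H4(OH)COONa + H2SO4 → C6H4(OH)COOH + NaHSO4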
At the laboratory scale, it can also be prepared by the hydrolysis of aspirin (acetylsalicylic acid) or methyl salicylate (oil of wintergreen) with a strong acid or base; these reactions reverse those chemicals' commercial syntheses.
Reactions
Upon heating, salicylic acid converts to phenyl salicylate:
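2 HOC6H4COOH → HOC6H4COOC6H5 + H2O + CO2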
Further heating gives xanthone.
In the form of its conjugate base, salicylate, salicylic acid is a chelating agent with an affinity for iron(III).
Salicylic acid slowly degrades to phenol and carbon dioxide at 200–230 °C:
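HOC6H4COOH → C6H5OH + CO2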
All isomers of chlorosalicylic acid and of dichlorosalicylic acid are known. 5-Chlorosalicylic acid is produced by direct chlorination of salicylic acid.
History
Willow has long been used for medicinal purposes. Dioscorides, whose writings were highly influential for more than 1,500 years, used "Itea" (which was possibly a species of willow) as a treatment for "painful intestinal obstructions", birth control, for "those who spit blood", to remove calluses and corns and, externally, as a "warm pack for gout". William Turner, in 1597, repeated this, saying that willow bark, "being burnt to ashes, and steeped in vinegar, takes away corns and other like risings in the feet and toes". Some of these cures may describe the action of salicylic acid, which can be derived from the salicin present in willow. It is, however, a modern myth that Hippocrates used willow as a painkiller.
Hippocrates, Galen, Pliny the Elder, and others knew that decoctions containing salicylate could ease pain and reduce fevers.
It was used in Europe and China to treat these conditions. This remedy is mentioned in texts from Ancient Egypt, Sumer, and Assyria.
The Cherokee and other Native Americans use an infusion of the bark for fever and other medicinal purposes. In 2014, archaeologists identified traces of salicylic acid on seventh-century pottery fragments found in east-central Colorado.
Edward Stone, a vicar from Chipping Norton, Oxfordshire, England, reported in 1763 that the bark of the willow was effective in reducing a fever.
An extract of willow bark, called salicin, after the Latin name for the white willow (Salix alba), was isolated and named by German chemist Johann Andreas Buchner in 1828. A larger amount of the substance was isolated in 1829 by Henri Leroux, a French pharmacist. Raffaele Piria, an Italian chemist, was able to convert the substance into a sugar and a second component, which on oxidation becomes salicylic acid.
Salicylic acid was also isolated from the herb meadowsweet (Filipendula ulmaria, formerly classified as Spiraea ulmaria) by German researchers in 1839. Their extract caused digestive problems such as gastric irritation, bleeding, diarrhea, and even death when consumed in high doses.
In 1874 the Scottish physician Thomas MacLagan experimented with salicin as a treatment for acute rheumatism, with considerable success, as he reported in The Lancet in 1876. Meanwhile, German scientists tried sodium salicylate with less success and more severe side effects.
In 1979, salicylates were found to be involved in induced defenses of tobacco against tobacco mosaic virus. In 1987, salicylic acid was identified as the long-sought signal that causes thermogenic plants, such as the voodoo lily, Sauromatum guttatum, to produce heat.
Dietary sources
Salicylic acid occurs in plants as free salicylic acid and its carboxylated esters and phenolic glycosides. Several studies suggest that humans metabolize salicylic acid in measurable quantities from these plants. High-salicylate beverages and foods include beer, coffee, tea, numerous fruits and vegetables, sweet potato, nuts, and olive oil. Meat, poultry, fish, eggs, dairy products, sugar, breads and cereals have low salicylate content.
Some people with sensitivity to dietary salicylates may have symptoms of allergic reaction, such as bronchial asthma, rhinitis, gastrointestinal disorders, or diarrhea, so may need to adopt a low-salicylate diet.
Plant hormone
Salicylic acid is a phenolic phytohormone, and is found in plants with roles in plant growth and development, photosynthesis, transpiration, and ion uptake and transport. Salicylic acid is involved in endogenous signaling, mediating plant defense against pathogens. It plays a role in the resistance to pathogens (i.e. systemic acquired resistance) by inducing the production of pathogenesis-related proteins and other defensive metabolites. SA's defense signaling role is most clearly demonstrated by experiments which do away with it: Delaney et al. 1994, Gaffney et al. 1993, Lawton et al. 1995, and Vernooij et al. 1994 each use Nicotiana tabacum or Arabidopsis expressing nahG, which encodes salicylate hydroxylase. Pathogen inoculation did not produce the customarily high SA levels, SAR was not produced, and no pathogenesis-related (PR) genes were expressed in systemic leaves. Indeed, the subjects were more susceptible to virulent and even normally avirulent pathogens.
Exogenously, salicylic acid can aid plant development via enhanced seed germination, bud flowering, and fruit ripening, though too high a concentration of salicylic acid can negatively regulate these developmental processes.
The volatile methyl ester of salicylic acid, methyl salicylate, can also diffuse through the air, facilitating plant-plant communication. Methyl salicylate is taken up by the stomata of the nearby plant, where it can induce an immune response after being converted back to salicylic acid.
Signal transduction
A number of proteins have been identified that interact with SA in plants, especially salicylic acid binding proteins (SABPs) and the NPR genes (nonexpressor of pathogenesis-related genes), which are putative receptors.
| Biology and health sciences | Plant hormone | Biology |
28144 | https://en.wikipedia.org/wiki/Seaborgium | Seaborgium | Seaborgium is a synthetic chemical element; it has symbol Sg and atomic number 106. It is named after the American nuclear chemist Glenn T. Seaborg. As a synthetic element, it can be created in a laboratory but is not found in nature. It is also radioactive; the most stable known isotopes have half-lives on the order of several minutes.
In the periodic table of the elements, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 6 elements as the fourth member of the 6d series of transition metals. Chemistry experiments have confirmed that seaborgium behaves as the heavier homologue to tungsten in group 6. The chemical properties of seaborgium are characterized only partly, but they compare well with the chemistry of the other group 6 elements.
In 1974, a few atoms of seaborgium were produced in laboratories in the Soviet Union and in the United States. The priority of the discovery and therefore the naming of the element was disputed between Soviet and American scientists, and it was not until 1997 that the International Union of Pure and Applied Chemistry (IUPAC) established seaborgium as the official name for the element. It is one of only two elements named after a living person at the time of naming, the other being oganesson, element 118.
Introduction
History
Following claims of the observation of elements 104 and 105 in 1970 by Albert Ghiorso et al. at the Lawrence Berkeley Laboratory, a search for element 106 using oxygen-18 projectiles and the previously used californium-249 target was conducted. Several 9.1 MeV alpha decays were reported and are now thought to originate from element 106, though this was not confirmed at the time. In 1972, the HILAC accelerator received equipment upgrades, preventing the team from repeating the experiment, and data analysis was not done during the shutdown. This reaction was tried again several years later, in 1974, and the Berkeley team realized that their new data agreed with their 1971 data, to the astonishment of Ghiorso. Hence, element 106 could have actually been discovered in 1971 if the original data had been analyzed more carefully.
Two groups claimed discovery of the element. Evidence of element 106 was first reported in 1974 by a Russian research team in Dubna led by Yuri Oganessian, in which targets of lead-208 and lead-207 were bombarded with accelerated ions of chromium-54. In total, fifty-one spontaneous fission events were observed with a half-life between four and ten milliseconds. After having ruled out nucleon transfer reactions as a cause for these activities, the team concluded that the most likely cause of the activities was the spontaneous fission of isotopes of element 106. The isotope in question was first suggested to be seaborgium-259, but was later corrected to seaborgium-260.
208Pb + 54Cr → 260Sg + 2 n
207Pb + 54Cr → 260Sg + n
A few months later in 1974, researchers including Glenn T. Seaborg, Carol Alonso and Albert Ghiorso at the University of California, Berkeley, and E. Kenneth Hulet from the Lawrence Livermore National Laboratory, also synthesized the element by bombarding a californium-249 target with oxygen-18 ions, using equipment similar to that which had been used for the synthesis of element 104 five years earlier, observing at least seventy alpha decays, seemingly from the isotope seaborgium-263m with a half-life of seconds. The alpha daughter rutherfordium-259 and granddaughter nobelium-255 had previously been synthesised and the properties observed here matched with those previously known, as did the intensity of their production. The cross-section of the reaction observed, 0.3 nanobarns, also agreed well with theoretical predictions. These bolstered the assignment of the alpha decay events to seaborgium-263m.
249Cf + 18O → 263mSg + 4 n
263mSg → 259Rf + α
259Rf → 255No + α
A dispute thus arose from the initial competing claims of discovery, though unlike the case of the synthetic elements up to element 105, neither team of discoverers chose to announce proposed names for the new elements, thus averting an element naming controversy temporarily. The dispute on discovery, however, dragged on until 1992, when the IUPAC/IUPAP Transfermium Working Group (TWG), formed to put an end to the controversy by making conclusions regarding discovery claims for elements 101 to 112, concluded that the Soviet synthesis of seaborgium-260 was not convincing enough, "lacking as it is in yield curves and angular selection results", whereas the American synthesis of seaborgium-263 was convincing due to its being firmly anchored to known daughter nuclei. As such, the TWG recognised the Berkeley team as official discoverers in their 1993 report.
Seaborg had previously suggested to the TWG that if Berkeley was recognised as the official discoverer of elements 104 and 105, they might propose the name kurchatovium (symbol Kt) for element 106 to honour the Dubna team, which had proposed this name for element 104 after Igor Kurchatov, the former head of the Soviet nuclear research programme. However, due to the worsening relations between the competing teams after the publication of the TWG report (because the Berkeley team vehemently disagreed with the TWG's conclusions, especially regarding element 104), this proposal was dropped from consideration by the Berkeley team. After being recognized as official discoverers, the Berkeley team started deciding on a name in earnest:
Seaborg's son Eric remembered the naming process as follows:
The name seaborgium and symbol Sg were announced at the 207th national meeting of the American Chemical Society in March 1994 by Kenneth Hulet, one of the co-discoverers. However, IUPAC resolved in August 1994 that an element could not be named after a living person, and Seaborg was still alive at the time. Thus, in September 1994, IUPAC recommended a set of names in which the names proposed by the three laboratories (the third being the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany) with competing claims to the discovery for elements 104 to 109 were shifted to various other elements, in which rutherfordium (Rf), the Berkeley proposal for element 104, was shifted to element 106, with seaborgium being dropped entirely as a name.
This decision ignited a firestorm of worldwide protest for disregarding the historic discoverer's right to name new elements, and against the new retroactive rule against naming elements after living persons; the American Chemical Society stood firmly behind the name seaborgium for element 106, together with all the other American and German naming proposals for elements 104 to 109, approving these names for its journals in defiance of IUPAC. At first, IUPAC defended itself, with an American member of its committee writing: "Discoverers don't have a right to name an element. They have a right to suggest a name. And, of course, we didn't infringe on that at all." However, Seaborg responded:
Bowing to public pressure, IUPAC proposed a different compromise in August 1995, in which the name seaborgium was reinstated for element 106 in exchange for the removal of all but one of the other American proposals, which met an even worse response. Finally, IUPAC rescinded these previous compromises and made a final, new recommendation in August 1997, in which the American and German proposals for elements 104 to 109 were all adopted, including seaborgium for element 106, with the single exception of element 105, named dubnium to recognise the contributions of the Dubna team to the experimental procedures of transactinide synthesis. This list was finally accepted by the American Chemical Society, which wrote:
Seaborg commented regarding the naming:
Seaborg died a year and a half later, on 25 February 1999, at the age of 86.
Isotopes
Superheavy elements such as seaborgium are produced by bombarding lighter elements in particle accelerators to induce fusion reactions. Whereas most of the isotopes of seaborgium can be synthesized directly this way, some heavier ones have only been observed as decay products of elements with higher atomic numbers.
Depending on the energies involved, fusion reactions that generate superheavy elements are separated into "hot" and "cold". In hot fusion reactions, very light, high-energy projectiles are accelerated toward very heavy targets (actinides), giving rise to compound nuclei at high excitation energy (~40–50 MeV) that may either fission or evaporate several (3 to 5) neutrons. In cold fusion reactions, the produced fused nuclei have a relatively low excitation energy (~10–20 MeV), which decreases the probability that these products will undergo fission reactions. As the fused nuclei cool to the ground state, they require emission of only one or two neutrons, which allows for the generation of more neutron-rich products. The latter is a distinct concept from the claim that nuclear fusion can be achieved at room temperature (see cold fusion).
Seaborgium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Thirteen different isotopes of seaborgium have been reported with mass numbers 258–269 and 271, four of which, seaborgium-261, -263, -265, and -267, have known metastable states. All of these decay only through alpha decay and spontaneous fission, with the single exception of seaborgium-261, which can also undergo electron capture to dubnium-261.
There is a trend toward increasing half-lives for the heavier isotopes, though even–odd isotopes are generally more stable than their neighboring even–even isotopes, because the odd neutron leads to increased hindrance of spontaneous fission; among known seaborgium isotopes, alpha decay is the predominant decay mode in even–odd nuclei whereas fission dominates in even–even nuclei. Three of the heaviest known isotopes, 267Sg, 269Sg, and 271Sg, are also the longest-lived, having half-lives on the order of several minutes. Some other isotopes in this region are predicted to have comparable or even longer half-lives. Additionally, 263Sg, 265Sg, 265mSg, and 268Sg have half-lives measured in seconds. All the remaining isotopes have half-lives measured in milliseconds, with the exception of the shortest-lived isotope, 261mSg, with a half-life of only 9.3 microseconds.
The proton-rich isotopes from 258Sg to 261Sg were directly produced by cold fusion; all heavier isotopes were produced from the repeated alpha decay of the heavier elements hassium, darmstadtium, and flerovium, with the exceptions of the isotopes 263mSg, 264Sg, 265Sg, and 265mSg, which were directly produced by hot fusion through irradiation of actinide targets.
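The neutron-evaporation bookkeeping behind these production routes can be illustrated with a short Python sketch (an illustration only, not from the source); the target, projectile, and product mass numbers are the ones quoted earlier in this article, and everything else is assumed for the example.

```python
# Illustrative mass-number bookkeeping for seaborgium synthesis routes
# mentioned in this article. The compound nucleus mass number is the sum of
# the target and projectile mass numbers; the difference from the observed
# product gives the number of evaporated neutrons.
reactions = [
    # (label, target A, projectile A, product A)
    ("208Pb + 54Cr -> 260Sg (cold fusion)", 208, 54, 260),
    ("207Pb + 54Cr -> 260Sg (cold fusion)", 207, 54, 260),
    ("249Cf + 18O  -> 263Sg (hot fusion)", 249, 18, 263),
]
for label, a_target, a_projectile, a_product in reactions:
    a_compound = a_target + a_projectile   # compound nucleus mass number
    evaporated = a_compound - a_product    # neutrons evaporated
    print(f"{label}: compound nucleus A = {a_compound}, "
          f"{evaporated} neutron(s) evaporated")
```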
Predicted properties
Very few properties of seaborgium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that seaborgium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of seaborgium metal remain unknown and only predictions are available.
Physical
Seaborgium is expected to be a solid under normal conditions and assume a body-centered cubic crystal structure, similar to its lighter congener tungsten. Early predictions estimated that it should be a very heavy metal with density around 35.0 g/cm3, but calculations in 2011 and 2013 predicted a somewhat lower value of 23–24 g/cm3.
Chemical
Seaborgium is the fourth member of the 6d series of transition metals and the heaviest member of group 6 in the periodic table, below chromium, molybdenum, and tungsten. All the members of the group form a diversity of oxoanions. They readily portray their group oxidation state of +6, although this is highly oxidising in the case of chromium, and this state becomes more and more stable to reduction as the group is descended: indeed, tungsten is the last of the 5d transition metals where all four 5d electrons participate in metallic bonding. As such, seaborgium should have +6 as its most stable oxidation state, both in the gas phase and in aqueous solution, and this is the only positive oxidation state that is experimentally known for it; the +5 and +4 states should be less stable, and the +3 state, the most common for chromium, would be the least stable for seaborgium.
This stabilisation of the highest oxidation state occurs in the early 6d elements because of the similarity between the energies of the 6d and 7s orbitals, since the 7s orbitals are relativistically stabilised and the 6d orbitals are relativistically destabilised. This effect is so large in the seventh period that seaborgium is expected to lose its 6d electrons before its 7s electrons (Sg, [Rn]5f14 6d4 7s2; Sg+, [Rn]5f14 6d3 7s2; Sg2+, [Rn]5f14 6d3 7s1; Sg4+, [Rn]5f14 6d2; Sg6+, [Rn]5f14). Because of the great destabilisation of the 7s orbital, Sg(IV) should be even more unstable than W(IV) and should be very readily oxidised to Sg(VI). The predicted ionic radius of the hexacoordinate Sg6+ ion is 65 pm, while the predicted atomic radius of seaborgium is 128 pm. Nevertheless, the stability of the highest oxidation state is still expected to decrease as Lr(III) > Rf(IV) > Db(V) > Sg(VI). Some predicted standard reduction potentials for seaborgium ions in aqueous acidic solution are as follows:
2 SgO3 + 2 H+ + 2 e− → Sg2O5 + H2O   E0 = −0.046 V
Sg2O5 + 2 H+ + 2 e− → 2 SgO2 + H2O   E0 = +0.11 V
SgO2 + 4 H+ + e− → Sg3+ + 2 H2O   E0 = −1.34 V
Sg3+ + e− → Sg2+   E0 = −0.11 V
Sg3+ + 3 e− → Sg   E0 = +0.27 V
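As a minimal illustration (a sketch, not taken from the source), any of the predicted potentials listed above can be converted to a standard Gibbs energy change through the standard relation ΔG° = −nFE°; the potential used here is the listed value for the reduction of Sg3+ to the metal, and the Faraday constant is the only other input.

```python
# Convert a predicted standard reduction potential into a Gibbs energy change.
# Sketch only; E0 is the value listed above for Sg3+ + 3 e- -> Sg.
F = 96485.0            # Faraday constant, C/mol
n = 3                  # electrons transferred
E0 = 0.27              # predicted standard potential, V
delta_G = -n * F * E0  # standard Gibbs energy change, J/mol
print(f"Delta G ~ {delta_G / 1000:.0f} kJ/mol")  # about -78 kJ/mol
```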
Seaborgium should form a very volatile hexafluoride (SgF6) as well as a moderately volatile hexachloride (SgCl6), pentachloride (SgCl5), and oxychlorides SgO2Cl2 and SgOCl4. SgO2Cl2 is expected to be the most stable of the seaborgium oxychlorides and to be the least volatile of the group 6 oxychlorides, with the sequence MoO2Cl2 > WO2Cl2 > SgO2Cl2. The volatile seaborgium(VI) compounds SgCl6 and SgOCl4 are expected to be unstable to decomposition to seaborgium(V) compounds at high temperatures, analogous to MoCl6 and MoOCl4; this should not happen for SgO2Cl2 due to the much higher energy gap between the highest occupied and lowest unoccupied molecular orbitals, despite the similar Sg–Cl bond strengths (similarly to molybdenum and tungsten).
Molybdenum and tungsten are very similar to each other and show important differences from the smaller chromium, and seaborgium is expected to follow the chemistry of tungsten and molybdenum quite closely, forming an even greater variety of oxoanions, the simplest among them being seaborgate, SgO42−, which would form from the rapid hydrolysis of [Sg(H2O)6]6+, although this would take place less readily than with molybdenum and tungsten, as expected from seaborgium's greater size. Seaborgium should hydrolyse less readily than tungsten in hydrofluoric acid at low concentrations, but more readily at high concentrations, also forming fluoro complexes such as SgO3F−: complex formation competes with hydrolysis in hydrofluoric acid.
Experimental chemistry
Experimental chemical investigation of seaborgium has been hampered due to the need to produce it one atom at a time, its short half-life, and the resulting necessary harshness of the experimental conditions. The isotope 265Sg and its isomer 265mSg are advantageous for radiochemistry: they are produced in the 248Cm(22Ne,5n) reaction.
In the first experimental chemical studies of seaborgium in 1995 and 1996, seaborgium atoms were produced in the reaction 248Cm(22Ne,4n)266Sg, thermalised, and reacted with an O2/HCl mixture. The adsorption properties of the resulting oxychloride were measured and compared with those of molybdenum and tungsten compounds. The results indicated that seaborgium formed a volatile oxychloride akin to those of the other group 6 elements, and confirmed the decreasing trend of oxychloride volatility down group 6:
Sg + O2 + 2 HCl → SgO2Cl2 + H2
In 2001, a team continued the study of the gas phase chemistry of seaborgium by reacting the element with O2 in a H2O environment. In a manner similar to the formation of the oxychloride, the results of the experiment indicated the formation of seaborgium oxide hydroxide, a reaction well known among the lighter group 6 homologues as well as the pseudohomologue uranium.
2 Sg + 3 O2 → 2 SgO3
SgO3 + H2O → SgO2(OH)2
Predictions on the aqueous chemistry of seaborgium have largely been confirmed. In experiments conducted in 1997 and 1998, seaborgium was eluted from cation-exchange resin using an HNO3/HF solution, most likely as neutral SgO2F2 or the anionic complex ion [SgO2F3]− rather than SgO42−. In contrast, in 0.1 M nitric acid, seaborgium does not elute, unlike molybdenum and tungsten, indicating that the hydrolysis of [Sg(H2O)6]6+ only proceeds as far as the cationic complex [Sg(OH)4(H2O)]2+ or [SgO(OH)3(H2O)2]+, while that of molybdenum and tungsten proceeds to neutral [MO2(OH)2].
The only oxidation state known for seaborgium other than the group oxidation state of +6 is the zero oxidation state. Similarly to its three lighter congeners, which form chromium hexacarbonyl, molybdenum hexacarbonyl, and tungsten hexacarbonyl, seaborgium was shown in 2014 to also form seaborgium hexacarbonyl, Sg(CO)6. Like its molybdenum and tungsten homologues, seaborgium hexacarbonyl is a volatile compound that reacts readily with silicon dioxide.
Absence in nature
Searches for long-lived primordial nuclides of seaborgium in nature have all yielded negative results. One 2022 study estimated the concentration of seaborgium atoms in natural tungsten (its chemical homolog) is less than atom(Sg)/atom(W).
| Physical sciences | Group 6 | Chemistry |
28149 | https://en.wikipedia.org/wiki/Serpens | Serpens | Serpens () is a constellation in the northern celestial hemisphere. One of the 48 constellations listed by the 2nd-century astronomer Ptolemy, it remains one of the 88 modern constellations designated by the International Astronomical Union. It is unique among the modern constellations in being split into two non-contiguous parts, Serpens Caput (Serpent Head) to the west and Serpens Cauda (Serpent Tail) to the east. Between these two halves lies the constellation of Ophiuchus, the "Serpent-Bearer". In figurative representations, the body of the serpent is represented as passing behind Ophiuchus between Mu Serpentis in Serpens Caput and Nu Serpentis in Serpens Cauda.
The brightest star in Serpens is the red giant star Alpha Serpentis, or Unukalhai, in Serpens Caput, with an apparent magnitude of 2.63. Also located in Serpens Caput are the naked-eye globular cluster Messier 5 and the naked-eye variables R Serpentis and Tau4 Serpentis. Notable extragalactic objects include Seyfert's Sextet, one of the densest galaxy clusters known; Arp 220, the prototypical ultraluminous infrared galaxy; and Hoag's Object, the most famous of the very rare class of galaxies known as ring galaxies.
Part of the Milky Way's galactic plane passes through Serpens Cauda, which is therefore rich in galactic deep-sky objects, such as the Eagle Nebula (IC 4703) and its associated star cluster Messier 16. The nebula measures 70 light-years by 50 light-years and contains the Pillars of Creation, three dust clouds that became famous for the image taken by the Hubble Space Telescope. Other striking objects include the Red Square Nebula, one of the few objects in astronomy to take on a square shape; and Westerhout 40, a massive nearby star-forming region consisting of a molecular cloud and an H II region.
History
In Greek mythology, Serpens represents a snake held by the healer Asclepius. Represented in the sky by the constellation Ophiuchus, Asclepius once killed a snake, but the animal was subsequently resurrected after a second snake placed a revival herb on it before its death. As snakes shed their skin every year, they were known as the symbol of rebirth in ancient Greek society, and legend says Asclepius would revive dead humans using the same technique he witnessed. Although this is likely the logic for Serpens' presence with Ophiuchus, the true reason is still not fully known. Sometimes, Serpens was depicted as coiling around Ophiuchus, but the majority of atlases showed Serpens passing either behind Ophiuchus' body or between his legs.
In some ancient atlases, the constellations Serpens and Ophiuchus were depicted as two separate constellations, although more often they were shown as a single constellation. One notable figure to depict Serpens separately was Johann Bayer; thus, Serpens' stars are cataloged with separate Bayer designations from those of Ophiuchus. When Eugène Delporte established modern constellation boundaries in the 1920s, he elected to depict the two separately. However, this posed the problem of how to disentangle the two constellations, with Delporte deciding to split Serpens into two areas, the head and the tail, separated by the continuous Ophiuchus. These two areas became known as Serpens Caput and Serpens Cauda, caput being the Latin word for head and cauda the Latin word for tail.
In Chinese astronomy, most of the stars of Serpens represented part of a wall surrounding a marketplace, known as Tianshi, which was in Ophiuchus and part of Hercules. Serpens also contains a few Chinese constellations. Two stars in the tail represented part of Shilou, the tower with the market office. Another star in the tail represented Liesi, jewel shops. One star in the head (Mu Serpentis) marked Tianru, the crown prince's wet nurse, or sometimes rain.
There were two "serpent" constellations in Babylonian astronomy, known as Mušḫuššu and Bašmu. It appears that Mušḫuššu was depicted as a hybrid of a dragon, a lion and a bird, and loosely corresponded to Hydra. Bašmu was a horned serpent (c.f. Ningishzida) and roughly corresponds to the Ὄφις constellation of Eudoxus of Cnidus on which the Ὄφις (Serpens) of Ptolemy is based.
Characteristics
Serpens is the only one of the 88 modern constellations to be split into two disconnected regions in the sky: Serpens Caput (the head) and Serpens Cauda (the tail). The constellation is also unusual in that it depends on another constellation for context; specifically, it is being held by the Serpent Bearer Ophiuchus.
Serpens Caput is bordered by Libra to the south, Virgo and Boötes to the west, Corona Borealis to the north, and Ophiuchus and Hercules to the east; Serpens Cauda is bordered by Sagittarius to the south, Scutum and Aquila to the east, and Ophiuchus to the north and west. Covering 636.9 square degrees total, it ranks 23rd of the 88 constellations in size. It appears prominently in both the northern and southern skies during the Northern Hemisphere's summer. Its main asterism consists of 11 stars, and 108 stars in total are brighter than magnitude 6.5, the traditional limit for naked-eye visibility.
Serpens Caput's boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a 10-sided polygon, while Serpens Cauda's are defined by a 22-sided polygon. In the equatorial coordinate system, the right ascension coordinates of Serpens Caput's borders lie between and , while the declination coordinates are between and . Serpens Cauda's boundaries lie between right ascensions of and and declinations of and . The International Astronomical Union (IAU) adopted the three-letter abbreviation "Ser" for the constellation in 1922.
Features
Stars
Head stars
Marking the heart of the serpent is the constellation's brightest star, Alpha Serpentis. Traditionally called Unukalhai, it is a red giant of spectral type K2III located approximately 23 parsecs distant with a visual magnitude of 2.630 ± 0.009, meaning it can easily be seen with the naked eye even in areas with substantial light pollution. A faint companion is in orbit around the red giant star, although it is not visible to the naked eye. Situated near Alpha is Lambda Serpentis, a magnitude 4.42 ± 0.05 star rather similar to the Sun positioned only 12 parsecs away. It has an exoplanet orbiting around it. Another solar analog in Serpens is the primary of Psi Serpentis, a binary star located slightly further away at approximately 14 parsecs.
Beta, Gamma, and Iota Serpentis form a distinctive triangular shape marking the head of the snake, with Kappa Serpentis (the proper name is Gudja) being roughly midway between Gamma and Iota. The brightest of the four with an apparent magnitude of roughly 3.67, Beta Serpentis is a white main-sequence star roughly 160 parsecs distant. It is likely that a nearby 10th-magnitude star is physically associated with Beta, although it is not certain. The Mira variable R Serpentis, situated between Beta and Gamma, is visible to the naked eye at its maximum of 5th-magnitude, but, typical of Mira variables, it can fade to below magnitude 14. Gamma Serpentis itself is an F-type subgiant located only 11 parsecs distant and thus is quite bright, being of magnitude 3.84 ± 0.05. The star is known to show solar-like oscillations. Iota Serpentis is a binary star system.
Delta Serpentis, forming part of the body of the snake between the heart and the head, is a multiple star system positioned around 70 parsecs from Earth. Consisting of four stars, the system has a total apparent magnitude of 3.79 as viewed from Earth, although two of the stars, with a combined apparent magnitude of 3.80, provide nearly all the light. The primary, a white subgiant, is a Delta Scuti variable with an average apparent magnitude of 4.23. Positioned very near Delta, both in the night sky and likely in actual space at an estimated distance of around 70 parsecs, is the barium star 16 Serpentis. Another notable variable star visible to the naked eye is Chi Serpentis, an Alpha² Canum Venaticorum variable situated midway between Delta and Beta which varies from its median brightness of 5.33 by 0.03 magnitudes over a period of approximately 1.5 days. Chi Serpentis is a chemically peculiar star.
The two stars in Serpens Caput that form part of the Snake's body below the heart are Epsilon and Mu Serpentis, both third-magnitude A-type main-sequence stars. Both have a peculiarity: Epsilon is an Am star, while Mu is a binary. Located slightly northwest of Mu is 36 Serpentis, another A-type main-sequence star. This star also has a peculiarity; it is a binary with the primary component being a Lambda Boötis star, meaning that it has solar-like amounts of carbon, nitrogen, and oxygen, while containing very low amounts of iron peak elements. The secondary star has also been a source of X-ray emissions. 25 Serpentis, positioned a few degrees northeast of Mu Serpentis, is a spectroscopic binary consisting of a hot B-type giant and an A-type main-sequence star. The primary is a slowly pulsating B star, which causes the system to vary by 0.03 magnitudes.
Serpens Caput contains many RR Lyrae variables, although most are too faint to be seen without professional photography. The brightest is VY Serpentis, only of 10th magnitude. This star's period has been increasing by approximately 1.2 seconds per century. A variable star of a different kind is Tau4 Serpentis, a cool red giant that pulsates between magnitudes 5.89 and 7.07 in 87 days. This star has been found to display an inverse P Cygni profile, where cold infalling gas on to the star creates redshifted hydrogen absorption lines next to the normal emission lines.
Several stars in Serpens have been found to have planets. The brightest, Omega Serpentis, located between Epsilon and Mu, is an orange giant with a planet of at least 1.7 Jupiter-masses. NN Serpentis, an eclipsing post-common-envelope binary consisting of a white dwarf and a red dwarf, is very likely to have two planets causing variations in the period of the eclipses. Although it does not have a planet, the solar analog HD 137510 has been found to have a brown dwarf companion within the brown-dwarf desert.
PSR B1534+11 is a system consisting of two neutron stars orbiting each other, one of which is a pulsar with a period of 37.9 milliseconds. Situated approximately 1000 parsecs distant, the system was used to test Albert Einstein's theory of general relativity, validating the system's relativistic parameters to within 0.2% of values predicted by the theory. The X-ray emission from the system has been found to be present when the non-pulsar star intersects the equatorial pulsar wind of the pulsar, and the system's orbit has been found to vary slightly.
Tail stars
The brightest star in the tail, Eta Serpentis, is similar to Alpha Serpentis' primary in that it is a red giant of spectral class K. This star, however, is known to exhibit solar-like oscillations over a period of approximately 2.16 hours. The other two stars in Serpens Cauda forming its asterism are Theta and Xi Serpentis. Xi, where the asterism crosses over to Mu Serpentis in the head, is a triple star system located approximately 105 parsecs away. Two of the stars, with a combined apparent magnitude of around 3.5, form a spectroscopic binary with an angular separation of only 2.2 milliarcseconds, and thus cannot be resolved with modern equipment. The primary is a white giant with an excess of strontium. Theta, forming the tip of the tail, is also a multiple system, consisting of two A-type main-sequence stars with a combined apparent magnitude of around 4.1 separated by almost half an arcminute. There is also a third G-type star with a mass and radius similar to that of the Sun.
Lying near the boundary with Ophiuchus are Zeta, Nu, and Omicron Serpentis. All three are 4th-magnitude main-sequence stars, with Nu and Omicron being of spectral type A and Zeta being of spectral type F. Nu is a single star with a 9th-magnitude visual companion, while Omicron is a Delta Scuti variable with amplitude variations of 0.01 magnitudes. In 1909, the symbiotic nova RT Serpentis appeared near Omicron, although it only reached a maximum magnitude of 10.
The star system 59 Serpentis, also known as d Serpentis, is a triple star system consisting of a spectroscopic binary, itself containing an A-type star and an orange giant, and an orange giant secondary. The system shows irregular variations in brightness between magnitudes 5.17 and 5.2. In 1970, the nova FH Serpentis appeared just slightly north of 59 Serpentis, reaching a maximum brightness of 4.5. Also near 59 Serpentis in the Serpens Cloud are several Orion variables. MWC 297 is a Herbig Be star that in 1994 exhibited a large X-ray flare and increased in X-ray luminosity by five times before returning to the quiescent state. The star also appears to possess a circumstellar disk. Another Orion variable in the region is VV Serpentis, a Herbig Ae star that has been found to exhibit Delta Scuti pulsations. VV Serpentis has also, like MWC 297, been found to have a dusty disk surrounding it, and is also a UX Orionis star, meaning that it shows irregular variations in its brightness.
The star HR 6958, also known as MV Serpentis, is an Alpha2 Canum Venaticorum variable that is faintly visible to the naked eye. The star's metal abundance is ten times higher than the Sun's for most metals at the iron peak and up to 1,000 times more for heavier elements. It has also been found to contain excess silicon. Barely visible to the naked eye is HD 172365, a likely post-blue straggler in the open cluster IC 4756 that contains a large excess of lithium. HD 172189, also located in IC 4756, is an Algol variable eclipsing binary with a 5.70 day period. The primary star in the system is also a Delta Scuti variable, undergoing multiple pulsation frequencies, which, combined with the eclipses, causes the system to vary by around a tenth of a magnitude.
As the galactic plane passes through it, Serpens Cauda contains many massive OB stars. Several of these are visible to the naked eye, such as NW Serpentis, an early Be star that has been found to be somewhat variable. The variability is interesting; according to one study, it could be one of the first discovered hybrids between Beta Cephei variables and slowly pulsating B stars. Although not visible to the naked eye, HD 167971 (MY Serpentis) is a Beta Lyrae variable triple system consisting of three very hot O-type stars. A member of the cluster NGC 6604, the two eclipsing stars are both blue giants, with one being of the very early spectral type O7.5III. The remaining star is either a blue giant or supergiant of a late O or early B spectral type. Also an eclipsing binary, the HD 166734 system consists of two O-type blue supergiants in orbit around each other. Less extreme in terms of mass and temperature is HD 161701, a spectroscopic binary consisting of a B-type primary and an Ap secondary, although it is the only known spectroscopic binary to consist of a star with excess of mercury and manganese and an Ap star.
South of the Eagle Nebula on the border with Sagittarius is the eclipsing binary W Serpentis, whose primary is a white giant that is interacting with the secondary. The system has been found to contain an accretion disk, and was one of the first discovered Serpentids, which are eclipsing binaries containing exceptionally strong far-ultraviolet spectral lines. It is suspected that such Serpentids are in an earlier evolutionary phase, and will evolve first into double periodic variables and then classical Algol variables. Also near the Eagle Nebula is the eclipsing Wolf–Rayet binary CV Serpentis, consisting of a Wolf–Rayet star and a hot O-type subgiant. The system is surrounded by a ring-shaped nebula, likely formed during the Wolf–Rayet phase of the primary. The eclipses of the system vary erratically, and although there are two theories as to why, neither of them is completely consistent with current understanding of stars.
Serpens Cauda contains a few X-ray binaries. One of these, GX 17+2, is a low-mass X-ray binary consisting of a neutron star and, as in all low-mass X-ray binaries, a low-mass star. The system has been classified as a Sco-like Z source, meaning that its accretion is near the Eddington limit. The system has also been found to brighten by around 3.5 K-band magnitudes approximately every 3 days, possibly due to the presence of a synchrotron jet. Another low-mass X-ray binary, Serpens X-1, undergoes occasional X-ray bursts. One in particular lasted nearly four hours, possibly explained by the burning of carbon in "a heavy element ocean".
Φ 332 (Finsen 332) is a tiny and difficult double-double star at 18:45 / +5°30', named Tweedledee and Tweedledum by South African astronomer William Stephen Finsen, who was struck by the nearly identical position angles and separations at the time of his 1953 discovery. Gliese 710 is a star that is expected to pass very close to the Solar System in around 1.29 million years.
Deep-sky objects
Head objects
As the galactic plane does not pass through this part of Serpens, a view of many galaxies beyond it is possible. However, a few structures of the Milky Way Galaxy are present in Serpens Caput, such as Messier 5, a globular cluster positioned approximately 8° southwest of α Serpentis, next to the star 5 Serpentis. It is barely visible to the naked eye under good conditions and is located approximately 25,000 ly distant. Messier 5 contains a large number of known RR Lyrae variable stars, and is receding from us at over 50 km/s. The cluster contains two millisecond pulsars, one of which is in a binary, allowing the proper motion of the cluster to be measured. The binary could help our understanding of neutron degenerate matter; the current median mass, if confirmed, would exclude any "soft" equation of state for such matter. The cluster has been used to test for magnetic dipole moments in neutrinos, which could shed light on some hypothetical particles such as the axion. The brightest stars in Messier 5 are around magnitude 10.6, and the globular cluster was observed by William Herschel in 1791.
Another globular cluster is Palomar 5, found just south of Messier 5. Many stars are leaving this globular cluster due to the Milky Way's gravity, forming a tidal tail over 30000 light-years long. It is over 11 billion years old. It has also been flattened and distorted by tidal effects.
L134/L183 is a dark nebula complex that, along with a third cloud, is likely formed by fragments of a single original cloud located 36 degrees away from the galactic plane, a large distance for dark nebulae. The entire complex is thought to be around 140 parsecs distant. L183, also referred to as L134N, is home to several infrared sources, indicating pre-stellar sources thought to represent the first known observation of the contraction phase between cloud cores and prestellar cores. The core is split into three regions, with a combined mass of around 25 solar masses.
Outside of the Milky Way, there are no bright deep-sky objects for amateur astronomers in Serpens Caput, with nothing else above 10th magnitude. The brightest is NGC 5962, a spiral galaxy positioned around 28 megaparsecs distant with an apparent magnitude of 11.34. Two supernovae have been observed in the galaxy, and NGC 5962 has two satellite galaxies. Slightly fainter is NGC 5921, a barred spiral galaxy with a LINER-type active galactic nucleus situated somewhat closer at a distance of 21 megaparsecs. A type II supernova was observed in this galaxy in 2001 and was designated SN 2001X. Fainter still are the spirals NGC 5964 and NGC 6118, with the latter being host to the supernova SN 2004dk.
Hoag's Object, located 600 million light-years from Earth, is a member of the very rare class of galaxies known as ring galaxies. The outer ring is largely composed of young blue stars while the core is made up of older yellow stars. The predominant theory regarding its formation is that the progenitor galaxy was a barred spiral galaxy whose arms had velocities too great to keep the galaxy's coherence and therefore detached. Arp 220 is another unusual galaxy in Serpens. The prototypical ultraluminous infrared galaxy, Arp 220 is somewhat closer than Hoag's Object at 250 million light-years from Earth. It consists of two large spiral galaxies in the process of colliding with their nuclei orbiting at a distance of 1,200 light-years, causing extensive star formation throughout both components. It possesses a large cluster of more than a billion stars, partially covered by thick dust clouds near one of the galaxies' core. Another interacting galaxy pair, albeit in an earlier stage, consists of the galaxies NGC 5953 and NGC 5954. In this case, both are active galaxies, with the former a Seyfert 2 galaxy and the latter a LINER-type galaxy. Both are undergoing a burst of star formation triggered by the interaction.
Seyfert's Sextet is a group of six galaxies, four of which are interacting gravitationally and two of which simply appear to be a part of the group despite their greater distance. The gravitationally bound cluster lies at a distance of 190 million light-years from Earth and is approximately 100,000 light-years across, making Seyfert's Sextet one of the densest galaxy groups known. Astronomers predict that the four interacting galaxies will eventually merge to form a large elliptical galaxy. The radio source 3C 326 was originally thought to emanate from a giant elliptical galaxy. However, in 1990, it was shown that the source is instead a brighter, smaller galaxy a few arcseconds north. This object, designated 3C 326 N, has enough gas for star formation, but is being inhibited due to the energy from the radio galaxy nucleus.
A much larger galaxy cluster is the redshift-0.0354 Abell 2063. The cluster is thought to be interacting with the nearby galaxy group MKW 3s, based on radial velocity measurements of galaxies and the positioning of the cD galaxy at the center of Abell 2063. The active galaxy at the center of MKW 3s, NGC 5920, appears to be creating a bubble of hot gas from its radio activity. Near the 5th-magnitude star Pi Serpentis lies AWM 4, a cluster containing an excess of metals in the intracluster medium. The central galaxy, NGC 6051, is a radio galaxy that is probably responsible for this enrichment. Similar to AWM 4, the cluster Abell 2052 has a central cD radio galaxy, 3C 317. This radio galaxy is believed to have restarted after a period of inactivity less than 200 years ago. The galaxy has over 40,000 known globular clusters, the highest known total of any galaxy as of 2002.
Consisting of two quasars with a separation of less than 5 arcseconds, the quasar pair 4C 11.50 is one of the visually closest pairs of quasars in the sky. The two have markedly different redshifts, however, and are thus unrelated. The foreground member of the pair (4C 11.50 A) does not have enough mass to refract light from the background component (4C 11.50 B) enough to produce a lensed image, although it does have a true companion of its own. An even stranger galaxy pair is 3C 321. Unlike the previous pair, the two galaxies making up 3C 321 are interacting with each other and are in the process of merging. Both members appear to be active galaxies; the primary radio galaxy may be responsible for the activity in the secondary by means of the former's jet driving material onto the latter's supermassive black hole.
An example of gravitational lensing is found in the radio galaxy 3C 324. First thought to be a single overluminous radio galaxy with a redshift of z = 1.206, it was found in 1987 to actually be two galaxies, with the radio galaxy at the aforementioned redshift being lensed by another galaxy at redshift z = 0.845. The first example of a multiply-imaged radio galaxy discovered, the source appears to be an elliptical galaxy with a dust lane obscuring our view of the visual and ultraviolet emission from the nucleus. In even shorter wavelengths, the BL Lac object PG 1553+113 is a heavy emitter of gamma rays. This object is the most distant found to emit photons with energies in the TeV range as of 2007. The spectrum is unique, with hard emission in some ranges of the gamma-ray spectrum in stark contrast to soft emission in others. In 2012, the object flared in the gamma-ray spectrum, tripling in luminosity for two nights, allowing the redshift to be accurately measured as z = 0.49.
Several gamma-ray bursts (GRBs) have been observed in Serpens Caput, such as GRB 970111, one of the brightest GRBs observed. An optical transient event associated with this GRB has not been found, despite its intensity. The host galaxy initially also proved elusive; however, it now appears that the host is a Seyfert I galaxy located at redshift z = 0.657. The X-ray afterglow of the GRB has also been much fainter than for other dimmer GRBs. More distant is GRB 060526 (redshift z = 3.221), from which X-ray and optical afterglows were detected. This GRB was very faint for a long-duration GRB.
Tail objects
Part of the galactic plane passes through the tail, and thus Serpens Cauda is rich in deep-sky objects within the Milky Way galaxy. The Eagle Nebula and its associated star cluster, Messier 16, lie around 5,700 light-years from Earth in the direction of the Galactic Center. The nebula measures 70 light-years by 50 light-years and contains the Pillars of Creation, three dust clouds that became famous for the image taken by the Hubble Space Telescope. The stars being born in the Eagle Nebula, added to those with an approximate age of 5 million years, have an average temperature of 45,000 kelvins and produce prodigious amounts of radiation that will eventually destroy the dust pillars. Despite its fame, the Eagle Nebula is fairly dim, with an integrated magnitude of approximately 6.0. The star-forming regions in the nebula are often evaporating gaseous globules; unlike Bok globules they only hold one protostar.
North of Messier 16, at a distance of approximately 2000 parsecs, is the OB association Serpens OB2, containing over 100 OB stars. Around 5 million years old, the association appears to still contain star-forming regions, and the light from its stars is illuminating the HII region S 54. Within this HII region is the open cluster NGC 6604, which is the same age as the surrounding OB association, and the cluster is now thought to simply be the densest part of it. The cluster appears to be producing a thermal chimney of ionized gas, caused by the interaction of the gas from the galactic disk with the galactic halo.
Another open cluster in Serpens Cauda is IC 4756, containing at least one naked-eye star, HD 172365 (another naked-eye star in the vicinity, HD 171586, is most likely unrelated). Positioned approximately 440 parsecs distant, the cluster is estimated to be around 800 million years old, quite old for an open cluster. Despite the presence of the Milky Way in Serpens Cauda, one globular cluster can be found: NGC 6535, although invisible to the naked eye, can be made out in small telescopes just north of Zeta Serpentis. Rather small and sparse for a globular cluster, this cluster contains no known RR Lyrae variables, which is unusual for a globular cluster.
MWC 922 is a star surrounded by a planetary nebula. Dubbed the Red Square Nebula due to its similarities to the Red Rectangle Nebula, the planetary nebula appears to be a nearly perfect square with a dark band around the equatorial regions. The nebula contains concentric rings, which are similar to those seen in the supernova SN 1987A. MWC 922 itself is an FS Canis Majoris variable, meaning that it is a Be star containing exceptionally bright hydrogen emission lines as well as select forbidden lines, likely due to the presence of a close binary. East of Xi Serpentis is another planetary nebula, Abell 41, containing the binary star MT Serpentis at its center. The nebula appears to have a bipolar structure, and the axis of symmetry of the nebula has been found to be within 5° of the line perpendicular to the orbital plane of the stars, strengthening the link between binary stars and bipolar planetary nebulae. On the other end of the stellar age spectrum is L483, a dark nebula which contains the protostar IRAS 18418-0440. Although classified as a class 0 protostar, it has some unusual features for such an object, such as a lack of high-velocity stellar winds, and it has been proposed that this object is in transition between class 0 and class I. A variable nebula exists around the protostar, although it is only visible in infrared light.
The Serpens cloud is a massive star-forming molecular cloud situated in the southern part of Serpens Cauda. Only two million years old and 420 parsecs distant, the cloud is known to contain many protostars such as Serpens FIRS 1 and Serpens SVS 20. The Serpens South protocluster was uncovered by NASA's Spitzer Space Telescope in the southern portion of the cloud, and it appears that star formation is still continuing in the region. Another site of star formation is the Westerhout 40 complex, consisting of a prominent HII region adjacent to a molecular cloud. Located around 500 parsecs distant, it is one of the nearest massive regions of star formation, but as the molecular cloud obscures the HII region, rendering it and its embedded cluster tough to see visibly, it is not as well-studied as others. The embedded cluster likely contains over 600 stars above 0.1 solar masses, with several massive stars, including at least one O-type star, being responsible for lighting the HII region and the production of a bubble.
Despite the presence of the Milky Way, several active galaxies are visible in Serpens Cauda as well, such as PDS 456, found near Xi Serpentis. The most intrinsically luminous nearby active galaxy, this AGN has been found to be extremely variable in the X-ray spectrum. This has allowed light to be shed on the nature of the supermassive black hole at the center, likely a Kerr black hole. It is possible that the quasar is undergoing a transition from an ultraluminous infrared galaxy to a classical radio-quiet quasar, but there are problems with this theory, and the object appears to be exceptional, not lying completely within current classification systems. Nearby is NRAO 530, a blazar that has been known to flare in the X-rays occasionally. One of these flares lasted less than 2,000 seconds, making it the shortest flare ever observed in a blazar as of 2004. The blazar also appears to show periodic variability in its radio wave output over two different periods of six and ten years.
Meteor showers
There are two daytime meteor showers that radiate from Serpens, the Omega Serpentids and the Sigma Serpentids. Both showers peak between December 18 and December 25.
Search for extraterrestrial intelligence
The search for extraterrestrial intelligence (SETI) is a collective term for scientific searches for intelligent extraterrestrial life. Methods include monitoring electromagnetic radiation for signs of transmissions from civilizations on other planets, optical observation, and the search for physical artifacts. Attempts to message extraterrestrial intelligences have also been made.
Scientific investigation began shortly after the advent of radio in the early 1900s, and focused international efforts have been ongoing since the 1980s. In 2015, Stephen Hawking and Israeli billionaire Yuri Milner announced the Breakthrough Listen Project, a $100 million 10-year attempt to detect signals from nearby stars.
SETI has been criticized for being overly hopeful, as there is a lack of evidence for the existence of life (especially intelligent life) beyond Earth. It has also been claimed to be unfalsifiable, as well as being close to ufology.
History
Early work
There have been many earlier searches for extraterrestrial intelligence within the Solar System. In 1896, Nikola Tesla suggested that an extreme version of his wireless electrical transmission system could be used to contact beings on Mars. In 1899, while conducting experiments at his Colorado Springs experimental station, he thought he had detected a signal from Mars since an odd repetitive static signal seemed to cut off when Mars set in the night sky. Analysis of Tesla's research has led to a range of explanations including:
that Tesla simply misunderstood the new technology he was working with,
that he may have been observing signals from Marconi's European radio experiments,
and even that he could have picked up naturally occurring radio noise caused by a moon of Jupiter (Io) moving through the magnetosphere of Jupiter.
In the early 1900s, Guglielmo Marconi, Lord Kelvin and David Peck Todd also stated their belief that radio could be used to contact Martians, with Marconi stating that his stations had also picked up potential Martian signals.
On August 21–23, 1924, Mars entered an opposition closer to Earth than at any time in the century before or the next 80 years. In the United States, a "National Radio Silence Day" was promoted during a 36-hour period from August 21–23, with all radios quiet for five minutes on the hour, every hour. At the United States Naval Observatory, a radio receiver, tuned to a wavelength between 8 and 9 km, was lifted above the ground in a dirigible, using a "radio-camera" developed by Amherst College and Charles Francis Jenkins. The program was led by David Peck Todd with the military assistance of Admiral Edward W. Eberle (Chief of Naval Operations), with William F. Friedman (chief cryptographer of the United States Army) assigned to translate any potential Martian messages.
A 1959 paper by Philip Morrison and Giuseppe Cocconi first pointed out the possibility of searching the microwave spectrum. It proposed frequencies and a set of initial targets.
In 1960, Cornell University astronomer Frank Drake performed the first modern SETI experiment, named "Project Ozma" after the Queen of Oz in L. Frank Baum's fantasy books. Drake used a radio telescope in diameter at Green Bank, West Virginia, to examine the stars Tau Ceti and Epsilon Eridani near the 1.420 gigahertz marker frequency, a region of the radio spectrum dubbed the "water hole" due to its proximity to the hydrogen and hydroxyl radical spectral lines. A 400 kilohertz band around the marker frequency was scanned using a single-channel receiver with a bandwidth of 100 hertz. He found nothing of interest.
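As a quick sense of scale, stepping a 100-hertz-wide single-channel receiver across a 400 kilohertz band implies on the order of a few thousand distinct tuning positions. A minimal Python sketch of that arithmetic, using only the figures quoted above:

# Back-of-the-envelope: tuning positions needed to cover Project Ozma's band
band_hz = 400e3        # band scanned around the 1.420 GHz marker frequency
channel_bw_hz = 100.0  # single-channel receiver bandwidth
print(f"~{band_hz / channel_bw_hz:.0f} tuning positions")  # ~4000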
Soviet scientists took a strong interest in SETI during the 1960s and performed a number of searches with omnidirectional antennas in the hope of picking up powerful radio signals. Soviet astronomer Iosif Shklovsky wrote the pioneering book in the field, Universe, Life, Intelligence (1962), which was expanded upon by American astronomer Carl Sagan as the best-selling book Intelligent Life in the Universe (1966).
In the March 1955 issue of Scientific American, John D. Kraus described an idea to scan the cosmos for natural radio signals using a flat-plane radio telescope equipped with a parabolic reflector. Within two years, his concept was approved for construction by Ohio State University. With a total of US$71,000 in grants from the National Science Foundation, construction began on a plot in Delaware, Ohio. This Ohio State University Radio Observatory telescope was called "Big Ear". Later, it began the world's first continuous SETI program, called the Ohio State University SETI program.
In 1971, NASA funded a SETI study that involved Drake, Barney Oliver of Hewlett-Packard Laboratories, and others. The resulting report proposed the construction of an Earth-based radio telescope array with 1,500 dishes known as "Project Cyclops". The price tag for the Cyclops array was US$10 billion. Cyclops was not built, but the report formed the basis of much SETI work that followed.
The Ohio State SETI program gained fame on August 15, 1977, when Jerry Ehman, a project volunteer, witnessed a startlingly strong signal received by the telescope. He quickly circled the indication on a printout and scribbled the exclamation "Wow!" in the margin. Dubbed the Wow! signal, it is considered by some to be the best candidate for a radio signal from an artificial, extraterrestrial source ever discovered, but it has not been detected again in several additional searches.
On 24 May 2023, a test extraterrestrial signal, in the form of a "coded radio signal from Mars", was transmitted to radio telescopes on Earth, according to a report in The New York Times.
Sentinel, META, and BETA
In 1980, Carl Sagan, Bruce Murray, and Louis Friedman founded the U.S. Planetary Society, partly as a vehicle for SETI studies.
In the early 1980s, Harvard University physicist Paul Horowitz took the next step and proposed the design of a spectrum analyzer specifically intended to search for SETI transmissions. Traditional desktop spectrum analyzers were of little use for this job, as they sampled frequencies using banks of analog filters and so were restricted in the number of channels they could acquire. However, modern integrated-circuit digital signal processing (DSP) technology could be used to build autocorrelation receivers to check far more channels. This work led in 1981 to a portable spectrum analyzer named "Suitcase SETI" that had a capacity of 131,000 narrow band channels. After field tests that lasted into 1982, Suitcase SETI was put into use in 1983 with the Harvard/Smithsonian radio telescope at Oak Ridge Observatory in Harvard, Massachusetts. This project was named "Sentinel" and continued into 1985.
Even 131,000 channels were not enough to search the sky in detail at a fast rate, so Suitcase SETI was followed in 1985 by Project "META", for "Megachannel Extra-Terrestrial Assay". The META spectrum analyzer had a capacity of 8.4 million channels and a channel resolution of 0.05 hertz. An important feature of META was its use of frequency Doppler shift to distinguish between signals of terrestrial and extraterrestrial origin. The project was led by Horowitz with the help of the Planetary Society, and was partly funded by movie maker Steven Spielberg. A second such effort, META II, was begun in Argentina in 1990, to search the southern sky, receiving an equipment upgrade in 1996–1997.
The follow-on to META was named "BETA", for "Billion-channel Extraterrestrial Assay", and it commenced observation on October 30, 1995. The heart of BETA's processing capability consisted of 63 dedicated fast Fourier transform (FFT) engines, each capable of performing a 2²²-point complex FFT in two seconds, and 21 general-purpose personal computers equipped with custom digital signal processing boards. This allowed BETA to receive 250 million simultaneous channels with a resolution of 0.5 hertz per channel. It scanned through the microwave spectrum from 1.400 to 1.720 gigahertz in eight hops, with two seconds of observation per hop. An important capability of the BETA search was rapid and automatic re-observation of candidate signals, achieved by observing the sky with two adjacent beams, one slightly to the east and the other slightly to the west. A successful candidate signal would first transit the east beam and then the west beam, doing so at a speed consistent with Earth's sidereal rotation rate. A third receiver observed the horizon to veto signals of obvious terrestrial origin. On March 23, 1999, the 26-meter radio telescope on which Sentinel, META and BETA were based was blown over by strong winds and seriously damaged. This forced the BETA project to cease operation.
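The quoted BETA figures are linked by the basic relation that an FFT channel's resolution equals the input sample rate divided by the transform length. The Python sketch below back-calculates the complex bandwidth each engine must have handled and the aggregate bandwidth implied by the channel count; how channels were divided among the east, west, and horizon-veto beams is not stated here, so these are only implied totals.

# FFT channelization arithmetic implied by the BETA figures above
fft_points = 2 ** 22                       # one 2^22-point complex FFT per engine
resolution_hz = 0.5                        # quoted channel resolution
per_engine_bw_hz = fft_points * resolution_hz        # ~2.1 MHz of complex bandwidth per engine
total_channels = 250e6                     # quoted simultaneous channels
implied_total_bw_hz = total_channels * resolution_hz  # ~125 MHz across all beams
print(f"{per_engine_bw_hz / 1e6:.1f} MHz per engine, "
      f"{implied_total_bw_hz / 1e6:.0f} MHz implied in total")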
MOP and Project Phoenix
In 1978, the NASA SETI program had been heavily criticized by Senator William Proxmire, and funding for SETI research was removed from the NASA budget by Congress in 1981; however, funding was restored in 1982, after Carl Sagan talked with Proxmire and convinced him of the program's value. In 1992, the U.S. government funded an operational SETI program, in the form of the NASA Microwave Observing Program (MOP). MOP was planned as a long-term effort to conduct a general survey of the sky and also carry out targeted searches of 800 specific nearby stars. MOP was to be performed by radio antennas associated with the NASA Deep Space Network, as well as the radio telescope of the National Radio Astronomy Observatory at Green Bank, West Virginia and the radio telescope at the Arecibo Observatory in Puerto Rico. The signals were to be analyzed by spectrum analyzers, each with a capacity of 15 million channels. These spectrum analyzers could be grouped together to obtain greater capacity. Those used in the targeted search had a bandwidth of 1 hertz per channel, while those used in the sky survey had a bandwidth of 30 hertz per channel.
MOP drew the attention of the United States Congress, where the program met opposition and was canceled one year after its start. SETI advocates continued without government funding, and in 1995 the nonprofit SETI Institute of Mountain View, California, resurrected the MOP program under the name of Project "Phoenix", backed by private sources of funding. In 2012, it cost around $2 million per year to maintain SETI research at the SETI Institute and around 10 times that to support different SETI activities globally. Project Phoenix, under the direction of Jill Tarter, was a continuation of the targeted search program from MOP and studied roughly 1,000 nearby Sun-like stars until approximately 2015. From 1995 through March 2004, Phoenix conducted observations at the Parkes radio telescope in Australia, the radio telescope of the National Radio Astronomy Observatory in Green Bank, West Virginia, and the radio telescope at the Arecibo Observatory in Puerto Rico. The project observed the equivalent of 800 stars over the available channels in the frequency range from 1200 to 3000 MHz. The search was sensitive enough to pick up transmitters with 1 GW EIRP to a distance of about 200 light-years.
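As an order-of-magnitude check on that sensitivity figure, the flux arriving from a 1 GW EIRP transmitter at about 200 light-years follows from the inverse-square law. The short Python sketch below uses only the numbers quoted above and ignores bandwidth and integration-time details, so it is indicative rather than definitive:

import math

EIRP_W = 1e9                # 1 GW effective isotropic radiated power
LY_M = 9.461e15             # metres per light-year
d_m = 200 * LY_M            # ~200 light-years in metres
flux = EIRP_W / (4 * math.pi * d_m ** 2)
print(f"{flux:.1e} W/m^2")  # roughly 2e-29 W/m^2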
Ongoing radio searches
Many radio frequencies penetrate Earth's atmosphere quite well, and this led to radio telescopes that investigate the cosmos using large radio antennas. Furthermore, human endeavors emit considerable electromagnetic radiation as a byproduct of communications such as television and radio. These signals would be easy to recognize as artificial due to their repetitive nature and narrow bandwidths. Earth has been sending radio waves from broadcasts into space for over 100 years. These signals have reached over 1,000 stars, most notably Vega, Aldebaran, Barnard's Star, Sirius, and Proxima Centauri. If intelligent alien life exists on any planet orbiting these nearby stars, these signals could be heard and deciphered, even though some of the signal is garbled by the Earth's ionosphere.
Many international radio telescopes are currently being used for radio SETI searches, including the Low Frequency Array (LOFAR) in Europe, the Murchison Widefield Array (MWA) in Australia, and the Lovell Telescope in the United Kingdom.
Allen Telescope Array
The SETI Institute collaborated with the Radio Astronomy Laboratory at the Berkeley SETI Research Center to develop a specialized radio telescope array for SETI studies, similar to a mini-cyclops array. Formerly known as the One Hectare Telescope (1HT), the concept was renamed the "Allen Telescope Array" (ATA) after the project's benefactor, Paul Allen. Its sensitivity is designed to be equivalent to a single large dish more than 100 meters in diameter, if fully completed. Presently, the array has 42 operational dishes at the Hat Creek Radio Observatory in rural northern California.
The full array (ATA-350) is planned to consist of 350 or more offset-Gregorian radio dishes, each in diameter. These dishes are the largest producible with commercially available satellite television dish technology. The ATA was planned for a 2007 completion date, at a cost of US$25 million. The SETI Institute provided money for building the ATA while University of California, Berkeley designed the telescope and provided operational funding. The first portion of the array (ATA-42) became operational in October 2007 with 42 antennas. The DSP system planned for ATA-350 is extremely ambitious. Completion of the full 350 element array will depend on funding and the technical results from ATA-42.
ATA-42 (ATA) is designed to allow multiple observers simultaneous access to the interferometer output. Typically, the ATA snapshot imager (used for astronomical surveys and SETI) is run in parallel with a beamforming system (used primarily for SETI). ATA also supports observations in multiple synthesized pencil beams at once, through a technique known as "multibeaming". Multibeaming provides an effective filter for identifying false positives in SETI, since a very distant transmitter must appear at only one point on the sky.
SETI Institute's Center for SETI Research (CSR) uses ATA in the search for extraterrestrial intelligence, observing 12 hours a day, 7 days a week. From 2007 to 2015, ATA identified hundreds of millions of technological signals. So far, all these signals have been assigned the status of noise or radio frequency interference because a) they appear to be generated by satellites or Earth-based transmitters, or b) they disappeared before the threshold time limit of ~1 hour. Researchers in CSR are working on ways to reduce the threshold time limit, and to expand ATA's capabilities for detection of signals that may have embedded messages.
Berkeley astronomers used the ATA to pursue several science topics, some of which might have transient SETI signals, until 2011, when the collaboration between the University of California, Berkeley and the SETI Institute was terminated.
CNET published an article and pictures about the Allen Telescope Array (ATA) on December 12, 2008.
In April 2011, the ATA entered an 8-month "hibernation" due to funding shortfalls. Regular operation of the ATA resumed on December 5, 2011.
In 2012, the ATA was revitalized with a $3.6 million donation by Franklin Antonio, co-founder and Chief Scientist of QUALCOMM Incorporated. This gift supported upgrades of all the receivers on the ATA dishes, giving them greater sensitivity than before (2× to 10× over the range 1–8 GHz) and supporting observations over a wider frequency range, from 1 to 18 GHz, although initially the radio frequency electronics only reach 12 GHz. As of July 2013, the first of these receivers was installed and proven, with full installation on all 42 antennas expected by June 2017. ATA is well suited to the search for extraterrestrial intelligence (SETI) and to discovery of astronomical radio sources, such as heretofore unexplained non-repeating, possibly extragalactic, pulses known as fast radio bursts or FRBs.
SERENDIP
SERENDIP (Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) is a SETI program launched in 1979 by the Berkeley SETI Research Center. SERENDIP takes advantage of ongoing "mainstream" radio telescope observations as a "piggy-back" or "commensal" program, using large radio telescopes including the NRAO 90m telescope at Green Bank and, formerly, the Arecibo 305m telescope. Rather than having its own observation program, SERENDIP analyzes deep space radio telescope data that it obtains while other astronomers are using the telescopes. The most recently deployed SERENDIP spectrometer, SERENDIP VI, was installed at both the Arecibo Telescope and the Green Bank Telescope in 2014–2015.
Breakthrough Listen
Breakthrough Listen is a ten-year initiative with $100 million funding begun in July 2015 to actively search for intelligent extraterrestrial communications in the universe, in a substantially expanded way, using resources that had not previously been extensively used for the purpose. It has been described as the most comprehensive search for alien communications to date. The science program for Breakthrough Listen is based at Berkeley SETI Research Center, located in the Astronomy Department at the University of California, Berkeley.
Announced in July 2015, the project is observing for thousands of hours every year on two major radio telescopes, the Green Bank Observatory in West Virginia, and the Parkes Observatory in Australia. Previously, only about 24 to 36 hours of telescope time per year were used in the search for alien life. Furthermore, the Automated Planet Finder at Lick Observatory is searching for optical signals coming from laser transmissions. The massive data rates from the radio telescopes (24 GB/s at Green Bank) necessitated the construction of dedicated hardware at the telescopes to perform the bulk of the analysis. Some of the data are also analyzed by volunteers in the SETI@home volunteer computing network. Founder of modern SETI Frank Drake was one of the scientists on the project's advisory committee.
In October 2019, Breakthrough Listen started a collaboration with scientists from the TESS team (Transiting Exoplanet Survey Satellite) to look for signs of advanced extraterrestrial life. Thousands of new planets found by TESS will be scanned for technosignatures by Breakthrough Listen partner facilities across the globe. Data from TESS monitoring of stars will also be searched for anomalies.
FAST
China's 500 meter Aperture Spherical Telescope (FAST) lists detecting interstellar communication signals as part of its science mission. It is funded by the National Development and Reform Commission (NDRC) and managed by the National Astronomical Observatories (NAOC) of the Chinese Academy of Sciences (CAS). FAST is the first radio observatory built with SETI as a core scientific goal. It consists of a fixed spherical dish constructed in a natural sinkhole depression formed by karst processes in the region, and it is the world's largest filled-aperture radio telescope.
According to its website, FAST can search to 28 light-years, and is able to reach 1,400 stars. If the transmitter's radiated power were to be increased to 1,000,000 MW, FAST would be able to reach one million stars. This is compared to the former Arecibo 305 meter telescope detection distance of 18 light-years.
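The relationship between transmitter power and the number of reachable stars can be illustrated with two approximations: detection range grows as the square root of the transmitted power, and, assuming a uniform stellar density, the number of stars within range grows as the cube of the range. The reference transmitter power and stellar density behind FAST's quoted figures are not given here, so the Python sketch below shows only the scaling, not the exact numbers:

# Illustrative scaling only (assumes inverse-square detection and uniform stellar density)
def range_factor(power_factor):
    return power_factor ** 0.5               # detection range scales as sqrt(power)

def star_count_factor(power_factor):
    return range_factor(power_factor) ** 3   # reachable star count scales as range^3

for p in (1, 100, 10_000):
    print(f"power x{p}: range x{range_factor(p):.0f}, stars x{star_count_factor(p):.0f}")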
On 14 June 2022, astronomers working with China's FAST telescope reported the possibility of having detected artificial (presumably alien) signals, but cautioned that further studies were required to determine whether natural radio interference might be the source. More recently, on 18 June 2022, Dan Werthimer, chief scientist for several SETI-related projects, reportedly noted, "These signals are from radio interference; they are due to radio pollution from earthlings, not from E.T.".
UCLA
Since 2016, University of California Los Angeles (UCLA) undergraduate and graduate students have been participating in radio searches for technosignatures with the Green Bank Telescope. Targets include the Kepler field, TRAPPIST-1, and solar-type stars. The search is sensitive to Arecibo-class transmitters located within 420 light years of Earth and to transmitters that are 1,000 times more powerful than Arecibo located within 13,000 light years of Earth.
Community SETI projects
SETI@home
The SETI@home project used volunteer computing to analyze signals acquired by the SERENDIP project.
SETI@home was conceived by David Gedye along with Craig Kasnoff and is a popular volunteer computing project that was launched by the Berkeley SETI Research Center at the University of California, Berkeley, in May 1999. It was originally funded by The Planetary Society and Paramount Pictures, and later by the state of California. The project is run by director David P. Anderson and chief scientist Dan Werthimer. Any individual could become involved with SETI research by downloading the Berkeley Open Infrastructure for Network Computing (BOINC) software program, attaching to the SETI@home project, and allowing the program to run as a background process that uses idle computer power. The SETI@home program itself ran signal analysis on a "work unit" of data recorded from the central 2.5 MHz wide band of the SERENDIP IV instrument. After computation on the work unit was complete, the results were then automatically reported back to SETI@home servers at University of California, Berkeley. By June 28, 2009, the SETI@home project had over 180,000 active participants volunteering a total of over 290,000 computers. These computers gave SETI@home an average computational power of 617 teraFLOPS. In 2004 radio source SHGb02+14a set off speculation in the media that a signal had been detected but researchers noted the frequency drifted rapidly and the detection on three SETI@home computers fell within random chance.
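The participation figures quoted above imply an average sustained throughput of roughly 2 gigaFLOPS per volunteered computer, as the simple Python arithmetic below shows:

# Average throughput per volunteer machine implied by the June 2009 figures
total_flops = 617e12      # 617 teraFLOPS reported for the project
computers = 290_000       # volunteered computers
print(f"~{total_flops / computers / 1e9:.1f} GFLOPS per computer")  # ~2.1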
By 2010, after 10 years of data collection, SETI@home had listened to that one frequency at every point of over 67 percent of the sky observable from Arecibo with at least three scans (out of the goal of nine scans), which covers about 20 percent of the full celestial sphere. On March 31, 2020, with 91,454 active users, the project stopped sending out new work to SETI@home users, bringing this particular SETI effort to an indefinite hiatus.
SETI Net
SETI Network was the only fully operational private search system. The SETI Net station consisted of off-the-shelf, consumer-grade electronics to minimize cost and to allow the design to be replicated as simply as possible. It had a 3-meter parabolic antenna that could be directed in azimuth and elevation, an LNA that covered 100 MHz of the 1420 MHz spectrum, a receiver to reproduce the wideband audio, and a standard personal computer as the control device and for deploying the detection algorithms. The antenna could be pointed and locked to one sky location in right ascension and declination, enabling the system to integrate on it for long periods. The Wow! signal area was monitored for many long periods. All search data was collected and is available on the Internet archive.
SETI Net started operation in the early 1980s as a way to learn about the science of the search, and developed several software packages for the amateur SETI community. It provided an astronomical clock, a file manager to keep track of SETI data files, a spectrum analyzer optimized for amateur SETI, remote control of the station from the Internet, and other packages.
SETI Net went dark and was decommissioned on December 4, 2021. The collected data remains available on its website.
The SETI League and Project Argus
Founded in 1994 in response to the United States Congress cancellation of the NASA SETI program, The SETI League, Incorporated is a membership-supported nonprofit organization with 1,500 members in 62 countries. This grass-roots alliance of amateur and professional radio astronomers is headed by executive director emeritus H. Paul Shuch, the engineer credited with developing the world's first commercial home satellite TV receiver. Many SETI League members are licensed radio amateurs and microwave experimenters. Others are digital signal processing experts and computer enthusiasts.
The SETI League pioneered the conversion of backyard satellite TV dishes in diameter into research-grade radio telescopes of modest sensitivity. The organization concentrates on coordinating a global network of small, amateur-built radio telescopes under Project Argus, an all-sky survey seeking to achieve real-time coverage of the entire sky. Project Argus was conceived as a continuation of the all-sky survey component of the late NASA SETI program (the targeted search having been continued by the SETI Institute's Project Phoenix). There are currently 143 Project Argus radio telescopes operating in 27 countries. Project Argus instruments typically exhibit sensitivity on the order of 10⁻²³ W/m², or roughly equivalent to that achieved by the Ohio State University Big Ear radio telescope in 1977, when it detected the landmark "Wow!" candidate signal.
The name "Argus" derives from the mythical Greek guard-beast who had 100 eyes, and could see in all directions at once. In the SETI context, the name has been used for radio telescopes in fiction (Arthur C. Clarke, "Imperial Earth"; Carl Sagan, "Contact"), was the name initially used for the NASA study ultimately known as "Cyclops," and is the name given to an omnidirectional radio telescope design being developed at the Ohio State University.
Optical experiments
While most SETI sky searches have studied the radio spectrum, some SETI researchers have considered the possibility that alien civilizations might be using powerful lasers for interstellar communications at optical wavelengths. The idea was first suggested by R. N. Schwartz and Charles Hard Townes in a 1961 paper published in the journal Nature titled "Interstellar and Interplanetary Communication by Optical Masers". However, the 1971 Cyclops study discounted the possibility of optical SETI, reasoning that construction of a laser system that could outshine the bright central star of a remote star system would be too difficult. In 1983, Townes published a detailed study of the idea in the United States journal Proceedings of the National Academy of Sciences, which was met with interest by the SETI community.
There are two problems with optical SETI. The first problem is that lasers are highly "monochromatic", that is, they emit light only on one frequency, making it troublesome to figure out what frequency to look for. However, emitting light in narrow pulses results in a broad spectrum of emission; the spread in frequency becomes higher as the pulse width becomes narrower, making it easier to detect an emission.
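The trade-off described above follows from the time-bandwidth relation for a transform-limited pulse, in which the spectral spread is roughly the reciprocal of the pulse duration. A minimal Python illustration with assumed pulse lengths:

# A pulse of duration dt is spread over a bandwidth of roughly 1/dt
for pulse_s in (1e-6, 1e-9, 1e-12):    # microsecond, nanosecond, picosecond pulses
    print(f"{pulse_s:.0e} s pulse -> ~{1.0 / pulse_s:.0e} Hz of spectral spread")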
The other problem is that while radio transmissions can be broadcast in all directions, lasers are highly directional. Interstellar gas and dust are almost transparent to near-infrared light, so these signals can be seen from greater distances, but an extraterrestrial laser signal would need to be transmitted in the direction of Earth in order to be detected.
Optical SETI supporters have conducted paper studies of the effectiveness of using contemporary high-energy lasers and a ten-meter diameter mirror as an interstellar beacon. The analysis shows that an infrared pulse from a laser, focused into a narrow beam by such a mirror, would appear thousands of times brighter than the Sun to a distant civilization in the beam's line of fire. The Cyclops study proved incorrect in suggesting a laser beam would be inherently hard to see.
Such a system could be made to automatically steer itself through a target list, sending a pulse to each target at a constant rate. This would allow targeting of all Sun-like stars within a distance of 100 light-years. The studies have also described an automatic laser pulse detector system with a low-cost, two-meter mirror made of carbon composite materials, focusing on an array of light detectors. This automatic detector system could perform sky surveys to detect laser flashes from civilizations attempting contact.
Several optical SETI experiments are now in progress. A Harvard-Smithsonian group that includes Paul Horowitz designed a laser detector and mounted it on Harvard's optical telescope. This telescope is currently being used for a more conventional star survey, and the optical SETI survey is "piggybacking" on that effort. Between October 1998 and November 1999, the survey inspected about 2,500 stars. Nothing that resembled an intentional laser signal was detected, but efforts continue. The Harvard-Smithsonian group is now working with Princeton University to mount a similar detector system on Princeton's 91-centimeter (36-inch) telescope. The Harvard and Princeton telescopes will be "ganged" to track the same targets at the same time, with the intent being to detect the same signal in both locations as a means of reducing errors from detector noise.
The Harvard-Smithsonian SETI group led by Professor Paul Horowitz built a dedicated all-sky optical survey system along the lines of that described above, featuring a 1.8-meter (72-inch) telescope. The new optical SETI survey telescope is being set up at the Oak Ridge Observatory in Harvard, Massachusetts.
The University of California, Berkeley, home of SERENDIP and SETI@home, is also conducting optical SETI searches and collaborates with the NIROSETI program. The optical SETI program at Breakthrough Listen was initially directed by Geoffrey Marcy, an extrasolar planet hunter, and it involves examination of records of spectra taken during extrasolar planet hunts for a continuous, rather than pulsed, laser signal. This survey uses the Automated Planet Finder 2.4-m telescope at the Lick Observatory, situated on the summit of Mount Hamilton, east of San Jose, California. The other Berkeley optical SETI effort is similar to that being pursued by the Harvard-Smithsonian group and is directed by Dan Werthimer of Berkeley, who built the laser detector for the Harvard-Smithsonian group. This survey uses a 76-centimeter (30-inch) automated telescope at Leuschner Observatory and an older laser detector built by Werthimer.
The SETI Institute also runs a program called 'Laser SETI' with an instrument composed of several cameras that continuously survey the entire night sky searching for millisecond singleton laser pulses of extraterrestrial origin.
In January 2020, two Pulsed All-sky Near-infrared Optical SETI (PANOSETI) project telescopes were installed in the Lick Observatory Astrograph Dome. The project aims to commence a wide-field optical SETI search and continue prototyping designs for a full observatory. The installation can offer an "all-observable-sky" optical and wide-field near-infrared pulsed technosignature and astrophysical transient search for the northern hemisphere.
In May 2017, astronomers reported studies related to laser light emissions from stars as a way of detecting technology-related signals from an alien civilization. The reported studies included Tabby's Star (designated KIC 8462852 in the Kepler Input Catalog), an oddly dimming star whose unusual starlight fluctuations may be the result of interference by an artificial megastructure, such as a Dyson swarm, made by such a civilization. No evidence was found for technology-related signals from KIC 8462852 in the studies.
Quantum communications
In a 2021 preprint, astronomer Michael Hippke described for the first time how one could search for quantum communication transmissions sent by ETI using existing telescope and receiver technology. He also provided arguments for why future searches for ETI should also target interstellar quantum communication networks.
A 2022 paper by Arjun Berera and Jaime Calderón-Figueroa noted that interstellar quantum communication by other civilizations could be possible and might even be advantageous, and identified some potential challenges and factors for detecting such technosignatures. Such civilizations might, for example, use X-ray photons for remotely established quantum communication and quantum teleportation as the communication mode.
Search for extraterrestrial artifacts
The possibility of using interstellar messenger probes in the search for extraterrestrial intelligence was first suggested by Ronald N. Bracewell in 1960 (see Bracewell probe), and the technical feasibility of this approach was demonstrated by the British Interplanetary Society's starship study Project Daedalus in 1978. Starting in 1979, Robert Freitas advanced arguments for the proposition that physical space-probes are a superior mode of interstellar communication to radio signals (see Voyager Golden Record).
In recognition that any sufficiently advanced interstellar probe in the vicinity of Earth could easily monitor the terrestrial Internet, 'Invitation to ETI' was established by Allen Tough in 1996 as a Web-based SETI experiment inviting such spacefaring probes to establish contact with humanity. The project's 100 signatories include prominent physical, biological, and social scientists, as well as artists, educators, entertainers, philosophers and futurists. H. Paul Shuch, executive director emeritus of The SETI League, serves as the project's Principal Investigator.
Inscribing a message in matter and transporting it to an interstellar destination can be enormously more energy efficient than communication using electromagnetic waves if delays larger than light transit time can be tolerated. That said, for simple messages such as "hello," radio SETI could be far more efficient. If energy requirement is used as a proxy for technical difficulty, then a solarcentric Search for Extraterrestrial Artifacts (SETA) may be a useful supplement to traditional radio or optical searches.
Much like the "preferred frequency" concept in SETI radio beacon theory, the Earth-Moon or Sun-Earth libration orbits might therefore constitute the most universally convenient parking places for automated extraterrestrial spacecraft exploring arbitrary stellar systems. A viable long-term SETI program may be founded upon a search for these objects.
In 1979, Freitas and Valdes conducted a photographic search of the vicinity of the Earth-Moon triangular libration points and , and of the solar-synchronized positions in the associated halo orbits, seeking possible orbiting extraterrestrial interstellar probes, but found nothing to a detection limit of about 14th magnitude. The authors conducted a second, more comprehensive photographic search for probes in 1982 that examined the five Earth-Moon Lagrangian positions and included the solar-synchronized positions in the stable L4/L5 libration orbits, the potentially stable nonplanar orbits near L1/L2, Earth-Moon , and also in the Sun-Earth system. Again no extraterrestrial probes were found to limiting magnitudes of 17–19th magnitude near L3/L4/L5, 10–18th magnitude for /, and 14–16th magnitude for Sun-Earth .
In June 1983, Valdes and Freitas used the 26 m radiotelescope at Hat Creek Radio Observatory to search for the tritium hyperfine line at 1516 MHz from 108 assorted astronomical objects, with emphasis on 53 nearby stars including all visible stars within a 20 light-year radius. The tritium frequency was deemed highly attractive for SETI work because (1) the isotope is cosmically rare, (2) the tritium hyperfine line is centered in the SETI water hole region of the terrestrial microwave window, and (3) in addition to beacon signals, tritium hyperfine emission may occur as a byproduct of extensive nuclear fusion energy production by extraterrestrial civilizations. The wideband- and narrowband-channel observations achieved sensitivities of 5–14 W/m2/channel and 0.7–2 W/m2/channel, respectively, but no detections were made.
Others have speculated that we might find traces of past civilizations in our own Solar System, on planets such as Venus or Mars, although such traces would most likely be found underground.
Technosignatures
Technosignatures, including all signs of technology, are a recent avenue in the search for extraterrestrial intelligence. Technosignatures may originate from various sources, from megastructures such as Dyson spheres and space mirrors or space shaders to the atmospheric contamination created by an industrial civilization, or city lights on extrasolar planets, and may be detectable in the future with large hypertelescopes.
Technosignatures can be divided into three broad categories: astroengineering projects, signals of planetary origin, and spacecraft within and outside the Solar System.
An astroengineering installation such as a Dyson sphere, designed to convert all of the incident radiation of its host star into energy, could be detected through the observation of an infrared excess from a solar analog star, or by the star's apparent disappearance in the visible spectrum over several years. After examining some 100,000 nearby large galaxies, a team of researchers has concluded that none of them display any obvious signs of highly advanced technological civilizations.
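A rough idea of why such a structure would show up as an infrared excess: a shell that intercepts a Sun-like star's entire luminosity must re-radiate it thermally, and for an assumed shell radius of 1 AU (an illustrative choice, not a figure from the text) the equilibrium temperature lands in the few-hundred-kelvin range, with emission peaking in the mid-infrared. A minimal Python sketch, assuming a shell radiating only from its outer surface with unit emissivity:

import math

L_SUN = 3.828e26     # solar luminosity, W
R = 1.496e11         # assumed shell radius: 1 AU, in metres
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.898e-3    # Wien's displacement constant, m K

T = (L_SUN / (4 * math.pi * R ** 2 * SIGMA)) ** 0.25
print(f"~{T:.0f} K, peaking near {WIEN_B / T * 1e6:.0f} micrometres")  # ~390 K, mid-infrared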
Another hypothetical form of astroengineering, the Shkadov thruster, moves its host star by reflecting some of the star's light back on itself, and would be detected by observing if its transits across the star abruptly end with the thruster in front. Asteroid mining within the Solar System is also a detectable technosignature of the first kind.
Individual extrasolar planets can be analyzed for signs of technology. Avi Loeb of the Center for Astrophysics Harvard & Smithsonian has proposed that persistent light signals on the night side of an exoplanet can be an indication of the presence of cities and an advanced civilization. In addition, the excess infrared radiation and chemicals produced by various industrial processes or terraforming efforts may point to intelligence.
Light and heat detected from planets need to be distinguished from natural sources to conclusively prove the existence of civilization on a planet. However, as argued by the Colossus team, a civilization heat signature should be within a "comfortable" temperature range, like terrestrial urban heat islands, i.e., only a few degrees warmer than the planet itself. In contrast, such natural sources as wild fires, volcanoes, etc. are significantly hotter, so they will be well distinguished by their maximum flux at a different wavelength.
Other than astroengineering, technosignatures such as artificial satellites around exoplanets, particularly satellites in geostationary orbit, might be detectable even with today's technology and data, and would make it possible, much as fossils do on Earth, to find traces of extrasolar life from long ago.
Extraterrestrial craft are another target in the search for technosignatures. Magnetic sail interstellar spacecraft should be detectable over thousands of light-years of distance through the synchrotron radiation they would produce through interaction with the interstellar medium; other interstellar spacecraft designs may be detectable at more modest distances. In addition, robotic probes within the Solar System are also being sought with optical and radio searches.
For a sufficiently advanced civilization, hyper-energetic neutrinos from Planck-scale accelerators should be detectable at a distance of many megaparsecs.
Advances for Bio and Technosignature Detection
A notable advancement in technosignature detection is the development of an algorithm for signal reconstruction in zero-knowledge one-way communication channels. This algorithm decodes signals from unknown sources without prior knowledge of the encoding scheme, using principles from Algorithmic Information Theory to identify the geometric and topological dimensions of the encoding space. It successfully reconstructed the Arecibo message despite significant noise. The work establishes a connection between syntax and semantics in SETI and technosignature detection, enhancing fields like cryptography and Information Theory.
Based on fractal theory and the Weierstrass function, a known fractal, another method authored by the same group called fractal messaging offers a framework for space-time scale-free communication. This method leverages properties of self-similarity and scale invariance, enabling spatio-temporal scale-independent and parallel infinite-frequency communication. It also embodies the concept of sending a self-encoding/self-decoding signal as a mathematical formula, equivalent to self-executable computer code that unfolds to read a message at all possible time scales and in all possible channels simultaneously.
Fermi paradox
Italian physicist Enrico Fermi suggested in the 1950s that if technologically advanced civilizations are common in the universe, then they should be detectable in one way or another. According to those who were there, Fermi either asked "Where are they?" or "Where is everybody?"
The Fermi paradox is commonly understood as asking why extraterrestrials have not visited Earth, but the same reasoning applies to the question of why signals from extraterrestrials have not been heard. The SETI version of the question is sometimes referred to as "the Great Silence".
The Fermi paradox can be stated more completely as follows: the apparently high likelihood that technologically advanced civilizations exist elsewhere in the universe seems inconsistent with the lack of any observational evidence for them.
There are multiple explanations proposed for the Fermi paradox, ranging from analyses suggesting that intelligent life is rare (the "Rare Earth hypothesis"), to analyses suggesting that although extraterrestrial civilizations may be common, they would not communicate with us, would communicate in a way we have not discovered yet, could not travel across interstellar distances, or destroy themselves before they master the technology of either interstellar travel or communication.
The German astrophysicist and radio astronomer Sebastian von Hoerner suggested that the average duration of a civilization is 6,500 years, after which it disappears for external reasons (the destruction of life on the planet, or the destruction of only the rational beings) or internal ones (mental or physical degeneration). According to his calculations, a habitable planet (orbiting roughly one in three million stars) hosts a succession of technological species spread over hundreds of millions of years, with each such planet "producing" an average of four technological species. With these assumptions, the average distance between civilizations in the Milky Way is 1,000 light years.
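Von Hoerner's figure of roughly 1,000 light-years can be reproduced to order of magnitude from the abundance he assumed. The Python sketch below uses a typical local stellar density of about 0.004 stars per cubic light-year, which is an assumed value not taken from the text:

# Rough check of the ~1,000 light-year separation figure
stellar_density = 0.004            # stars per cubic light-year (assumed typical value)
civ_fraction = 1 / 3e6             # one civilization-bearing planet per three million stars
civ_density = stellar_density * civ_fraction
mean_separation_ly = civ_density ** (-1 / 3)
print(f"~{mean_separation_ly:.0f} light-years between civilizations")  # ~900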
Science writer Timothy Ferris has posited that since galactic societies are most likely only transitory, an obvious solution is an interstellar communications network, or a type of library consisting mostly of automated systems. They would store the cumulative knowledge of vanished civilizations and communicate that knowledge through the galaxy. Ferris calls this the "Interstellar Internet", with the various automated systems acting as network "servers". If such an Interstellar Internet exists, the hypothesis states, communications between servers are mostly through narrow-band, highly directional radio or laser links. Intercepting such signals is, as discussed earlier, very difficult. However, the network could maintain some broadcast nodes in hopes of making contact with new civilizations.
Although somewhat dated in terms of "information culture" arguments, not to mention the obvious technological problems of a system that could work effectively for billions of years and requires multiple lifeforms agreeing on certain basics of communications technologies, this hypothesis is actually testable (see below).
Difficulty of detection
A significant problem is the vastness of space. Astronomer Charles Stuart Bowyer, the initiator of SERENDIP, noted that even piggybacking on the world's most sensitive radio telescope, the then world's largest instrument could not detect random radio noise emanating from a civilization like ours, which has been leaking radio and TV signals for less than 100 years. For SERENDIP and most other SETI projects to detect a signal from an extraterrestrial civilization, the civilization would have to be beaming a powerful signal directly at us. It also means that Earth's civilization will only be detectable within a distance of 100 light-years.
Post-detection disclosure protocol
The International Academy of Astronautics (IAA) has a long-standing SETI Permanent Study Group (SPSG, formerly called the IAA SETI Committee), which addresses matters of SETI science, technology, and international policy. The SPSG meets in conjunction with the International Astronautical Congress (IAC), held annually at different locations around the world, and sponsors two SETI Symposia at each IAC. In 2005, the IAA established the SETI: Post-Detection Science and Technology Taskgroup (chairman, Professor Paul Davies) "to act as a Standing Committee to be available to be called on at any time to advise and consult on questions stemming from the discovery of a putative signal of extraterrestrial intelligent (ETI) origin."
However, the protocols mentioned apply only to radio SETI rather than for METI (Active SETI). The intention for METI is covered under the SETI charter "Declaration of Principles Concerning Sending Communications with Extraterrestrial Intelligence".
In October 2000, astronomers Iván Almár and Jill Tarter presented a paper to the SETI Permanent Study Group in Rio de Janeiro, Brazil, proposing the Rio scale, modelled after the Torino scale: an ordinal scale from zero to ten that quantifies the impact of any public announcement regarding evidence of extraterrestrial intelligence. The Rio scale has since inspired the 2005 San Marino Scale (concerning the risks of transmissions from Earth) and the 2010 London Scale (concerning the detection of extraterrestrial life). The Rio scale itself was revised in 2018.
The SETI Institute does not officially recognize the Wow! signal as being of extraterrestrial origin, as it could not be verified, although in a 2020 tweet the organization stated that "an astronomer might have pinpointed the host star". The SETI Institute has also publicly denied that the candidate signal, radio source SHGb02+14a, is of extraterrestrial origin. Although other volunteer projects such as Zooniverse credit users for discoveries, there is currently no crediting or early notification by SETI@home following the discovery of a signal.
Some people, including Steven M. Greer, have expressed cynicism that the general public might not be informed in the event of a genuine discovery of extraterrestrial intelligence, due to significant vested interests. Others, such as Bruce Jakosky, have also argued that the official disclosure of extraterrestrial life may have far-reaching and as yet undetermined implications for society, particularly for the world's religions.
Active SETI
Active SETI, also known as messaging to extraterrestrial intelligence (METI), consists of sending signals into space in the hope that they will be detected by an alien intelligence.
Realized interstellar radio message projects
In November 1974, a largely symbolic attempt was made at the Arecibo Observatory to send a message to other worlds. Known as the Arecibo Message, it was sent towards the globular cluster M13, which is 25,000 light-years from Earth. Further IRMs Cosmic Call, Teen Age Message, Cosmic Call 2, and A Message From Earth were transmitted in 1999, 2001, 2003 and 2008 from the Evpatoria Planetary Radar.
Debate
Whether or not to attempt to contact extraterrestrials has attracted significant academic debate in the fields of space ethics and space policy. Physicist Stephen Hawking, in his book A Brief History of Time, suggests that "alerting" extraterrestrial intelligences to our existence is foolhardy, citing humankind's history of treating its own kind harshly in meetings of civilizations with a significant technology gap, e.g., the extermination of Tasmanian aborigines. He suggests, in view of this history, that we "lay low". In one response to Hawking, in September 2016, astronomer Seth Shostak sought to allay such concerns. Astronomer Jill Tarter also disagrees with Hawking, arguing that aliens developed and long-lived enough to communicate and travel across interstellar distances would have evolved a cooperative and less violent intelligence. However, she thinks it is too soon for humans to attempt active SETI, and that humans should first become more technologically advanced while continuing to listen in the meantime.
Criticism
As various SETI projects have progressed, some have criticized early claims by researchers as being too "euphoric". For example, Peter Schenkel, while remaining a supporter of SETI projects, wrote in 2006 that, in light of new findings and insights, such excessive euphoria should be put to rest in favor of a more down-to-earth view.
Critics claim that the existence of extraterrestrial intelligence has no good Popperian criteria for falsifiability, as argued in a 2009 editorial in Nature.
Nature added that SETI was "marked by a hope, bordering on faith" that aliens were aiming signals at us, that a hypothetical alien SETI project looking at Earth with "similar faith" would be "sorely disappointed", despite our many untargeted radar and TV signals, and our few targeted Active SETI radio signals denounced by those fearing aliens, and that it had difficulties attracting even sympathetic working scientists and government funding because it was "an effort so likely to turn up nothing".
However, Nature also added, "Nonetheless, a small SETI effort is well worth supporting, especially given the enormous implications if it did succeed" and that "happily, a handful of wealthy technologists and other private donors have proved willing to provide that support".
Supporters of the Rare Earth Hypothesis argue that advanced lifeforms are likely to be very rare, and that, if that is so, then SETI efforts will be futile. However, the Rare Earth Hypothesis itself faces many criticisms.
In 1993, Roy Mash stated that "Arguments favoring the existence of extraterrestrial intelligence nearly always contain an overt appeal to big numbers, often combined with a covert reliance on generalization from a single instance" and concluded that "the dispute between believers and skeptics is seen to boil down to a conflict of intuitions which can barely be engaged, let alone resolved, given our present state of knowledge". In response, in 2012, Milan M. Ćirković, then research professor at the Astronomical Observatory of Belgrade and a research associate of the Future of Humanity Institute at the University of Oxford, said that Mash was unrealistically over-reliant on excessive abstraction that ignored the empirical information available to modern SETI researchers.
George Basalla, Emeritus Professor of History at the University of Delaware, is a critic of SETI who argued in 2006 that "extraterrestrials discussed by scientists are as imaginary as the spirits and gods of religion or myth", and was in turn criticized by Milan M. Ćirković for, among other things, being unable to distinguish between "SETI believers" and "scientists engaged in SETI", who are often sceptical (especially about quick detection), such as Freeman Dyson and, at least in their later years, Iosif Shklovsky and Sebastian von Hoerner, and for ignoring the difference between the knowledge underlying the arguments of modern scientists and those of ancient Greek thinkers.
Massimo Pigliucci, Professor of Philosophy at CUNY – City College, asked in 2010 whether SETI is "uncomfortably close to the status of pseudoscience" due to the lack of any clear point at which negative results cause the hypothesis of Extraterrestrial Intelligence to be abandoned, before eventually concluding that SETI is "almost-science", which is described by Milan M. Ćirković as Pigliucci putting SETI in "the illustrious company of string theory, interpretations of quantum mechanics, evolutionary psychology and history (of the 'synthetic' kind done recently by Jared Diamond)", while adding that his justification for doing so with SETI "is weak, outdated, and reflecting particular philosophical prejudices similar to the ones described above in Mash and Basalla".
Richard Carrigan, a particle physicist at the Fermi National Accelerator Laboratory near Chicago, Illinois, suggested that passive SETI could also be dangerous and that a signal released onto the Internet could act as a computer virus. Computer security expert Bruce Schneier dismissed this possibility as a "bizarre movie-plot threat".
Ufology
Ufologist Stanton Friedman has often criticized SETI researchers for, among other reasons, what he sees as their unscientific criticisms of ufology, but, unlike SETI, ufology has generally not been embraced by academia as a scientific field of study, and it is usually characterized as a partial or total pseudoscience. In a 2016 interview, Jill Tarter pointed out that it is still a misconception that SETI and UFOs are related. She states, "SETI uses the tools of the astronomer to attempt to find evidence of somebody else's technology coming from a great distance. If we ever claim detection of a signal, we will provide evidence and data that can be independently confirmed. UFOs—none of the above." The Galileo Project, headed by Harvard astronomer Avi Loeb, is one of the few scientific efforts to study UFOs or UAPs. Loeb has criticized the dismissal of UAP research, arguing that the subject is insufficiently studied by scientists and should shift from "occupying the talking points of national security administrators and politicians" to the realm of science. The Galileo Project's position after the publication of the 2021 UFO Report by the U.S. intelligence community is that the scientific community needs to "systematically, scientifically and transparently look for potential evidence of extraterrestrial technological equipment".
Sextans
Sextans is a faint, minor constellation on the celestial equator which was introduced in 1687 by Polish astronomer Johannes Hevelius. Its name is Latin for the astronomical sextant, an instrument that Hevelius made frequent use of in his observations.
Characteristics
Sextans is a medium sized constellation bordering Leo to the north, touching on Hydra to the southwest, and Crater to the southeast. The recommended three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Sex". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a square. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between +6.43° and −11.7°. Since it is close to the ecliptic plane, the Moon and planets regularly cross the constellation, especially its northeastern corner.
Notable features
Stars
John Flamsteed labeled 41 stars for the constellation. Francis Baily intended to give Bayer designations to some of the stars, but because none of them were above magnitude 4.5, he left them unlettered. Rather, it was Benjamin Apthorp Gould who lettered some of the stars. He labeled the five brightest stars using the Greek letters Alpha (α) to Epsilon (ε) in his Uranometria Argentina. Altogether, there are 38 stars that are brighter than or equal to apparent magnitude 6.5.
Bright stars
Alpha Sextantis is the brightest star in the constellation and the only one brighter than the fifth magnitude, with an apparent magnitude of 4.49. It is an ageing A-type star of spectral class A0 III located 280 light-years away from the Solar System. At the age of 385 million years, it is exhausting the hydrogen at its core and leaving the main sequence.
γ Sextantis is the second brightest star in the constellation with an apparent magnitude of 5.05. It is a binary star consisting of two A-type main-sequence stars with classes of A1 V and A4 V respectively. The stars take 77.55 years to circle each other in an eccentric orbit and the system is located 280 light-years away from the Solar System. The separation of the stars is four-tenths of an arcsecond, making it difficult to observe without the use of a telescope with an aperture of 30 cm.
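The quoted 30 cm aperture is consistent with the Rayleigh diffraction limit, θ ≈ 1.22 λ/D, at visible wavelengths, as the short Python check below shows (550 nm is an assumed representative wavelength):

import math

wavelength_m = 550e-9                          # assumed visible-light wavelength
aperture_m = 0.30                              # 30 cm telescope aperture
theta_rad = 1.22 * wavelength_m / aperture_m
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"~{theta_arcsec:.2f} arcsec resolution")  # ~0.46 arcsec, near the 0.4 arcsec separation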
β Sextantis is slightly fainter at magnitude 5.07; it is said to be 364 light-years distant. Beta Sextantis is a B-type main-sequence star of spectral class B6 V and has been used as a standard in the MK spectral classification system. It is suspected to be an Alpha² Canum Venaticorum variable with a period of 15.4 days.
Multiple star systems
Sextans contains a few notable multiple star systems within its boundaries.
35 Sextantis is a triple star system consisting of two evolved K-type giants of equal mass, each being twice as massive as the Sun. The secondary is itself a single-lined spectroscopic binary with an unseen companion. The system is located approximately 700 light-years away. The outer pair has a separation of 6.8" and takes roughly 23,000 years to complete an orbit, while the inner B subsystem circles every 1,528 days in a relatively eccentric orbit.
There are a few notable variable stars, including 25 Sextantis, 23 Sextantis, and LHS 292. NGC 3115, an edge-on lenticular galaxy, is the only noteworthy deep-sky object. The constellation also lies near the ecliptic, which causes the Moon and some of the planets to occasionally pass through it for brief periods of time.
The constellation is the location of the field studied by the COSMOS project, undertaken by the Hubble Space Telescope.
COSMOS project
Sextans B is a fairly bright dwarf irregular galaxy at magnitude 6.6, 4.3 million light-years from Earth. It is part of the Local Group of galaxies.
CL J1001+0220 is, as of 2016, the most distant known galaxy cluster, at redshift z = 2.506, some 11.1 billion light-years from Earth.
In June 2015, astronomers reported evidence for population III stars in the Cosmos Redshift 7 galaxy (at z = 6.60) found in Sextans. Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it.
| Physical sciences | Other | Astronomy |
28163 | https://en.wikipedia.org/wiki/Sagitta | Sagitta | Sagitta is a dim but distinctive constellation in the northern sky. Its name is Latin for 'arrow', not to be confused with the significantly larger constellation Sagittarius 'the archer'. It was included among the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations defined by the International Astronomical Union. Although it dates to antiquity, Sagitta has no star brighter than 3rd magnitude and has the third-smallest area of any constellation.
Gamma Sagittae is the constellation's brightest star, with an apparent magnitude of 3.47. It is an aging red giant star 90% as massive as the Sun that has cooled and expanded to a radius 54 times that of the Sun. Delta, Epsilon, Zeta, and Theta Sagittae are each multiple stars whose components can be seen in small telescopes. V Sagittae is a cataclysmic variable—a binary star system composed of a white dwarf accreting mass from a donor star. It is expected to go nova and briefly become the most luminous star in the Milky Way and one of the brightest stars in our sky around the year 2083. Two star systems in Sagitta are known to have Jupiter-like planets, while a third—15 Sagittae—has a brown dwarf companion.
History
The ancient Greeks called Sagitta 'the arrow', and it was one of the 48 constellations described by Ptolemy. It was regarded as the weapon that Hercules used to kill the eagle of Jove that perpetually gnawed Prometheus' liver. Sagitta is located beyond the north border of Aquila, the Eagle. The amateur naturalist and polymath Richard Hinckley Allen proposed that the constellation could represent the arrow shot by Hercules towards the adjacent Stymphalian birds (which feature in Hercules' sixth labour), who had claws, beaks, and wings of iron, who lived on human flesh in the marshes of Arcadia—denoted in the sky by the constellations Aquila the Eagle, Cygnus 'the Swan', and Lyra 'the Vulture'—and that the arrow still lies between them, whence the title Herculea. Greek scholar Eratosthenes claimed it as the arrow with which Apollo exterminated the Cyclopes. The Romans named it Sagitta. In Arabic, it became al-sahm 'arrow', though this name became Sham and was transferred to Alpha Sagittae only. The Greek name has also been mistranslated as 'the loom' and thus in Arabic al-nawl. It was also called al-'anaza 'pike/javelin'.
Characteristics
The four brightest stars make up an arrow-shaped asterism located due north of the bright star Altair. Covering 79.9 square degrees and hence 0.194% of the sky, Sagitta ranks 86th of the 88 modern constellations by area. Only Equuleus and Crux are smaller. Sagitta is most readily observed by northern hemisphere observers from the late spring to early autumn, with midnight culmination occurring on 17 July. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 69°S. Sagitta is bordered by Vulpecula to the north, Hercules to the west, Aquila to the south, and Delphinus to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Sge"; American astronomer Henry Norris Russell, who devised the code, had to resort to using the genitive form of the name to come up with a letter to include ('e') that was not in the name of the constellation Sagittarius. The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of twelve segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 16.08° and 21.64°.
Notable features
Stars
Celestial cartographer Johann Bayer gave Bayer designations to eight stars, labelling them Alpha to Theta. English astronomer John Flamsteed added the letters x (mistaken for Chi (χ)), y, and z to 13, 14, and 15 Sagittae in his Catalogus Britannicus. All three were dropped by later astronomers John Bevis and Francis Baily.
Bright stars
Ptolemy saw the constellation's brightest star Gamma Sagittae as marking the arrow's head, while Bayer saw Gamma, Eta, and Theta as depicting the arrow's shaft. Gamma Sagittae is a red giant of spectral type M0 III, and magnitude 3.47. It lies at a distance of from Earth. With around 90% of the Sun's mass, it has a radius 54 times that of the Sun and is 575 times as bright. It is most likely on the red-giant branch of its evolutionary lifespan, having exhausted its core hydrogen and now burning it in a surrounding shell.
Delta Sagittae is the second-brightest star in the constellation and is a binary. Delta and Zeta depicted the spike according to Bayer. The Delta Sagittae system is composed of a red supergiant of spectral type M2 II that has 3.9 times the Sun's mass and 152 times its radius and a blue-white B9.5V main sequence star that is 2.9 times as massive as the Sun. The two orbit each other every ten years. Zeta Sagittae is a triple star system, approximately from Earth. The primary and secondary are A-type stars.
In his Uranometria, Bayer depicted Alpha, Beta, and Epsilon Sagittae as the fins of the arrow. Also known as Sham, Alpha is a yellow bright giant star of spectral class G1 II with an apparent magnitude of 4.38, which lies at a distance of from Earth. Four times as massive as the Sun, it has swollen and brightened to 21 times the Sun's radius and 340 times its luminosity. Also of magnitude 4.38, Beta is a G-type giant located distant from Earth. Estimated to be around 129 million years old, it is 4.33 times as massive as the Sun, and has expanded to roughly 27 times its radius. Epsilon Sagittae is a double star whose component stars can be seen in a small telescope. With an apparent magnitude of 5.77, the main star is a 331-million-year-old yellow giant of spectral type G8 III around 3.09 times as massive as the Sun, that has swollen to its radius. It is distant. The visual companion of magnitude 8.35 is 87.4 arcseconds distant, but is an unrelated blue supergiant around distant from Earth.
Eta Sagittae is an orange giant of spectral class K2 III with a magnitude of 5.09. Located from Earth, it has a 61.1% chance of being a member of the Hyades–Pleiades stream of stars that share a common motion through space. Theta Sagittae is a double star system, with components 12 arcseconds apart visible in a small telescope. At magnitude 6.5, the brighter is a yellow-white main sequence star of spectral type F3 V, located from Earth. The 8.8-magnitude fainter companion is a main sequence star of spectral type G5 V. A 7.4-magnitude orange giant of spectral type K2 III is also visible from the binary pair, located away.
Variable stars
Variable stars are popular targets for amateur astronomers, their observations providing valuable contributions to understanding star behaviour. R Sagittae is a member of the rare RV Tauri variable class of star. It ranges in magnitude from 8.2 to 10.4. It is around distant. It has a radius times that of the Sun, and is as luminous, yet most likely is less massive than the Sun. An aging star, it has moved on from the asymptotic giant branch of stellar evolution and is on its way to becoming a planetary nebula. FG Sagittae is a "born again" star, a highly luminous star around distant from Earth. It reignited fusion of a helium shell shortly before becoming a white dwarf, and has expanded first to a blue supergiant and then to a K-class supergiant in less than 100 years. It is surrounded by a faint (visual magnitude 23) planetary nebula, Henize 1–5, that formed when FG Sagittae first left the asymptotic giant branch.
S Sagittae is a classical Cepheid that varies from magnitude 5.24 to 6.04 every 8.38 days. It is a yellow-white supergiant that pulsates between spectral types F6 Ib and G5 Ib. Around 6 or 7 times as massive and 3,500 times as luminous as the Sun, it is located around from Earth. HD 183143 is a remote highly luminous star around away, that has been classified as a blue hypergiant. Infrared bands of ionised buckminsterfullerene molecules have also been found in its spectrum. WR 124 is a Wolf–Rayet star moving at great speed surrounded by a nebula of ejected gas.
U Sagittae is an eclipsing binary that varies between magnitudes 6.6 and 9.2 over 3.4 days, making it a suitable target for enthusiasts with small telescopes. There are two component stars—a blue-white star of spectral type B8 V and an ageing star that has cooled and expanded into a yellow subgiant of spectral type G4 III-IV. They orbit each other close enough that the cooler subgiant has filled its Roche lobe and is passing material to the hotter star, and hence it is a semidetached binary system. The system is distant. Near U Sagittae is X Sagittae, a semiregular variable ranging between magnitudes 7.9 and 8.4 over 196 days. A carbon star, X Sagittae has a surface temperature of .
Located near 18 Sagittae is V Sagittae, the prototype of the V Sagittae variables, cataclysmic variables that are also super soft X-ray sources. It is expected to become a luminous red nova when the two stars merge around the year 2083, and briefly become the most luminous star in the Milky Way and one of the brightest stars in Earth's sky. WZ Sagittae is another cataclysmic variable, composed of a white dwarf that has about 85% the mass of the Sun and a low-mass companion that has been calculated to be a brown dwarf of spectral class L2 only 8% as massive as the Sun. Normally a faint object dimmer than magnitude 15, it flared up in 1913, 1946 and 1978 to be visible in binoculars. The black widow pulsar (B1957+20) is the second millisecond pulsar ever discovered. It is a massive neutron star that is ablating its brown dwarf-sized companion, which causes the pulsar's radio signals to attenuate as they pass through the outflowing material.
Stars with exoplanets
HD 231701 is a yellow-white main sequence star hotter and larger than the Sun, with a Jupiter-like planet that was discovered in 2007 by the radial velocity technique. The planet orbits at a distance of from the star with a period of 141.6 days. It has a mass of at least 1.13 Jupiter masses.
HAT-P-34 is a star times as massive as the Sun with times its radius and times its luminosity. With an apparent magnitude of 10.4, it is distant. A planet times as massive as Jupiter was discovered transiting it in 2012. With a period of 5.45 days and a distance of from its star, it has an estimated surface temperature of .
15 Sagittae is a solar analog—a star similar to the Sun, with times its mass, times its radius and times its luminosity. It has an apparent magnitude of 5.80. It has an L4 brown dwarf substellar companion that is around the same size as Jupiter but 69 times as massive with a surface temperature of between 1,510 and , taking around 73.3 years to complete an orbit around the star. The system is estimated to be billion years old.
Deep-sky objects
The band of the Milky Way and the Great Rift within it pass through Sagitta, with Alpha, Beta and Epsilon Sagittae marking the Rift's border. Located between Beta and Gamma Sagittae is Messier 71, a very loose globular cluster that was long mistaken for a dense open cluster. At a distance of about from Earth, it was first discovered by the French astronomer Philippe Loys de Chéseaux in the year 1745 or 1746. The loose globular cluster has a mass of around and a luminosity of approximately 19,000 .
There are two notable planetary nebulae in Sagitta. NGC 6886 is composed of a hot central post-AGB star that has 55% of the Sun's mass yet times its luminosity, with a surface temperature of , and a surrounding nebula estimated to have been expanding for between 1,280 and 1,600 years. The nebula was discovered by Ralph Copeland in 1884. The Necklace Nebula was originally a close binary, one component of which swallowed the other as it expanded to become a giant star. The smaller star remained in orbit inside the larger, whose rotation speed increased greatly, resulting in it flinging its outer layers off into space, forming a ring with knots of bright gas formed from clumps of stellar material. It was discovered in 2005 and is around 2 light-years wide. Both nebulae are around from Earth.
| Physical sciences | Other | Astronomy |
28174 | https://en.wikipedia.org/wiki/Data%20storage | Data storage | Data storage is the recording (storing) of information (data) in a storage medium. Handwriting, phonographic recording, magnetic tape, and optical discs are all examples of storage media. Biological molecules such as RNA and DNA are considered by some as data storage. Recording may be accomplished with virtually any form of energy. Electronic data storage requires electrical power to store and retrieve data.
Data storage in a digital, machine-readable medium is sometimes called digital data. Computer data storage is one of the core functions of a general-purpose computer. Electronic documents can be stored in much less space than paper documents. Barcodes and magnetic ink character recognition (MICR) are two ways of recording machine-readable data on paper.
Recording media
A recording medium is a physical material that holds information. Newly created information is distributed and can be stored in four storage media (print, film, magnetic, and optical) and seen or heard in four information flows (telephone, radio and TV, and the Internet), as well as being observed directly. Digital information is stored on electronic media in many different recording formats.
With electronic media, the data and the recording media are sometimes referred to as "software" despite the more common use of the word to describe computer software. With (traditional art) static media, art materials such as crayons may be considered both equipment and medium as the wax, charcoal or chalk material from the equipment becomes part of the surface of the medium.
Some recording media may be temporary either by design or by nature. Volatile organic compounds may be used to preserve the environment or to purposely make data expire over time. Data such as smoke signals or skywriting are temporary by nature. Depending on the volatility, a gas (e.g. atmosphere, smoke) or a liquid surface such as a lake would be considered a temporary recording medium if at all.
Global capacity, digitization, and trends
A 2003 UC Berkeley report estimated that about five exabytes of new information were produced in 2002 and that 92% of this data was stored on hard disk drives. This was about twice the data produced in 2000. The amount of data transmitted over telecommunications systems in 2002 was nearly 18 exabytes—three and a half times more than was recorded on non-volatile storage. Telephone calls constituted 98% of the telecommunicated information in 2002. The researchers' highest estimate for the growth rate of newly stored information (uncompressed) was more than 30% per year.
In a more limited study, the International Data Corporation estimated that the total amount of digital data in 2007 was 281 exabytes, and that the total amount of digital data produced exceeded the global storage capacity for the first time.
A 2011 Science Magazine article estimated that the year 2002 was the beginning of the digital age for information storage: an age in which more information is stored on digital storage devices than on analog storage devices. In 1986, approximately 1% of the world's capacity to store information was in digital format; this grew to 3% by 1993, to 25% by 2000, and to 97% by 2007. These figures correspond to less than three compressed exabytes in 1986, and 295 compressed exabytes in 2007. The quantity of digital storage doubled roughly every three years.
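As a rough consistency check on these figures (a back-of-the-envelope calculation, not taken from the cited study), growing from about 3 compressed exabytes in 1986 to 295 in 2007 does correspond to a doubling time of roughly three years: \( T_{2} = \frac{21\ \text{years}}{\log_2(295/3)} \approx \frac{21}{6.6} \approx 3.2\ \text{years} \).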
By one later estimate, the amount of data generated annually had grown roughly 60-fold relative to 2010, and it is projected to increase to 181 zettabytes generated in 2025.
| Technology | Data storage | null |
28184 | https://en.wikipedia.org/wiki/Sound%20card | Sound card | A sound card (also known as an audio card) is an internal expansion card that provides input and output of audio signals to and from a computer under the control of computer programs. The term sound card is also applied to external audio interfaces used for professional audio applications.
Sound functionality can also be integrated into the motherboard, using components similar to those found on plug-in cards. The integrated sound system is often still referred to as a sound card. Sound processing hardware is also present on modern video cards with HDMI to output sound along with the video using that connector; previously they used an S/PDIF connection to the motherboard or sound card.
Typical uses of sound cards or sound card functionality include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation, education and entertainment (games) and video projection. Sound cards are also used for computer-based communication such as voice over IP and teleconferencing.
General characteristics
Sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital signal data into an analog format. The output signal is connected to an amplifier, headphones, or external device using standard interconnects, such as a TRS phone connector.
A common external connector is the microphone connector. Input through a microphone connector can be used, for example, by speech recognition or voice over IP applications. Most sound cards have a line in connector for an analog input from a sound source that has higher voltage levels than a microphone. In either case, the sound card uses an analog-to-digital converter (ADC) to digitize this signal.
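As a concrete illustration of the sample data involved in these conversions, the following sketch (Python, standard library only; the tone frequency, sample rate, and file name are arbitrary choices for illustration, not taken from the text) synthesizes one second of a 440 Hz sine tone as signed 16-bit PCM samples and writes it to a WAV file, the kind of digital stream a sound card's DAC turns back into an analog voltage:

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # samples per second (arbitrary, CD-quality rate)
    FREQ = 440.0          # tone frequency in hertz
    AMPLITUDE = 0.5       # fraction of full scale, leaves headroom

    # Quantize the continuous sine function into signed 16-bit integers.
    n_samples = SAMPLE_RATE  # one second of audio
    samples = [
        int(AMPLITUDE * 32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
        for n in range(n_samples)
    ]

    # Write the samples as a mono, 16-bit WAV file that any sound card can play.
    with wave.open("tone.wav", "wb") as wav_file:
        wav_file.setnchannels(1)
        wav_file.setsampwidth(2)          # 2 bytes per sample = 16 bits
        wav_file.setframerate(SAMPLE_RATE)
        wav_file.writeframes(struct.pack("<%dh" % n_samples, *samples))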
Some cards include a sound chip to support the production of synthesized sounds, usually for real-time generation of music and sound effects using minimal data and CPU time.
The card may use direct memory access to transfer the samples to and from main memory, from where recording and playback software may read and write them to the hard disk for storage, editing, or further processing.
Sound channels and polyphony
An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. These distinct channels are seen as the number of audio outputs, which may correspond to a speaker configuration such as 2.0 (stereo), 2.1 (stereo and sub woofer), 5.1 (surround), or other configurations. Sometimes, the terms voice and channel are used interchangeably to indicate the degree of polyphony, not the output speaker configuration. For example, much older sound chips could accommodate three voices, but only one output audio channel (i.e., a single mono output), requiring all voices to be mixed together. Later cards, such as the AdLib sound card, had 9-voice polyphony combined into one mono output channel.
Early PC sound cards had multiple FM synthesis voices (typically 9 or 16) which were used for MIDI music. The full capabilities of advanced cards are often not fully used; only one (mono) or two (stereo) voice(s) and channel(s) are usually dedicated to playback of digital sound samples, and playing back more than one digital sound sample usually requires a software downmix at a fixed sampling rate. Modern low-cost integrated sound cards (i.e., those built into motherboards), such as audio codecs meeting the AC'97 standard, and even some lower-cost expansion sound cards still work this way. These devices may provide more than two sound output channels (typically 5.1 or 7.1 surround sound), but they usually have no actual hardware polyphony for either sound effects or MIDI reproduction; these tasks are performed entirely in software. This is similar to the way inexpensive softmodems perform modem tasks in software rather than in hardware.
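As a minimal sketch of such a software downmix (Python; the voice data and function name are invented for illustration, and all voices are assumed to already share one sampling rate), several independent voices can be summed and clipped into a single mono channel:

    def downmix_mono(voices, peak=32767):
        # Sum several lists of signed 16-bit samples into one mono stream.
        # Shorter voices are padded with silence; the sum is clipped to the
        # 16-bit range to avoid wrap-around distortion.
        length = max(len(voice) for voice in voices)
        mixed = []
        for i in range(length):
            total = sum(voice[i] if i < len(voice) else 0 for voice in voices)
            mixed.append(max(-peak - 1, min(peak, total)))
        return mixed

    # Three short "voices" mixed into one channel, as a chip with a single
    # mono output would require.
    print(downmix_mono([[1000, 2000, 3000], [500, 500], [-1500, 0, 1500, 3000]]))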
In the early days of wavetable synthesis, some sound card manufacturers advertised polyphony based solely on their cards' MIDI capabilities. In such cases, the card is typically capable of only two channels of digital sound, and the polyphony specification applies solely to the number of MIDI instruments the sound card can produce at once.
Modern sound cards may provide more flexible audio accelerator capabilities which can be used in support of higher levels of polyphony or other purposes such as hardware acceleration of 3D sound, positional audio and real-time DSP effects.
List of sound card standards
Color codes
Connectors on the sound cards are color-coded as per the PC System Design Guide. They may also have symbols of arrows, holes and soundwaves that are associated with each jack position.
History of sound cards for the IBM PC architecture
Sound cards for IBM PC–compatible computers were very uncommon until 1988. For the majority of IBM PC users, the internal PC speaker was the only way for early PC software to produce sound and music. The speaker hardware was typically limited to square waves. The resulting sound was generally described as "beeps and boops", which resulted in the common nickname beeper. Several companies, most notably Access Software, developed techniques for digital sound reproduction over the PC speaker, such as RealSound. The resulting audio, while functional, suffered from heavily distorted output and low volume, and usually required all other processing to be stopped while sounds were played. Other home computers of the 1980s like the Commodore 64 included hardware support for digital sound playback or music synthesis, leaving the IBM PC at a disadvantage when it came to multimedia applications. Early sound cards for the IBM PC platform were not designed for gaming or multimedia applications, but rather for specific audio applications, such as music composition with the AdLib Personal Music System, IBM Music Feature Card, and Creative Music System, or speech synthesis with the Digispeech DS201, Covox Speech Thing, and Street Electronics Echo.
In 1988, a panel of computer-game CEOs stated at the Consumer Electronics Show that the PC's limited sound capability prevented it from becoming the leading home computer, that it needed a $49–79 sound card with better capability than current products, and that once such hardware was widely installed, their companies would support it. Sierra On-Line, which had pioneered supporting EGA and VGA video, and 3-1/2" disks, promised that year to support the AdLib, IBM Music Feature, and Roland MT-32 sound cards in its games. A 1989 Computer Gaming World survey found that 18 of 25 game companies planned to support AdLib, six Roland and Covox, and seven Creative Music System/Game Blaster.
Hardware manufacturers
One of the first manufacturers of sound cards for the IBM PC was AdLib, which produced a card based on the Yamaha YM3812 sound chip, also known as the OPL2. The AdLib had two modes: A 9-voice mode where each voice could be fully programmed, and a less frequently used percussion mode with 3 regular voices producing 5 independent percussion-only voices for a total of 11.
Creative Labs also marketed a sound card called the Creative Music System (C/MS) at about the same time. Although the C/MS had twelve voices to AdLib's nine and was a stereo card while the AdLib was mono, the basic technology behind it was based on the Philips SAA1099 chip which was essentially a square-wave generator. It sounded much like twelve simultaneous PC speakers would have except for each channel having amplitude control, and failed to sell well, even after Creative renamed it the Game Blaster a year later, and marketed it through RadioShack in the US. The Game Blaster retailed for under $100 and was compatible with many popular games, such as Silpheed.
A large change in the IBM PC-compatible sound card market happened when Creative Labs introduced the Sound Blaster card. Recommended by Microsoft to developers creating software based on the Multimedia PC standard, the Sound Blaster cloned the AdLib and added a sound coprocessor for recording and playback of digital audio. The card also included a game port for adding a joystick, and the capability to interface to MIDI equipment using the game port and a special cable. With AdLib compatibility and more features at nearly the same price, most buyers chose the Sound Blaster. It eventually outsold the AdLib and dominated the market.
Roland also made sound cards in the late 1980s such as the MT-32 and LAPC-I. Roland cards sold for hundreds of dollars. Many games, such as Silpheed and Police Quest II, had music written for their cards. The cards were often poor at sound effects such as laughs, but for music were by far the best sound cards available until the mid-nineties. Some Roland cards, such as the SCC, and later versions of the MT-32 were made to be less expensive.
By 1992, one sound card vendor advertised that its product was "Sound Blaster, AdLib, Disney Sound Source and Covox Speech Thing Compatible!" Responding to readers complaining about an article on sound cards that unfavorably mentioned the Gravis Ultrasound, Computer Gaming World stated in January 1994 that, "The de facto standard in the gaming world is Sound Blaster compatibility ... It would have been unfair to have recommended anything else." The magazine that year stated that Wing Commander II was "Probably the game responsible" for making it the standard card. The Sound Blaster line of cards, together with the first inexpensive CD-ROM drives and evolving video technology, ushered in a new era of multimedia computer applications that could play back CD audio, add recorded dialogue to video games, or even reproduce full motion video (albeit at much lower resolutions and quality in early days). The widespread decision to support the Sound Blaster design in multimedia and entertainment titles meant that future sound cards such as Media Vision's Pro Audio Spectrum and the Gravis Ultrasound had to be Sound Blaster compatible if they were to sell well. Until the early 2000s, when the AC'97 audio standard became more widespread and eventually usurped the SoundBlaster as a standard due to its low cost and integration into many motherboards, Sound Blaster compatibility was a standard that many other sound cards supported to maintain compatibility with many games and applications released.
Industry adoption
When game company Sierra On-Line opted to support add-on music hardware in addition to built-in hardware such as the PC speaker and built-in sound capabilities of the IBM PCjr and Tandy 1000, what could be done with sound and music on the IBM PC changed dramatically. Two of the companies Sierra partnered with were Roland and AdLib, opting to produce in-game music for King's Quest 4 that supported the MT-32 and AdLib Music Synthesizer. The MT-32 had superior output quality, due in part to its method of sound synthesis as well as built-in reverb. Since it was the most sophisticated synthesizer they supported, Sierra chose to use most of the MT-32's custom features and unconventional instrument patches, producing background sound effects (e.g., chirping birds, clopping horse hooves, etc.) before the Sound Blaster brought digital audio playback to the PC. Many game companies also supported the MT-32, but supported the Adlib card as an alternative because of the latter's higher market base. The adoption of the MT-32 led the way for the creation of the MPU-401, Roland Sound Canvas and General MIDI standards as the most common means of playing in-game music until the mid-1990s.
Feature evolution
Early ISA bus sound cards were half-duplex, meaning they couldn't record and play digitized sound simultaneously. Later, ISA cards like the SoundBlaster AWE series and Plug-and-play Soundblaster clones supported simultaneous recording and playback, but at the expense of using up two IRQ and DMA channels instead of one. Conventional PCI bus cards generally do not have these limitations and are mostly full-duplex.
Sound cards have evolved in terms of digital audio resolution and sampling rate, from early 8-bit samples up to the 32-bit depths that the latest solutions support. Along the way, some cards started offering wavetable synthesis, which provides superior MIDI synthesis quality relative to the earlier Yamaha OPL-based solutions, which use FM synthesis. Some higher-end cards (such as the Sound Blaster AWE32, Sound Blaster AWE64 and Sound Blaster Live!) introduced their own RAM and processor for user-definable sound samples and MIDI instruments as well as to offload audio processing from the CPU. Later integrated audio solutions (AC'97 and later HD Audio) instead rely on a software MIDI synthesizer, for example the Microsoft GS Wavetable SW Synth in Microsoft Windows.
With some exceptions, for years, sound cards, most notably the Sound Blaster series and their compatibles, had only one or two channels of digital sound. Early games and MOD-players needing more channels than a card could support had to resort to mixing multiple channels in software. Even today, the tendency is still to mix multiple sound streams in software, except in products specifically intended for gamers or professional musicians.
Crippling of features
As of 2024, sound cards are not commonly programmed with the audio loopback systems commonly called stereo mix, wave out mix, mono mix or what u hear, which previously allowed users to digitally record output otherwise only accessible to speakers.
Lenovo and other manufacturers fail to implement the feature in hardware, while other manufacturers disable the driver from supporting it. In some cases, loopback can be reinstated with driver updates. Alternatively, software such as virtual audio cable applications can be purchased to enable the functionality. According to Microsoft, the functionality was hidden by default in Windows Vista to reduce user confusion, but is still available, as long as the underlying sound card drivers and hardware support it.
Ultimately, the user can use the analog loophole and connect the line out directly to the line in on the sound card. However, in laptops, manufacturers have gradually moved from providing three separate jacks with TRS connectors (usually for line in, line out/headphone out, and microphone) to just a single combo jack with a TRRS connector that combines inputs and outputs.
Outputs
The number of physical sound channels has also increased. The first sound card solutions were mono. Stereo sound was introduced in the early 1980s, and quadraphonic sound came in 1989. This was shortly followed by 5.1 channel audio. The latest sound cards support up to 8 audio channels for the 7.1 speaker setup.
A few early sound cards had sufficient power to drive unpowered speakers directly, for example two watts per channel. With the popularity of amplified speakers, sound cards no longer have a power stage, though in many cases they can adequately drive headphones.
Professional sound cards
Professional sound cards are sound cards optimized for high-fidelity, low-latency multichannel sound recording and playback. Their drivers usually follow the Audio Stream Input/Output protocol for use with professional sound engineering and music software.
Professional sound cards are usually described as audio interfaces, and sometimes have the form of external rack-mountable units using USB, FireWire, or an optical interface, to offer sufficient data rates. The emphasis in these products is, in general, on multiple input and output connectors, direct hardware support for multiple input and output sound channels, as well as higher sampling rates and fidelity as compared to the usual consumer sound card.
On the other hand, certain features of consumer sound cards such as support for 3D audio, hardware acceleration in video games, or real-time ambiance effects are secondary, nonexistent or even undesirable in professional audio interfaces.
The typical consumer-grade sound card is intended for generic home, office, and entertainment purposes with an emphasis on playback and casual use, rather than catering to the needs of audio professionals. In general, consumer-grade sound cards impose several restrictions and inconveniences that would be unacceptable to an audio professional. Consumer sound cards are also limited in the effective sampling rates and bit depths they can actually manage and have lower numbers of less flexible input channels. Professional studio recording use typically requires more than the two channels that consumer sound cards provide, and more accessible connectors, unlike the variable mixture of internal—and sometimes virtual—and external connectors found in consumer-grade sound cards.
Sound devices other than expansion cards
Integrated sound hardware on PC motherboards
In 1984, the first IBM PCjr had a rudimentary 3-voice sound synthesis chip (the SN76489) which was capable of generating three square-wave tones with variable amplitude, and a pseudo-white noise channel that could generate primitive percussion sounds. The Tandy 1000, initially a clone of the PCjr, duplicated this functionality, with the Tandy 1000 TL/SL/RL models adding digital sound recording and playback capabilities. Many games during the 1980s that supported the PCjr's video standard (described as Tandy-compatible, Tandy graphics, or TGA) also supported PCjr/Tandy 1000 audio.
In the late 1990s, many computer manufacturers began to replace plug-in sound cards with an audio codec chip (a combined audio AD/DA-converter) integrated into the motherboard. Many of these used Intel's AC'97 specification. Others used inexpensive ACR slot accessory cards.
From around 2001, many motherboards incorporated full-featured sound cards, usually in the form of a custom chipset, providing something akin to full Sound Blaster compatibility and relatively high-quality sound. However, these features were dropped when AC'97 was superseded by Intel's HD Audio standard, which was released in 2004, again specified the use of a codec chip, and slowly gained acceptance. As of 2011, most motherboards have returned to using a codec chip, albeit an HD Audio compatible one, and the requirement for Sound Blaster compatibility relegated to history.
Integrated sound on other platforms
Many home computers have their own motherboard-integrated sound devices: Commodore 64, Amiga, PC-88, FM-7, FM Towns, Sharp X1, X68000, BBC Micro, Electron, Archimedes, Atari 8-bit computers, Atari ST, Atari Falcon, Amstrad CPC, later revisions of the ZX Spectrum, MSX, Mac, and Apple IIGS. Workstations from Sun, Silicon Graphics and NeXT do as well. In some cases, most notably in those of the Macintosh, IIGS, Amiga, C64, SGI Indigo, X68000, MSX, Falcon, Archimedes, FM-7 and FM Towns, they provide very advanced capabilities (as of the time of manufacture), in others they are only minimal capabilities. Some of these platforms have also had sound cards designed for their bus architectures that cannot be used in a standard PC.
Several Japanese computer platforms, including the MSX, X1, X68000, FM Towns and FM-7, had built-in FM synthesis sound from Yamaha by the mid-1980s. By 1989, the FM Towns computer platform featured built-in PCM sample-based sound and supported the CD-ROM format.
The custom sound chip on Amiga, named Paula, has four digital sound channels (2 for the left speaker and 2 for the right) with 8-bit resolution for each channel and a 6-bit volume control per channel. Sound playback on Amiga was done by reading directly from the chip RAM without using the main CPU.
Most arcade video games have integrated sound chips. In the 1980s it was common to have a separate microprocessor for handling communication with the sound chip.
Sound cards on other platforms
The earliest known sound card used by computers was the Gooch Synthetic Woodwind, a music device for PLATO terminals, and is widely hailed as the precursor to sound cards and MIDI. It was invented in 1972.
Certain early arcade machines made use of sound cards to achieve playback of complex audio waveforms and digital music, despite being already equipped with onboard audio. An example of a sound card used in arcade machines is the Digital Compression System card, used in games from Midway. For example, Mortal Kombat II on the Midway T-Unit hardware. The T-Unit hardware already has an onboard YM2151 OPL chip coupled with an OKI 6295 DAC, but said game uses an added-on DCS card instead. The card is also used in the arcade version of Midway and Aerosmith's Revolution X for complex looping music and speech playback.
MSX computers, while equipped with built-in sound capabilities, also relied on sound cards to produce better-quality audio. The card, known as Moonsound, uses a Yamaha OPL4 sound chip. Prior to the Moonsound, there were also sound cards called MSX Music and MSX Audio for the system, which use OPL2 and OPL3 chipsets.
The Apple II computers, which did not have sound capabilities beyond rapidly clicking a speaker until the IIGS, could use plug-in sound cards from a variety of manufacturers. The first, in 1978, was ALF's Apple Music Synthesizer, with 3 voices; two or three cards could be used to create 6 or 9 voices in stereo. Later ALF created the Apple Music II, a 9-voice model. The most widely supported card, however, was the Mockingboard. Sweet Micro Systems sold the Mockingboard in various models. Early Mockingboard models ranged from 3 voices in mono, while some later designs had 6 voices in stereo. Some software supported use of two Mockingboard cards, which allowed 12-voice music and sound. A 12-voice, single-card clone of the Mockingboard called the Phasor was made by Applied Engineering.
The ZX Spectrum, which initially had only a beeper, also had some sound cards made for it. Examples include the TurboSound, the Fuller Box, and the Zon X-81.
The Commodore 64, while having an integrated SID (Sound Interface Device) chip, also had sound cards made for it. For example, the Sound Expander, which added on an OPL FM synthesizer.
The PC-98 series of computers, like their IBM PC cousins and contrary to popular belief, do not have integrated sound; their default configuration is a PC speaker driven by a timer. Sound cards were made for the C-Bus expansion slots that these computers had; most of them used Yamaha's FM and PSG chips and were made by NEC itself, although aftermarket clones could also be purchased, and Creative did release a C-Bus version of the Sound Blaster line of sound cards for the platform.
External sound devices
Devices such as the Covox Speech Thing could be attached to the parallel port of an IBM PC and fed 6- or 8-bit PCM sample data to produce audio. Also, many types of professional sound cards take the form of an external FireWire or USB unit, usually for convenience and improved fidelity.
Sound cards using the PC Card interface were available before laptop and notebook computers routinely had onboard sound. Most of these units were designed for mobile DJs, providing separate outputs to allow both playback and monitoring from one system, however, some also target mobile gamers.
USB sound cards
USB sound cards are external devices that plug into the computer via USB. They are often used in studios and on stage by electronic musicians including live PA performers and DJs. DJs who use DJ software typically use sound cards integrated into DJ controllers or specialized DJ sound cards. DJ sound cards sometimes have inputs with phono preamplifiers to allow turntables to be connected to the computer to control the software's playback of music files with vinyl emulation.
The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices and interfaces on the market. Mac OS X, Windows, and Linux support this standard. However, some USB sound cards do not conform to the standard and require proprietary drivers from the manufacturer.
Cards meeting the older USB 1.1 specification are capable of high-quality sound with a limited number of channels, but USB 2.0 and later are more capable thanks to their higher bandwidths.
Uses
The main function of a sound card is to play audio, usually music, with varying formats (monophonic, stereophonic, various multiple speaker setups) and degrees of control. The source may be a CD or DVD, a file, streamed audio, or any external source connected to a sound card input. Audio may be recorded. Sometimes sound card hardware and drivers do not support recording a source that is being played.
Non-sound uses
Sound cards can be used to generate (output) arbitrary electrical waveforms, as any digital waveform played by the sound card is converted to the desired output within the bounds of its capabilities. In other words, sound cards are consumer-grade arbitrary waveform generators. A number of free and commercial software packages allow sound cards to act as function generators by generating desired waveforms from functions; there are also online services that generate audio files for any desired waveforms, playable through a sound card.
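A minimal sketch of the function-generator idea (Python, standard library only; the function, frequency, and scaling are arbitrary illustrative choices): any function of phase can be sampled and scaled to full-scale 16-bit values, which playback software can then send to the card's DAC:

    def sample_waveform(func, freq_hz, sample_rate=48000, periods=1):
        # Evaluate func(phase), with phase in [0, 1), and scale to 16-bit samples.
        n = int(sample_rate * periods / freq_hz)
        raw = [func((i * freq_hz / sample_rate) % 1.0) for i in range(n)]
        peak = max(abs(x) for x in raw) or 1.0
        return [int(32767 * x / peak) for x in raw]

    # One period of a 1 kHz sawtooth ramping from -1 to +1.
    sawtooth = sample_waveform(lambda phase: 2.0 * phase - 1.0, freq_hz=1000)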
Sound cards can also be used to record electrical waveforms, in the same way they record an analog audio input. The recording can be displayed by special or general-purpose audio-editing software (acting as an oscilloscope) or further transformed and analyzed. A protection circuit should be used to keep the input voltage within acceptable bounds.
As general-purpose waveform generators and analyzers, sound cards are bound by several design and physical limitations.
Sound cards have a limited sample rate, typically up to 192 kHz. Under the assumptions of the Nyquist–Shannon sampling theorem, this means a maximum signal frequency (bandwidth) of half that: 96 kHz. Real sound cards tend to have a bandwidth smaller than the Nyquist limit implies, owing to internal filtering.
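As a worked instance of that limit, assuming an ideal converter with no additional internal filtering: \( f_{\max} = \frac{f_s}{2} = \frac{192\ \mathrm{kHz}}{2} = 96\ \mathrm{kHz} \).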
As with all ADCs and DACs, sound cards produce distortion and noise. A typical integrated sound card, the Realtek ALC887, according to its data sheet has distortion about 80 dB below the fundamental; cards are available with distortion better than −100 dB.
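For orientation, a distortion level quoted in decibels below the fundamental converts to a linear fraction by the standard relation, treating the figures as amplitude ratios (a generic conversion, not a figure from the data sheet): \( 10^{-80/20} = 10^{-4} = 0.01\% \), and \( 10^{-100/20} = 10^{-5} = 0.001\% \).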
Sound cards commonly suffer from some clock drift, requiring correction of measurement results.
Sound cards have been used to analyze and generate the following types of signals:
Sound equipment testing. A very-low-distortion sinewave oscillator can be used as input to equipment under test; the output is sent to a sound card's line input and run through Fourier transform software to find the amplitude of each harmonic of the added distortion (a sketch of this analysis appears after this list). Alternatively, a less pure signal source may be used, with circuitry to subtract the input from the output, attenuated and phase-corrected; the result is distortion and noise only, which can be analyzed.
Gamma spectroscopy. A sound card can serve as a cheap multichannel analyzer for gamma spectroscopy, which allows one to distinguish different radioactive isotopes.
Longwave radio. A 192 kHz sound card can be used to receive radio signals up to 96 kHz. This bandwidth is enough for longwave time signals such as DCF77 (77.5 kHz). A coil is attached to the input side as an antenna, while special software decodes the signal. A sound card can also work in the opposite direction and generate low-power time signal transmissions (JJY at 40 kHz, using harmonics).
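The sketch below illustrates the Fourier-transform analysis mentioned in the sound equipment testing item above (assumptions: Python with NumPy; the captured signal is simulated here rather than read from a real line input):

    import numpy as np

    FS = 48000                     # sample rate of the (simulated) capture
    F0 = 1000.0                    # fundamental of the test tone in Hz
    t = np.arange(FS) / FS         # one second of samples

    # Simulated device-under-test output: the test tone plus small amounts
    # of second- and third-harmonic distortion.
    captured = (np.sin(2 * np.pi * F0 * t)
                + 1e-3 * np.sin(2 * np.pi * 2 * F0 * t)
                + 3e-4 * np.sin(2 * np.pi * 3 * F0 * t))

    # Window, transform, and compare each harmonic's amplitude to the fundamental.
    spectrum = np.abs(np.fft.rfft(captured * np.hanning(len(captured))))
    freqs = np.fft.rfftfreq(len(captured), d=1.0 / FS)
    fundamental = spectrum[np.argmin(np.abs(freqs - F0))]
    for k in (2, 3):
        harmonic = spectrum[np.argmin(np.abs(freqs - k * F0))]
        print("harmonic %d: %.1f dB below fundamental"
              % (k, 20 * np.log10(fundamental / harmonic)))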
Driver architecture
To use a sound card, the operating system (OS) typically requires a specific device driver, a low-level program that handles the data connections between the physical hardware and the operating system. Some operating systems include the drivers for many cards; for cards not so supported, drivers are supplied with the card, or available for download.
DOS programs for the IBM PC often had to use universal middleware driver libraries (such as the HMI Sound Operating System, the Miles Audio Interface Libraries (AIL), the Miles Sound System etc.) which had drivers for most common sound cards, since DOS itself had no real concept of a sound card. Some card manufacturers provided terminate-and-stay-resident drivers for their products. Often such a driver is a Sound Blaster and AdLib emulator, designed to allow games that could only use Sound Blaster or AdLib sound to work with the card. Finally, some programs simply had driver or middleware source code for the supported sound cards incorporated into the program itself.
Microsoft Windows uses drivers generally written by the sound card manufacturers. Many device manufacturers supply the drivers on their own discs or provide them to Microsoft for inclusion on the Windows installation disc. USB audio device class support is present from Windows 98 onwards. Since Microsoft's Universal Audio Architecture (UAA) initiative, which supports the HD Audio, FireWire and USB audio device class standards, a universal class driver by Microsoft can be used. The driver is included with Windows Vista. For Windows XP, Windows 2000 or Windows Server 2003, the driver can be obtained by contacting Microsoft support. Almost all manufacturer-supplied drivers for such devices also include this universal class driver.
A number of versions of UNIX make use of the portable Open Sound System (OSS). Drivers are seldom produced by the card manufacturer.
Most present-day Linux distributions make use of the Advanced Linux Sound Architecture (ALSA).
Mockingboard support on the Apple II is usually incorporated into the programs themselves, as many programs for the Apple II boot directly from disk. However, a TSR is shipped on a disk that adds instructions to Apple Basic so users can create programs that use the card, provided that the TSR is loaded first.
List of notable sound card manufacturers
Asus
Advanced Gravis Computer Technology (defunct)
AdLib (defunct)
Aureal Semiconductor (defunct)
Auzentech (defunct)
C-Media
Creative Technology
E-mu (bought out by Creative)
ESS Technology
Hercules Computer Technology
HT Omega
IBM
Korg
Media Vision
M-Audio
Onkyo
Turtle Beach Systems
VIA Technologies
| Technology | Computer hardware | null |
28189 | https://en.wikipedia.org/wiki/Space%20Shuttle | Space Shuttle | The Space Shuttle is a retired, partially reusable low Earth orbital spacecraft system operated from 1981 to 2011 by the U.S. National Aeronautics and Space Administration (NASA) as part of the Space Shuttle program. Its official program name was Space Transportation System (STS), taken from the 1969 plan led by U.S. Vice President Spiro Agnew for a system of reusable spacecraft where it was the only item funded for development.
The first (STS-1) of four orbital test flights occurred in 1981, leading to operational flights (STS-5) beginning in 1982. Five complete Space Shuttle orbiter vehicles were built and flown on a total of 135 missions from 1981 to 2011. They launched from the Kennedy Space Center (KSC) in Florida. Operational missions launched numerous satellites, interplanetary probes, and the Hubble Space Telescope (HST), conducted science experiments in orbit, participated in the Shuttle-Mir program with Russia, and participated in the construction and servicing of the International Space Station (ISS). The Space Shuttle fleet's total mission time was 1,323 days.
Space Shuttle components include the Orbiter Vehicle (OV) with three clustered Rocketdyne RS-25 main engines, a pair of recoverable solid rocket boosters (SRBs), and the expendable external tank (ET) containing liquid hydrogen and liquid oxygen. The Space Shuttle was launched vertically, like a conventional rocket, with the two SRBs operating in parallel with the orbiter's three main engines, which were fueled from the ET. The SRBs were jettisoned before the vehicle reached orbit, while the main engines continued to operate, and the ET was jettisoned after main engine cutoff and just before orbit insertion, which used the orbiter's two Orbital Maneuvering System (OMS) engines. At the conclusion of the mission, the orbiter fired its OMS to deorbit and reenter the atmosphere. The orbiter was protected during reentry by its thermal protection system tiles, and it glided as a spaceplane to a runway landing, usually to the Shuttle Landing Facility at KSC, Florida, or to Rogers Dry Lake in Edwards Air Force Base, California. If the landing occurred at Edwards, the orbiter was flown back to the KSC atop the Shuttle Carrier Aircraft (SCA), a specially modified Boeing 747 designed to carry the shuttle above it.
The first orbiter, Enterprise, was built in 1976 and used in Approach and Landing Tests (ALT), but had no orbital capability. Four fully operational orbiters were initially built: Columbia, Challenger, Discovery, and Atlantis. Of these, two were lost in mission accidents: Challenger in 1986 and Columbia in 2003, with a total of 14 astronauts killed. A fifth operational (and sixth in total) orbiter, Endeavour, was built in 1991 to replace Challenger. The three surviving operational vehicles were retired from service following Atlantis's final flight on July 21, 2011. The U.S. relied on the Russian Soyuz spacecraft to transport astronauts to the ISS from the last Shuttle flight until the launch of the Crew Dragon Demo-2 mission in May 2020.
Design and development
Historical background
In the late 1930s, the German government launched the "Amerikabomber" project; Eugen Sänger's proposal, developed together with mathematician Irene Bredt, was a winged rocket called the Silbervogel (German for "silver bird"). During the 1950s, the United States Air Force proposed using a reusable piloted glider to perform military operations such as reconnaissance, satellite attack, and air-to-ground weapons employment. In the late 1950s, the Air Force began developing the partially reusable X-20 Dyna-Soar. The Air Force collaborated with NASA on the Dyna-Soar and began training six pilots in June 1961. The rising costs of development and the prioritization of Project Gemini led to the cancellation of the Dyna-Soar program in December 1963. In addition to the Dyna-Soar, the Air Force had conducted a study in 1957 to test the feasibility of reusable boosters. This became the basis for the aerospaceplane, a fully reusable spacecraft that was never developed beyond the initial design phase in 1962–1963.
Beginning in the early 1950s, NASA and the Air Force collaborated on developing lifting bodies to test aircraft that primarily generated lift from their fuselages instead of wings, and tested the NASA M2-F1, Northrop M2-F2, Northrop M2-F3, Northrop HL-10, Martin Marietta X-24A, and the Martin Marietta X-24B. The program tested aerodynamic characteristics that would later be incorporated in design of the Space Shuttle, including unpowered landing from a high altitude and speed.
Design process
On September 24, 1966, as the Apollo space program neared its design completion, NASA and the Air Force released a joint study concluding that a new vehicle was required to satisfy their respective future demands and that a partially reusable system would be the most cost-effective solution. The head of the NASA Office of Manned Space Flight, George Mueller, announced the plan for a reusable shuttle on August 10, 1968. NASA issued a request for proposal (RFP) for designs of the Integral Launch and Reentry Vehicle (ILRV) on October 30, 1968. Rather than award a contract based upon initial proposals, NASA announced a phased approach for the Space Shuttle contracting and development; Phase A was a request for studies completed by competing aerospace companies, Phase B was a competition between two contractors for a specific contract, Phase C involved designing the details of the spacecraft components, and Phase D was the production of the spacecraft.
In December 1968, NASA created the Space Shuttle Task Group to determine the optimal design for a reusable spacecraft, and issued study contracts to General Dynamics, Lockheed, McDonnell Douglas, and North American Rockwell. In July 1969, the Space Shuttle Task Group issued a report that determined the Shuttle would support short-duration crewed missions and a space station, as well as the capabilities to launch, service, and retrieve satellites. The report also created three classes of a future reusable shuttle: Class I would have a reusable orbiter mounted on expendable boosters, Class II would use multiple expendable rocket engines and a single propellant tank (stage-and-a-half), and Class III would have both a reusable orbiter and a reusable booster. In September 1969, the Space Task Group, under the leadership of U.S. Vice President Spiro Agnew, issued a report calling for the development of a space shuttle to bring people and cargo to low Earth orbit (LEO), as well as a space tug for transfers between orbits and the Moon, and a reusable nuclear upper stage for deep space travel.
After the release of the Space Shuttle Task Group report, many aerospace engineers favored the Class III, fully reusable design because of perceived savings in hardware costs. Max Faget, a NASA engineer who had worked to design the Mercury capsule, patented a design for a two-stage fully recoverable system with a straight-winged orbiter mounted on a larger straight-winged booster. The Air Force Flight Dynamics Laboratory argued that a straight-wing design would not be able to withstand the high thermal and aerodynamic stresses during reentry, and would not provide the required cross-range capability. Additionally, the Air Force required a larger payload capacity than Faget's design allowed. In January 1971, NASA and Air Force leadership decided that a reusable delta-wing orbiter mounted on an expendable propellant tank would be the optimal design for the Space Shuttle.
After they established the need for a reusable, heavy-lift spacecraft, NASA and the Air Force determined the design requirements of their respective services. The Air Force expected to use the Space Shuttle to launch large satellites, and required it to be capable of lifting to an eastward LEO or into a polar orbit. The satellite designs also required that the Space Shuttle have a payload bay. NASA evaluated the F-1 and J-2 engines from the Saturn rockets, and determined that they were insufficient for the requirements of the Space Shuttle; in July 1971, it issued a contract to Rocketdyne to begin development on the RS-25 engine.
NASA reviewed 29 potential designs for the Space Shuttle and determined that a design with two side boosters should be used, and that the boosters should be reusable to reduce costs. NASA and the Air Force elected to use solid-propellant boosters because of the lower costs and the ease of refurbishing them for reuse after they landed in the ocean. In January 1972, President Richard Nixon approved the Shuttle, and NASA decided on its final design in March. The development of the Space Shuttle Main Engine (SSME) remained the responsibility of Rocketdyne under the contract issued in July 1971, and updated SSME specifications were submitted to Rocketdyne that April. The following August, NASA awarded the contract to build the orbiter to North American Rockwell, which had by then constructed a full-scale mock-up, later named Inspiration. In August 1973, NASA awarded the external tank contract to Martin Marietta, and in November the solid-rocket booster contract to Morton Thiokol.
Development
On June 4, 1974, Rockwell began construction on the first orbiter, OV-101, dubbed Constitution, later to be renamed Enterprise. Enterprise was designed as a test vehicle and did not include engines or heat shielding. Construction was completed on September 17, 1976, and Enterprise was moved to Edwards Air Force Base to begin testing. Rockwell also constructed the Main Propulsion Test Article (MPTA)-098, a structural truss mounted to the ET with three RS-25 engines attached. It was tested at the National Space Technology Laboratory (NSTL) to ensure that the engines could safely run through the launch profile. Rockwell conducted mechanical and thermal stress tests on Structural Test Article (STA)-099 to determine the effects of aerodynamic and thermal stresses during launch and reentry.
The start of development of the RS-25 Space Shuttle Main Engine was delayed for nine months while Pratt & Whitney challenged the contract that had been issued to Rocketdyne. The first engine was completed in March 1975, after issues with developing the first throttleable, reusable engine. During engine testing, the RS-25 experienced multiple nozzle failures, as well as broken turbine blades. Despite the problems during testing, in May 1978 NASA ordered the nine RS-25 engines needed for its three orbiters then under construction.
NASA experienced significant delays in the development of the Space Shuttle's thermal protection system. Previous NASA spacecraft had used ablative heat shields, but those could not be reused. NASA chose to use ceramic tiles for thermal protection, as the shuttle could then be constructed of lightweight aluminum, and the tiles could be individually replaced as needed. Construction began on Columbia on March 27, 1975, and it was delivered to the KSC on March 25, 1979. At the time of its arrival at the KSC, Columbia still had 6,000 of its 30,000 tiles remaining to be installed. However, many of the tiles that had been originally installed had to be replaced, requiring two years of installation before Columbia could fly.
On January 5, 1979, NASA commissioned a second orbiter. Later that month, Rockwell began converting STA-099 to OV-099, later named Challenger. On January 29, 1979, NASA ordered two additional orbiters, OV-103 and OV-104, which were named Discovery and Atlantis. Construction of OV-105, later named Endeavour, began in February 1982, but NASA decided to limit the Space Shuttle fleet to four orbiters in 1983. After the loss of Challenger, NASA resumed production of Endeavour in September 1987.
Testing
After it arrived at Edwards AFB, Enterprise underwent flight testing with the Shuttle Carrier Aircraft, a Boeing 747 that had been modified to carry the orbiter. In February 1977, Enterprise began the Approach and Landing Tests (ALT) and underwent captive flights, where it remained attached to the Shuttle Carrier Aircraft for the duration of the flight. On August 12, 1977, Enterprise conducted its first glide test, where it detached from the Shuttle Carrier Aircraft and landed at Edwards AFB. After four additional flights, Enterprise was moved to the Marshall Space Flight Center (MSFC) on March 13, 1978. Enterprise underwent shake tests in the Mated Vertical Ground Vibration Test, where it was attached to an external tank and solid rocket boosters, and underwent vibrations to simulate the stresses of launch. In April 1979, Enterprise was taken to the KSC, where it was attached to an external tank and solid rocket boosters, and moved to LC-39. Once installed at the launch pad, the Space Shuttle was used to verify the proper positioning of the launch complex hardware. Enterprise was taken back to California in August 1979, and later served in the development of the SLC-6 at Vandenberg AFB in 1984.
On November 24, 1980, Columbia was mated with its external tank and solid-rocket boosters, and was moved to LC-39 on December 29. The first Space Shuttle mission, STS-1, would be the first time NASA performed a crewed first-flight of a spacecraft. On April 12, 1981, the Space Shuttle launched for the first time, and was piloted by John Young and Robert Crippen. During the two-day mission, Young and Crippen tested equipment on board the shuttle, and found several of the ceramic tiles had fallen off the top side of the Columbia. NASA coordinated with the Air Force to use satellites to image the underside of Columbia, and determined there was no damage. Columbia reentered the atmosphere and landed at Edwards AFB on April 14.
NASA conducted three additional test flights with Columbia in 1981 and 1982. On July 4, 1982, STS-4, flown by Ken Mattingly and Henry Hartsfield, landed on a concrete runway at Edwards AFB. President Ronald Reagan and his wife Nancy met the crew, and delivered a speech. After STS-4, NASA declared its Space Transportation System (STS) operational.
Description
The Space Shuttle was the first operational orbital spacecraft designed for reuse. Each Space Shuttle orbiter was designed for a projected lifespan of 100 launches or ten years of operational life, although this was later extended. At launch, it consisted of the orbiter, which contained the crew and payload, the external tank (ET), and the two solid rocket boosters (SRBs).
Responsibility for the Space Shuttle components was spread among multiple NASA field centers. The KSC was responsible for launch, landing, and turnaround operations for equatorial orbits (the only orbit profile actually used in the program). The U.S. Air Force at the Vandenberg Air Force Base was responsible for launch, landing, and turnaround operations for polar orbits (though this was never used). The Johnson Space Center (JSC) served as the central point for all Shuttle operations and the MSFC was responsible for the main engines, external tank, and solid rocket boosters. The John C. Stennis Space Center handled main engine testing, and the Goddard Space Flight Center managed the global tracking network.
Orbiter
The orbiter had design elements and capabilities of both a rocket and an aircraft to allow it to launch vertically and then land as a glider. Its three-part fuselage provided support for the crew compartment, cargo bay, flight surfaces, and engines. The rear of the orbiter contained the Space Shuttle Main Engines (SSME), which provided thrust during launch, as well as the Orbital Maneuvering System (OMS), which allowed the orbiter to achieve, alter, and exit its orbit once in space. Its double-delta wings were long, and were swept 81° at the inner leading edge and 45° at the outer leading edge. Each wing had an inboard and outboard elevon to provide flight control during reentry, along with a flap located between the wings, below the engines to control pitch. The orbiter's vertical stabilizer was swept backwards at 45° and contained a rudder that could split to act as a speed brake. The vertical stabilizer also contained a two-part drag parachute system to slow the orbiter after landing. The orbiter used retractable landing gear with a nose landing gear and two main landing gear, each containing two tires. The main landing gear contained two brake assemblies each, and the nose landing gear contained an electro-hydraulic steering mechanism.
Crew
The Space Shuttle crew varied per mission. Crew members underwent rigorous testing and training to meet the qualification requirements for their roles. The crew was divided into three categories: Pilots, Mission Specialists, and Payload Specialists. Pilots were further divided into two roles: the Space Shuttle Commander, who sat in the forward left seat, and the Space Shuttle Pilot, who sat in the forward right seat. The test flights, STS-1 through STS-4, had only two crew members each, the commander and pilot, both of whom were qualified to fly and land the orbiter. On-orbit operations, such as experiments, payload deployment, and EVAs, were conducted primarily by the mission specialists, who were specifically trained for their intended missions and systems. Early in the Space Shuttle program, NASA flew with payload specialists, who were typically systems specialists working for the company paying for the payload's deployment or operations. The final payload specialist, Gregory B. Jarvis, flew on STS-51-L, and future non-pilots were designated as mission specialists. An astronaut flew as a crewed spaceflight engineer on both STS-51-C and STS-51-J to serve as a military representative for a National Reconnaissance Office payload. A Space Shuttle crew typically had seven astronauts, with STS-61-A flying with eight.
Crew compartment
The crew compartment comprised three decks and was the pressurized, habitable area on all Space Shuttle missions. The flight deck consisted of two seats for the commander and pilot, as well as an additional two to four seats for crew members. The mid-deck was located below the flight deck and was where the galley and crew bunks were set up, as well as three or four crew member seats. The mid-deck contained the airlock, which could support two astronauts on an extravehicular activity (EVA), as well as access to pressurized research modules. An equipment bay was below the mid-deck, which stored environmental control and waste management systems.
On the first four Shuttle missions, astronauts wore modified U.S. Air Force high-altitude full-pressure suits, which included a full-pressure helmet during ascent and descent. From the fifth flight, STS-5, until the loss of Challenger, the crew wore one-piece light blue Nomex flight suits and partial-pressure helmets. After the Challenger disaster, crew members wore the Launch Entry Suit (LES), a partial-pressure version of the high-altitude pressure suits with a helmet. In 1994, the LES was replaced by the full-pressure Advanced Crew Escape Suit (ACES), which improved the safety of the astronauts in an emergency. Columbia originally had modified SR-71 zero-zero ejection seats installed for the ALT and the first four missions, but these were disabled after STS-4 and removed after STS-9.
The flight deck was the top level of the crew compartment and contained the flight controls for the orbiter. The commander sat in the front left seat, and the pilot sat in the front right seat, with two to four additional seats set up for additional crew members. The instrument panels contained over 2,100 displays and controls, and the commander and pilot were both equipped with a heads-up display (HUD) and a Rotational Hand Controller (RHC) to gimbal the engines during powered flight and fly the orbiter during unpowered flight. Both seats also had rudder controls, to allow rudder movement in flight and nose-wheel steering on the ground. The orbiter vehicles were originally installed with the Multifunction CRT Display System (MCDS) to display and control flight information. The MCDS displayed the flight information at the commander and pilot seats, as well as at the aft seating location, and also controlled the data on the HUD. In 1998, Atlantis was upgraded with the Multifunction Electronic Display System (MEDS), which was a glass cockpit upgrade to the flight instruments that replaced the eight MCDS display units with 11 multifunction colored digital screens. MEDS was flown for the first time in May 2000 on STS-101, and the other orbiter vehicles were upgraded to it. The aft section of the flight deck contained windows looking into the payload bay, as well as an RHC to control the Remote Manipulator System during cargo operations. Additionally, the aft flight deck had monitors for a closed-circuit television to view the cargo bay.
The mid-deck contained the crew equipment storage, sleeping area, galley, medical equipment, and hygiene stations for the crew. The crew used modular lockers to store equipment that could be scaled depending on their needs, as well as permanently installed floor compartments. The mid-deck contained a port-side hatch that the crew used for entry and exit while on Earth.
Airlock
An airlock allows movement between two spaces with different gas compositions, conditions, or pressures. Each orbiter was originally fitted with an internal airlock in the mid-deck. On Discovery, Atlantis, and Endeavour, the internal airlock was replaced with an external airlock in the payload bay, along with the Orbiter Docking System, to improve docking with Mir and the ISS. The airlock module could either be fitted in the mid-deck or mounted in the payload bay and connected to it. The cylindrical airlock could hold two suited astronauts and had two D-shaped hatchways.
Flight systems
The orbiter was equipped with an avionics system to provide information and control during atmospheric flight. Its avionics suite contained three microwave scanning beam landing systems, three gyroscopes, three TACANs, three accelerometers, two radar altimeters, two barometric altimeters, three attitude indicators, two Mach indicators, and two Mode C transponders. During reentry, the crew deployed two air data probes once they were traveling slower than Mach 5. The orbiter had three inertial measurement units (IMUs) that it used for guidance and navigation during all phases of flight, and two star trackers to align the IMUs while in orbit. The star trackers were deployed in orbit and could automatically or manually align on a star. In 1991, NASA began upgrading the inertial measurement units with an inertial navigation system (INS), which provided more accurate location information. In 1993, NASA flew a GPS receiver for the first time aboard STS-51. In 1997, Honeywell began developing an integrated GPS/INS to replace the IMU, INS, and TACAN systems, which first flew on STS-118 in August 2007.
While in orbit, the crew primarily communicated using one of four S band radios, which provided both voice and data communications. Two of the S band radios were phase modulation transceivers, and could transmit and receive information. The other two S band radios were frequency modulation transmitters and were used to transmit data to NASA. As S band radios can operate only within their line of sight, NASA used the Tracking and Data Relay Satellite System and the Spacecraft Tracking and Data Acquisition Network ground stations to communicate with the orbiter throughout its orbit. Additionally, the orbiter deployed a high-bandwidth Ku band radio out of the cargo bay, which could also be utilized as a rendezvous radar. The orbiter was also equipped with two UHF radios for communications with air traffic control and astronauts conducting EVA.
The Space Shuttle's fly-by-wire control system was entirely reliant on its main computer, the Data Processing System (DPS). The DPS controlled the flight controls and thrusters on the orbiter, as well as the ET and SRBs during launch. The DPS consisted of five general-purpose computers (GPC), two magnetic tape mass memory units (MMUs), and the associated sensors to monitor the Space Shuttle components. The original GPC used was the IBM AP-101B, which used a separate central processing unit (CPU) and input/output processor (IOP), and non-volatile solid-state memory. From 1991 to 1993, the orbiter vehicles were upgraded to the AP-101S, which improved the memory and processing capabilities, and reduced the volume and weight of the computers by combining the CPU and IOP into a single unit. Four of the GPCs were loaded with the Primary Avionics Software System (PASS), which was Space Shuttle-specific software that provided control through all phases of flight. During ascent, maneuvering, reentry, and landing, the four PASS GPCs functioned identically to produce quadruple redundancy and would error check their results. In case of a software error that would cause erroneous reports from the four PASS GPCs, a fifth GPC ran the Backup Flight System, which used a different program and could control the Space Shuttle through ascent, orbit, and reentry, but could not support an entire mission. The five GPCs were housed in three separate bays within the mid-deck to provide redundancy in the event of a cooling fan failure. After achieving orbit, the crew would switch some of the GPCs' functions from guidance, navigation, and control (GNC) to systems management (SM) and payload (PL) to support the operational mission. The Space Shuttle was not launched if its flight would run from December to January, as its flight software would have required the orbiter vehicle's computers to be reset at the year change. In 2007, NASA engineers devised a solution so Space Shuttle flights could cross the year-end boundary.
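The redundancy scheme described above can be illustrated with a small sketch. The following Python fragment is a minimal, hypothetical illustration of majority voting among four redundant computer outputs, with a divergent unit flagged; it is not the actual PASS software, and the tolerance and example values are invented for illustration.

```python
# Hypothetical sketch of N-version majority voting, in the spirit of the
# four PASS GPCs cross-checking their results. Not actual Shuttle software.
from collections import Counter

def vote(outputs, tolerance=1e-6):
    """Return the majority value among redundant computer outputs and
    the indices of any units that disagree with it."""
    # Quantize to the comparison tolerance so tiny numerical noise
    # does not split the vote.
    quantized = [round(x / tolerance) for x in outputs]
    majority_q, count = Counter(quantized).most_common(1)[0]
    if count < (len(outputs) // 2 + 1):
        raise RuntimeError("no majority -- hand control to backup system")
    disagreeing = [i for i, q in enumerate(quantized) if q != majority_q]
    return majority_q * tolerance, disagreeing

# Example: the third unit produces an erroneous command value.
command, failed = vote([12.50, 12.50, 13.75, 12.50])
print(command, failed)   # 12.5 [2]
```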
Space Shuttle missions typically brought a portable general support computer (PGSC) that could integrate with the orbiter vehicle's computers and communication suite, as well as monitor scientific and payload data. Early missions brought the Grid Compass, one of the first laptop computers, as the PGSC, but later missions brought Apple and Intel laptops.
Payload bay
The payload bay comprised most of the orbiter vehicle's fuselage, and provided the cargo-carrying space for the Space Shuttle's payloads. It was long and wide, and could accommodate cylindrical payloads up to in diameter. Two payload bay doors hinged on either side of the bay, and provided a relatively airtight seal to protect payloads from heating during launch and reentry. Payloads were secured in the payload bay to the attachment points on the longerons. The payload bay doors served an additional function as radiators for the orbiter vehicle's heat, and were opened upon reaching orbit for heat rejection.
The orbiter could be used in conjunction with a variety of add-on components depending on the mission. This included orbital laboratories, boosters for launching payloads farther into space, the Remote Manipulator System (RMS), and optionally the EDO pallet to extend the mission duration. To limit the fuel consumption while the orbiter was docked at the ISS, the Station-to-Shuttle Power Transfer System (SSPTS) was developed to convert and transfer station power to the orbiter. The SSPTS was first used on STS-118, and was installed on Discovery and Endeavour.
Remote Manipulator System
The Remote Manipulator System (RMS), also known as Canadarm, was a mechanical arm attached to the cargo bay. It could be used to grasp and manipulate payloads, as well as serve as a mobile platform for astronauts conducting an EVA. The RMS was built by the Canadian company Spar Aerospace and was controlled by an astronaut inside the orbiter's flight deck using their windows and closed-circuit television. The RMS allowed for six degrees of freedom and had six joints located at three points along the arm. The original RMS could deploy or retrieve payloads up to , which was later improved to .
Spacelab
The Spacelab module was a European-funded pressurized laboratory that was carried within the payload bay and allowed for scientific research while in orbit. The Spacelab module contained two segments that were mounted in the aft end of the payload bay to maintain the center of gravity during flight. Astronauts entered the Spacelab module through a tunnel that connected to the airlock. The Spacelab equipment was primarily stored in pallets, which provided storage for both experiments as well as computer and power equipment. Spacelab hardware was flown on 28 missions through 1999 and studied subjects including astronomy, microgravity, radar, and life sciences. Spacelab hardware also supported missions such as Hubble Space Telescope (HST) servicing and space station resupply. The Spacelab module was tested on STS-2 and STS-3, and the first full mission was on STS-9.
RS-25 engines
Three RS-25 engines, also known as the Space Shuttle Main Engines (SSME), were mounted on the orbiter's aft fuselage in a triangular pattern. The engine nozzles could gimbal ±10.5° in pitch, and ±8.5° in yaw during ascent to change the direction of their thrust to steer the Shuttle. The titanium alloy reusable engines were independent of the orbiter vehicle and would be removed and replaced in between flights. The RS-25 is a staged-combustion cycle cryogenic engine that used liquid oxygen and hydrogen and had a higher chamber pressure than any previous liquid-fueled rocket. The original main combustion chamber operated at a maximum pressure of . The engine nozzle is tall and has an interior diameter of . The nozzle is cooled by 1,080 interior lines carrying liquid hydrogen and is thermally protected by insulative and ablative material.
The RS-25 engines had several improvements to enhance reliability and power. During the development program, Rocketdyne determined that the engine was capable of safe, reliable operation at 104% of the originally specified thrust. To keep engine thrust values consistent with previous documentation and software, NASA kept the originally specified thrust as 100%, but had the RS-25 operate at higher thrust. RS-25 upgrade versions were denoted as Block I and Block II. A 109% thrust level was achieved with the Block II engines in 2001; because of a larger throat area, this reduced the chamber pressure to . The normal maximum throttle was 104%, with 106% or 109% used for mission aborts.
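To make the percentage convention concrete, the sketch below converts a commanded power level into thrust relative to the fixed 100% reference. The rated-thrust value is a placeholder parameter, not a figure taken from this article.

```python
# Illustrative only: the 100% "rated power level" stayed fixed for
# documentation purposes while engines were commanded above it.
def thrust_at(power_level_pct, rated_thrust_100pct):
    """Thrust at a commanded power level, relative to the fixed 100% level."""
    return (power_level_pct / 100.0) * rated_thrust_100pct

F_RATED = 1.0  # placeholder rated thrust at 100% (arbitrary units)
for level in (100, 104, 106, 109):
    print(f"{level}% -> {thrust_at(level, F_RATED):.2f} x rated thrust")
```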
Orbital Maneuvering System
The Orbital Maneuvering System (OMS) consisted of two aft-mounted AJ10-190 engines and the associated propellant tanks. The AJ10 engines used monomethylhydrazine (MMH) oxidized by dinitrogen tetroxide (N2O4). The pods carried a maximum of of MMH and of N2O4. The OMS engines were used after main engine cut-off (MECO) for orbital insertion. Throughout the flight, they were used for orbit changes, as well as the deorbit burn prior to reentry. Each OMS engine produced of thrust, and the entire system could provide of velocity change.
Thermal protection system
The orbiter was protected from heat during reentry by the thermal protection system (TPS), a thermal soaking protective layer around the orbiter. In contrast with previous US spacecraft, which had used ablative heat shields, the reusability of the orbiter required a multi-use heat shield. During reentry, the TPS experienced temperatures up to , but had to keep the orbiter vehicle's aluminum skin temperature below . The TPS primarily consisted of four types of tiles. The nose cone and leading edges of the wings experienced temperatures above , and were protected by reinforced carbon-carbon tiles (RCC). Thicker RCC tiles were developed and installed in 1998 to prevent damage from micrometeoroid and orbital debris, and were further improved after RCC damage caused in the Columbia disaster. Beginning with STS-114, the orbiter vehicles were equipped with the wing leading edge impact detection system to alert the crew to any potential damage. The entire underside of the orbiter vehicle, as well as the other hottest surfaces, were protected with tiles of high-temperature reusable surface insulation, made of borosilicate glass-coated silica fibers that trapped heat in air pockets and redirected it out. Areas on the upper parts of the orbiter vehicle were coated in tiles of white low-temperature reusable surface insulation with similar composition, which provided protection for temperatures below . The payload bay doors and parts of the upper wing surfaces were coated in reusable Nomex felt surface insulation or in beta cloth, as the temperature there remained below .
External tank
The Space Shuttle external tank (ET) carried the propellant for the Space Shuttle Main Engines, and connected the orbiter vehicle with the solid rocket boosters. The ET was tall and in diameter, and contained separate tanks for liquid oxygen and liquid hydrogen. The liquid oxygen tank was housed in the nose of the ET, and was tall. The liquid hydrogen tank comprised the bulk of the ET, and was tall. The orbiter vehicle was attached to the ET at two umbilical plates, which contained five propellant and two electrical umbilicals, and forward and aft structural attachments. The exterior of the ET was covered in orange spray-on foam to allow it to survive the heat of ascent.
The ET provided propellant to the Space Shuttle Main Engines from liftoff until main engine cutoff. The ET separated from the orbiter vehicle 18 seconds after engine cutoff, and separation could be triggered automatically or manually. At the time of separation, the orbiter vehicle retracted its umbilical plates, and the umbilicals were sealed to prevent excess propellant from venting into the orbiter vehicle. After the bolts attached at the structural attachments were sheared, the ET separated from the orbiter vehicle. At the time of separation, gaseous oxygen was vented from the nose to cause the ET to tumble, ensuring that it would break up upon reentry. The ET was the only major component of the Space Shuttle system that was not reused, and it would travel along a ballistic trajectory into the Indian or Pacific Ocean.
For the first two missions, STS-1 and STS-2, the ET was covered in of white fire-retardant latex paint to provide protection against damage from ultraviolet radiation. Further research determined that the foam itself provided sufficient protection, and the ET was no longer covered in latex paint beginning with STS-3. A light-weight tank (LWT) was first flown on STS-6, which reduced tank weight by . The LWT's weight was reduced by removing components from the hydrogen tank and reducing the thickness of some skin panels. In 1998, a super light-weight ET (SLWT) first flew on STS-91. The SLWT used the 2195 aluminum-lithium alloy, which was 40% stronger and 10% less dense than its predecessor, the 2219 aluminum alloy. The SLWT weighed less than the LWT, which allowed the Space Shuttle to deliver heavy elements to the ISS's high-inclination orbit.
Solid Rocket Boosters
The Solid Rocket Boosters (SRB) provided 71.4% of the Space Shuttle's thrust during liftoff and ascent, and were the largest solid-propellant motors ever flown. Each SRB was tall and wide, weighed , and had a steel exterior approximately thick. The SRB's subcomponents were the solid-propellant motor, nose cone, and rocket nozzle. The solid-propellant motor comprised the majority of the SRB's structure. Its casing consisted of 11 steel sections which made up its four main segments. The nose cone housed the forward separation motors and the parachute systems that were used during recovery. The rocket nozzles could gimbal up to 8° to allow for in-flight adjustments.
The rocket motors were each filled with a total of solid rocket propellant (APCP+PBAN), and joined in the Vehicle Assembly Building (VAB) at KSC. In addition to providing thrust during the first stage of launch, the SRBs provided structural support for the orbiter vehicle and ET, as they were the only system that was connected to the mobile launcher platform (MLP). At the time of launch, the SRBs were armed at T−5 minutes, and could only be electrically ignited once the RS-25 engines had ignited and were without issue. They each provided of thrust, which was later improved to beginning on STS-8. After expending their fuel, the SRBs were jettisoned approximately two minutes after launch at an altitude of approximately . Following separation, they deployed drogue and main parachutes, landed in the ocean, and were recovered by the crews aboard the ships MV Freedom Star and MV Liberty Star. Once they were returned to Cape Canaveral, they were cleaned and disassembled. The rocket motor, igniter, and nozzle were then shipped to Thiokol to be refurbished and reused on subsequent flights.
The SRBs underwent several redesigns throughout the program's lifetime. STS-6 and STS-7 used SRBs that were lighter because of thinner case walls, but the walls were determined to be too thin to fly safely. Subsequent flights until STS-26 used cases that were thinner than the standard-weight cases, which reduced weight by . After the Challenger disaster, which resulted from an O-ring failing at low temperature, the SRBs were redesigned to provide a constant seal regardless of the ambient temperature.
Support vehicles
The Space Shuttle's operations were supported by vehicles and infrastructure that facilitated its transportation, construction, and crew access. The crawler-transporters carried the MLP and the Space Shuttle from the VAB to the launch site. The Shuttle Carrier Aircraft (SCA) were two modified Boeing 747s that could carry an orbiter on their backs. The original SCA (N905NA) was first flown in 1975, and was used for the ALT and for ferrying the orbiter from Edwards AFB to the KSC on all missions prior to 1991. A second SCA (N911NA) was acquired in 1988, and was first used to transport Endeavour from the factory to the KSC. Following the retirement of the Space Shuttle, N905NA was put on display at the JSC, and N911NA was put on display at the Joe Davies Heritage Airpark in Palmdale, California. The Crew Transport Vehicle (CTV) was a modified airport jet bridge used to help astronauts egress from the orbiter after landing, where they would undergo their post-mission medical checkups. The Astrovan transported astronauts from the crew quarters in the Operations and Checkout Building to the launch pad on launch day. The NASA Railroad comprised three locomotives that transported SRB segments from the Florida East Coast Railway in Titusville to the KSC.
Mission profile
Launch preparation
The Space Shuttle was prepared for launch primarily in the VAB at the KSC. The SRBs were assembled and attached to the external tank on the MLP. The orbiter vehicle was prepared at the Orbiter Processing Facility (OPF) and transferred to the VAB, where a crane was used to rotate it to the vertical orientation and mate it to the external tank. Once the entire stack was assembled, the MLP was carried to Launch Complex 39 by one of the crawler-transporters. After the Space Shuttle arrived at one of the two launchpads, it would connect to the Fixed and Rotating Service Structures, which provided servicing capabilities, payload insertion, and crew transportation. The crew was transported to the launch pad at T−3 hours and entered the orbiter vehicle, which was closed out at T−2 hours. Loading of liquid oxygen and hydrogen into the external tank, via umbilicals that attached to the orbiter vehicle, began at T−5 hours 35 minutes. At T−3 hours 45 minutes, the hydrogen fast-fill was complete, followed 15 minutes later by the oxygen tank fill. Both tanks were slowly topped off until launch as the oxygen and hydrogen boiled off.
The launch commit criteria considered precipitation, temperatures, cloud cover, lightning forecast, wind, and humidity. The Space Shuttle was not launched under conditions where it could have been struck by lightning, as its exhaust plume could have triggered lightning by providing a current path to ground after launch, which occurred on Apollo 12. The NASA Anvil Rule for a Shuttle launch stated that an anvil cloud could not appear within a distance of . The Shuttle Launch Weather Officer monitored conditions until the final decision to scrub a launch was announced. In addition to the weather at the launch site, conditions had to be acceptable at one of the Transatlantic Abort Landing sites and the SRB recovery area.
Launch
The mission crew and the Launch Control Center (LCC) personnel completed systems checks throughout the countdown. Two built-in holds, at T−20 minutes and T−9 minutes, provided scheduled breaks to address any issues and complete additional preparation. After the built-in hold at T−9 minutes, the countdown was automatically controlled by the Ground Launch Sequencer (GLS) at the LCC, which stopped the countdown if it sensed a critical problem with any of the Space Shuttle's onboard systems. At T−3 minutes 45 seconds, the engines began conducting gimbal tests, which were concluded at T−2 minutes 15 seconds. The ground Launch Processing System handed off control to the orbiter vehicle's GPCs at T−31 seconds. At T−16 seconds, the GPCs armed the SRBs, and the sound suppression system (SPS) began to drench the MLP and SRB trenches with water to protect the orbiter vehicle from damage by acoustical energy and rocket exhaust reflected from the flame trench and MLP during lift-off. At T−10 seconds, hydrogen igniters were activated under each engine bell to quell the stagnant gas inside the cones before ignition. Failure to burn these gases could trip the onboard sensors and create the possibility of an overpressure and explosion of the vehicle during the firing phase. The hydrogen tank's prevalves were opened at T−9.5 seconds in preparation for engine start.
Beginning at T−6.6 seconds, the main engines were ignited sequentially at 120-millisecond intervals. All three RS-25 engines were required to reach 90% rated thrust by T−3 seconds, otherwise the GPCs would initiate an RSLS abort. If all three engines indicated nominal performance by T−3 seconds, they were commanded to gimbal to liftoff configuration and the command would be issued to arm the SRBs for ignition at T−0. Between T−6.6 seconds and T−3 seconds, while the RS-25 engines were firing but the SRBs were still bolted to the pad, the offset thrust caused the Space Shuttle to pitch down, as measured at the tip of the external tank; the 3-second delay allowed the stack to return to nearly vertical before SRB ignition. This movement was nicknamed the "twang". At T−0, the eight frangible nuts holding the SRBs to the pad were detonated, the final umbilicals were disconnected, the SSMEs were commanded to 100% throttle, and the SRBs were ignited. By T+0.23 seconds, the SRBs built up enough thrust for liftoff to commence, and they reached maximum chamber pressure by T+0.6 seconds. At T−0, the JSC Mission Control Center assumed control of the flight from the LCC.
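A minimal sketch of the engine-start check described above, assuming simplified timing and hypothetical ignition and thrust-readout callbacks; it is not the actual GPC software.

```python
# Hypothetical sketch: stagger ignition commands 120 ms apart from T-6.6 s
# and require >= 90% rated thrust on all three engines by T-3 s, otherwise
# command an RSLS (on-pad) abort.
IGNITION_START = -6.6          # seconds relative to T-0
STAGGER = 0.120                # seconds between engine ignition commands
THRUST_THRESHOLD = 0.90        # fraction of rated thrust required by T-3 s

def rsls_engine_start(ignite, thrust_fraction):
    """ignite(i, t): command engine i to start at time t.
    thrust_fraction(i): read engine i's thrust as a fraction of rated."""
    for i in range(3):
        # Commands are issued at T-6.6 s, T-6.48 s, and T-6.36 s.
        ignite(i, IGNITION_START + i * STAGGER)
    ok = all(thrust_fraction(i) >= THRUST_THRESHOLD for i in range(3))
    if not ok:
        return "RSLS abort: shut down engines on the pad"
    return "arm SRBs for ignition at T-0"

# Toy usage with stubbed telemetry:
print(rsls_engine_start(lambda i, t: None, lambda i: [0.95, 0.97, 0.96][i]))
```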
At T+4 seconds, when the Space Shuttle reached an altitude of , the RS-25 engines were throttled up to 104.5%. At approximately T+7 seconds, the Space Shuttle rolled to a heads-down orientation at an altitude of , which reduced aerodynamic stress and provided an improved communication and navigation orientation. Approximately 20–30 seconds into ascent and an altitude of , the RS-25 engines were throttled down to 65–72% to reduce the maximum aerodynamic forces at Max Q. Additionally, the shape of the SRB propellant was designed to cause thrust to decrease at the time of Max Q. The GPCs could dynamically control the throttle of the RS-25 engines based upon the performance of the SRBs.
At approximately T+123 seconds and an altitude of , pyrotechnic fasteners released the SRBs, which reached an apogee of before parachuting into the Atlantic Ocean. The Space Shuttle continued its ascent using only the RS-25 engines. On earlier missions, the Space Shuttle remained in the heads-down orientation to maintain communications with the tracking station in Bermuda, but later missions, beginning with STS-87, rolled to a heads-up orientation at T+6 minutes for communication with the tracking and data relay satellite constellation. The RS-25 engines were throttled at T+7 minutes 30 seconds to limit vehicle acceleration to 3 g. At 6 seconds prior to main engine cutoff (MECO), which occurred at T+8 minutes 30 seconds, the RS-25 engines were throttled down to 67%. The GPCs controlled ET separation and dumped the remaining liquid oxygen and hydrogen to prevent outgassing while in orbit. The ET continued on a ballistic trajectory and broke up during reentry, with some small pieces landing in the Indian or Pacific Ocean.
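The 3 g limit mentioned above amounts to throttling so that thrust divided by the steadily decreasing vehicle mass never exceeds three times standard gravity. The sketch below uses made-up mass and thrust numbers purely to show that relationship; they are not Shuttle flight data.

```python
# Illustrative throttle calculation for a 3 g sensed-acceleration limit.
G0 = 9.80665                 # standard gravity, m/s^2
A_LIMIT = 3 * G0             # allowed thrust acceleration, m/s^2

def max_throttle(current_mass_kg, full_thrust_newtons):
    """Largest throttle fraction that keeps thrust/mass <= 3 g."""
    return min(1.0, A_LIMIT * current_mass_kg / full_thrust_newtons)

# As propellant burns off and the stack gets lighter, the allowed
# throttle setting falls.
full_thrust = 5.0e6          # hypothetical total engine thrust, N
for mass in (250e3, 200e3, 150e3):   # hypothetical masses, kg
    print(mass, round(max_throttle(mass, full_thrust), 2))
```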
Early missions used two firings of the OMS to achieve orbit; the first firing raised the apogee while the second circularized the orbit. Missions after STS-38 used the RS-25 engines to achieve the optimal apogee, and used the OMS engines to circularize the orbit. The orbital altitude and inclination were mission-dependent, and the Space Shuttle's orbits varied from .
In orbit
The type of mission that the Space Shuttle was assigned dictated the type of orbit it entered. The initial design of the reusable Space Shuttle envisioned an increasingly cheap launch platform to deploy commercial and government satellites. Early missions routinely ferried satellites, which determined the type of orbit that the orbiter vehicle would enter. Following the Challenger disaster, many commercial payloads were moved to expendable commercial rockets, such as the Delta II. While later missions still launched commercial payloads, Space Shuttle assignments were routinely directed towards scientific payloads, such as the Hubble Space Telescope, Spacelab, and the Galileo spacecraft. Beginning with STS-71, the orbiter vehicle conducted dockings with the Mir space station. In its final decade of operation, the Space Shuttle was used for the construction of the International Space Station. Most missions involved staying in orbit for several days to two weeks, although longer missions were possible with the Extended Duration Orbiter pallet. At 17 days and 15 hours, STS-80 was the longest Space Shuttle mission.
Re-entry and landing
Approximately four hours prior to deorbit, the crew began preparing the orbiter vehicle for reentry by closing the payload doors, radiating excess heat, and retracting the Ku band antenna. The orbiter vehicle maneuvered to an upside-down, tail-first orientation and began a 2–4 minute OMS burn approximately 20 minutes before it reentered the atmosphere. The orbiter vehicle reoriented itself to a nose-forward position with a 40° angle-of-attack, and the forward reaction control system (RCS) jets were emptied of fuel and disabled prior to reentry. The orbiter vehicle's reentry was defined as starting at an altitude of , when it was traveling at approximately Mach 25. The orbiter vehicle's reentry was controlled by the GPCs, which followed a preset angle-of-attack plan to prevent unsafe heating of the TPS. During reentry, the orbiter's speed was regulated by altering the amount of drag produced, which was controlled by means of angle of attack, as well as bank angle. The latter could be used to control drag without changing the angle of attack. A series of roll reversals were performed to control azimuth while banking. The orbiter vehicle's aft RCS jets were disabled as its ailerons, elevators, and rudder became effective in the lower atmosphere. At an altitude of , the orbiter vehicle opened its speed brake on the vertical stabilizer. At 8 minutes 44 seconds prior to landing, the crew deployed the air data probes, and began lowering the angle-of-attack to 36°. The orbiter's maximum glide ratio/lift-to-drag ratio varied considerably with speed, ranging from 1.3 at hypersonic speeds to 4.9 at subsonic speeds. The orbiter vehicle flew to one of the two Heading Alignment Cones, located away from each end of the runway's centerline, where it made its final turns to dissipate excess energy prior to its approach and landing. Once the orbiter vehicle was traveling subsonically, the crew took over manual control of the flight.
The approach and landing phase began when the orbiter vehicle was at an altitude of and traveling at . The orbiter followed either a 20° or 18° glideslope and descended at approximately . The speed brake was used to maintain a constant speed, and the crew initiated a pre-flare maneuver to a 1.5° glideslope at an altitude of . The landing gear was deployed 10 seconds prior to touchdown, when the orbiter was at an altitude of and traveling . A final flare maneuver reduced the orbiter vehicle's descent rate to , with touchdown occurring at , depending on the weight of the orbiter vehicle. After the landing gear touched down, the crew deployed a drag chute out of the vertical stabilizer and began wheel braking when the orbiter was traveling slower than . After the orbiter's wheels stopped, the crew deactivated the flight components and prepared to exit.
Landing sites
The primary Space Shuttle landing site was the Shuttle Landing Facility at KSC, where 78 of the 133 successful landings occurred. In the event of unfavorable landing conditions, the Shuttle could delay its landing or land at an alternate location. The primary alternate was Edwards AFB, which was used for 54 landings. STS-3 landed at the White Sands Space Harbor in New Mexico and required extensive post-processing after exposure to the gypsum-rich sand, some of which was found in Columbia debris after STS-107. Landings at alternate airfields required the Shuttle Carrier Aircraft to transport the orbiter back to Cape Canaveral.
In addition to the pre-planned landing airfields, there were 85 agreed-upon emergency landing sites to be used in different abort scenarios, with 58 located in other countries. The landing locations were chosen based upon political relationships, favorable weather, a runway at least long, and TACAN or DME equipment. Additionally, as the orbiter vehicle only had UHF radios, international sites with only VHF radios would have been unable to communicate directly with the crew. Facilities on the east coast of the US were planned for East Coast Abort Landings, while several sites in Europe and Africa were planned in the event of a Transoceanic Abort Landing. The facilities were prepared with equipment and personnel in the event of an emergency shuttle landing but were never used.
Post-landing processing
After the landing, ground crews approached the orbiter to conduct safety checks. Teams wearing self-contained breathing gear tested for the presence of hydrogen, hydrazine, monomethylhydrazine, nitrogen tetroxide, and ammonia to ensure the landing area was safe. Air conditioning and Freon lines were connected to cool the crew and equipment and dissipate excess heat from reentry. A flight surgeon boarded the orbiter and performed medical checks of the crew before they disembarked.
Once the orbiter was secured, it was towed to the OPF to be inspected, repaired, and prepared for the next mission. The processing included:
removal and installation of mission-specific items and payloads
draining of waste and leftover consumables, and refilling of new consumables
inspection and (if necessary) repair of the thermal protection system
checkout and servicing of main engines (done in the Main Engine Processing Facility to facilitate easier access, necessitating their removal from the orbiter)
if necessary, removal of the Orbital Maneuvering System and Reaction Control System pods for maintenance at the Hypergol Maintenance Facility
installation of any mid-life upgrades and modifications
Space Shuttle program
The Space Shuttle flew from April 12, 1981, until July 21, 2011. Throughout the program, the Space Shuttle flew 135 missions, of which 133 returned safely. Over its lifetime, the Space Shuttle was used to conduct scientific research, deploy commercial, military, and scientific payloads, and was involved in the construction and operation of Mir and the ISS. During its tenure, the Space Shuttle served as the only U.S. vehicle to launch astronauts; it had no replacement until the launch of Crew Dragon Demo-2 on May 30, 2020.
Budget
The overall NASA budget of the Space Shuttle program has been estimated at $221 billion (in 2012 dollars). The developers of the Space Shuttle advocated reusability as a cost-saving measure, which resulted in higher development costs in exchange for presumed lower costs-per-launch. During the design of the Space Shuttle, the Phase B proposals were not as cheap as the initial Phase A estimates had indicated; Space Shuttle program manager Robert Thompson acknowledged that reducing cost-per-pound was not the primary objective of the further design phases, as other technical requirements could not be met with the reduced costs. Development estimates made in 1972 projected a payload cost as low as $1,109 (in 2012 dollars) per pound, but the actual payload costs, not including the costs of Space Shuttle research and development, were $37,207 (in 2012 dollars) per pound. Per-launch costs varied throughout the program and depended on the rate of flights as well as on research, development, and investigation proceedings throughout the Space Shuttle program. In 1982, NASA published an estimate of $260 million (in 2012 dollars) per flight, which was based on a prediction of 24 flights per year for a decade. The per-launch cost from 1995 to 2002, when the orbiters and ISS were not being constructed and there was no recovery work following a loss of crew, was $806 million. NASA published a study in 1999 that concluded that the cost was $576 million (in 2012 dollars) per flight if there were seven launches per year. In 2009, NASA determined that the cost of adding a single launch per year was $252 million (in 2012 dollars), which indicated that much of the Space Shuttle program's cost went to year-round personnel and operations that continued regardless of the launch rate. Accounting for the entire Space Shuttle program budget, the per-launch cost was $1.642 billion (in 2012 dollars).
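These figures are consistent with a simple average: dividing the rounded total program cost by the 135 missions flown reproduces the quoted per-launch figure to within rounding. A quick check:

```python
# Cross-check of the quoted per-launch figure (2012 dollars).
total_program_cost = 221e9       # estimated total program cost
missions_flown = 135             # total Space Shuttle missions

average_per_launch = total_program_cost / missions_flown
print(f"${average_per_launch / 1e9:.3f} billion per launch")  # ~ $1.637 billion

# The much smaller marginal cost of one extra launch per year ($252 million)
# compared with this average reflects how dominant the fixed, year-round
# program costs were.
```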
Disasters
On January 28, 1986, STS-51-L disintegrated 73 seconds after launch, due to the failure of the right SRB, killing all seven astronauts on board Challenger. The disaster was caused by the low-temperature impairment of an O-ring, a mission-critical seal used between segments of the SRB casing. Failure of the O-ring allowed hot combustion gases to escape from between the booster sections and burn through the adjacent ET, leading to a sequence of catastrophic events which caused the orbiter to disintegrate. Repeated warnings from design engineers voicing concerns about the lack of evidence of the O-rings' safety when the temperature was below had been ignored by NASA managers.
On February 1, 2003, Columbia disintegrated during re-entry, killing all seven of the STS-107 crew, because of damage to the carbon-carbon leading edge of the wing sustained during launch. Ground control engineers had made three separate requests for high-resolution images taken by the Department of Defense that would have provided an understanding of the extent of the damage, and NASA's chief TPS engineer requested that astronauts on board Columbia be allowed to leave the vehicle to inspect the damage. NASA managers intervened to stop the Department of Defense's imaging of the orbiter and refused the request for the spacewalk, and thus the feasibility of scenarios for astronaut repair or rescue by Atlantis was not considered by NASA management at the time.
Criticism
The partial reusability of the Space Shuttle was one of the primary design requirements during its initial development. The technical decisions that dictated the orbiter's return and re-use reduced the per-launch payload capability. The original intention was to compensate for this lower payload through lower per-launch costs and a high launch frequency. However, the actual costs of a Space Shuttle launch were higher than initially predicted, and the Space Shuttle did not fly the 24 missions per year initially predicted by NASA.
The Space Shuttle was originally intended as a launch vehicle to deploy satellites, and it was primarily used for this on the missions prior to the Challenger disaster. NASA's pricing, which was below cost, was lower than that of expendable launch vehicles; the intention was that the high volume of Space Shuttle missions would compensate for early financial losses. The improvement of expendable launch vehicles and the transition away from commercial payloads on the Space Shuttle resulted in expendable launch vehicles becoming the primary deployment option for satellites. A key customer for the Space Shuttle was the National Reconnaissance Office (NRO), responsible for spy satellites. The existence of the NRO's connection to the program was classified through 1993, and secret consideration of NRO payload requirements contributed to a lack of transparency in the program. The proposed Shuttle-Centaur program, cancelled in the wake of the Challenger disaster, would have pushed the spacecraft beyond its operational capacity.
The fatal Challenger and Columbia disasters demonstrated the safety risks of the Space Shuttle that could result in the loss of the crew. The spaceplane design of the orbiter limited the abort options, as the abort scenarios required the controlled flight of the orbiter to a runway or to allow the crew to egress individually, rather than the abort escape options on the Apollo and Soyuz space capsules. Early safety analyses advertised by NASA engineers and management predicted the chance of a catastrophic failure resulting in the death of the crew as ranging from 1 in 100 launches to as rare as 1 in 100,000. Following the loss of two Space Shuttle missions, the risks for the initial missions were reevaluated, and the chance of a catastrophic loss of the vehicle and crew was found to be as high as 1 in 9. NASA management was criticized afterwards for accepting increased risk to the crew in exchange for higher mission rates. Both the Challenger and Columbia reports explained that NASA culture had failed to keep the crew safe by not objectively evaluating the potential risks of the missions.
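For comparison with the estimates above, the observed loss rate over the whole program can be computed directly from the flight record of two vehicle losses in 135 missions:

```python
# Observed catastrophic-failure rate across the program.
losses = 2          # Challenger (STS-51-L) and Columbia (STS-107)
missions = 135      # total Space Shuttle missions flown

rate = losses / missions
print(f"about 1 in {1 / rate:.0f} flights")   # roughly 1 in 68
```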
Retirement
The Space Shuttle retirement was announced in January 2004. President George W. Bush announced his Vision for Space Exploration, which called for the retirement of the Space Shuttle once it completed construction of the ISS. To ensure the ISS was properly assembled, the contributing partners determined the need for 16 remaining assembly missions in March 2006. One additional Hubble Space Telescope servicing mission was approved in October 2006. Originally, STS-134 was to be the final Space Shuttle mission. However, the Columbia disaster resulted in additional orbiters being prepared for launch on need in the event of a rescue mission. As Atlantis was prepared for the final launch-on-need mission, the decision was made in September 2010 that it would fly as STS-135 with a four-person crew that could remain at the ISS in the event of an emergency. STS-135 launched on July 8, 2011, and landed at the KSC on July 21, 2011, at 5:57 a.m. EDT (09:57 UTC). From then until the launch of Crew Dragon Demo-2 on May 30, 2020, the US launched its astronauts aboard Russian Soyuz spacecraft.
Following each orbiter's final flight, it was processed to make it safe for display. The OMS and RCS presented the primary dangers because of their toxic hypergolic propellants, and most of their components were permanently removed to prevent any dangerous outgassing. Atlantis is on display at the Kennedy Space Center Visitor Complex in Florida, Discovery is on display at the Steven F. Udvar-Hazy Center in Virginia, Endeavour is on display at the California Science Center in Los Angeles, and Enterprise is displayed at the Intrepid Museum in New York. Components from the orbiters were transferred to the US Air Force, the ISS program, and the Russian and Canadian governments. The engines were removed to be used on the Space Launch System, and spare RS-25 nozzles were attached for display purposes.
| Technology | Crewed spacecraft | null |
28191 | https://en.wikipedia.org/wiki/Snow | Snow | Snow consists of individual ice crystals that grow while suspended in the atmosphere—usually within clouds—and then fall, accumulating on the ground where they undergo further changes. It consists of frozen crystalline water throughout its life cycle, starting when, under suitable conditions, the ice crystals form in the atmosphere, increase to millimeter size, precipitate and accumulate on surfaces, then metamorphose in place, and ultimately melt, slide or sublimate away.
Snowstorms organize and develop by feeding on sources of atmospheric moisture and cold air. Snowflakes nucleate around particles in the atmosphere by attracting supercooled water droplets, which freeze in hexagonal-shaped crystals. Snowflakes take on a variety of shapes, basic among these are platelets, needles, columns and rime. As snow accumulates into a snowpack, it may blow into drifts. Over time, accumulated snow metamorphoses, by sintering, sublimation and freeze-thaw. Where the climate is cold enough for year-to-year accumulation, a glacier may form. Otherwise, snow typically melts seasonally, causing runoff into streams and rivers and recharging groundwater.
Major snow-prone areas include the polar regions, the northernmost half of the Northern Hemisphere and mountainous regions worldwide with sufficient moisture and cold temperatures. In the Southern Hemisphere, snow is confined primarily to mountainous areas, apart from Antarctica.
Snow affects such human activities as transportation: creating the need for keeping roadways, wings, and windows clear; agriculture: providing water to crops and safeguarding livestock; sports such as skiing, snowboarding, and snowmachine travel; and warfare. Snow affects ecosystems, as well, by providing an insulating layer during winter under which plants and animals are able to survive the cold.
Precipitation
Snow develops in clouds that themselves are part of a larger weather system. The physics of snow crystal development in clouds results from a complex set of variables that include moisture content and temperatures. The resulting shapes of the falling and fallen crystals can be classified into a number of basic shapes and combinations thereof. Occasionally, some plate-like, dendritic and stellar-shaped snowflakes can form under clear sky with a very cold temperature inversion present.
Cloud formation
Snow clouds usually occur in the context of larger weather systems, the most important of which is the low-pressure area, which typically incorporates warm and cold fronts as part of its circulation. Two additional and locally productive sources of snow are lake-effect (also sea-effect) storms and elevation effects, especially in mountains.
Low-pressure areas
Mid-latitude cyclones are low-pressure areas which are capable of producing anything from cloudiness and mild snow storms to heavy blizzards. During a hemisphere's fall, winter, and spring, the atmosphere over continents can be cold enough through the depth of the troposphere to cause snowfall. In the Northern Hemisphere, the northern side of the low-pressure area produces the most snow. For the southern mid-latitudes, the side of a cyclone that produces the most snow is the southern side.
Fronts
A cold front, the leading edge of a cooler mass of air, can produce frontal snowsqualls—an intense frontal convective line (similar to a rainband), when temperature is near freezing at the surface. The strong convection that develops has enough moisture to produce whiteout conditions at places which the line passes over as the wind causes intense blowing snow. This type of snowsquall generally lasts less than 30 minutes at any point along its path, but the motion of the line can cover large distances. Frontal squalls may form a short distance ahead of the surface cold front or behind the cold front where there may be a deepening low-pressure system or a series of trough lines which act similar to a traditional cold frontal passage. In situations where squalls develop post-frontally, it is not unusual to have two or three linear squall bands pass in rapid succession separated only by , with each passing the same point roughly 30 minutes apart. In cases where there is a large amount of vertical growth and mixing, the squall may develop embedded cumulonimbus clouds resulting in lightning and thunder which is dubbed thundersnow.
A warm front can produce snow for a period as warm, moist air overrides below-freezing air and creates precipitation at the boundary. Often, snow transitions to rain in the warm sector behind the front.
Lake and ocean effects
Lake-effect snow is produced during cooler atmospheric conditions when a cold air mass moves across long expanses of warmer lake water, warming the lower layer of air which picks up water vapor from the lake, rises up through the colder air above, freezes, and is deposited on the leeward (downwind) shores.
The same effect occurring over bodies of salt water is termed ocean-effect or bay-effect snow. The effect is enhanced when the moving air mass is uplifted by the orographic influence of higher elevations on the downwind shores. This uplifting can produce narrow but very intense bands of precipitation which may deposit at a rate of many inches of snow each hour, often resulting in a large amount of total snowfall.
The areas affected by lake-effect snow are called snowbelts. These include areas east of the Great Lakes, the west coasts of northern Japan, the Kamchatka Peninsula in Russia, and areas near the Great Salt Lake, Black Sea, Caspian Sea, Baltic Sea, and parts of the northern Atlantic Ocean.
Mountain effects
Orographic or relief snowfall is created when moist air is forced up the windward side of mountain ranges by a large-scale wind flow. The lifting of moist air up the side of a mountain range results in adiabatic cooling, and ultimately condensation and precipitation. Moisture is gradually removed from the air by this process, leaving drier and warmer air on the descending, or leeward, side. The resulting enhanced snowfall, along with the decrease in temperature with elevation, combine to increase snow depth and seasonal persistence of snowpack in snow-prone areas.
Mountain waves have also been found to help enhance precipitation amounts downwind of mountain ranges by enhancing the lift needed for condensation and precipitation.
Cloud physics
A snowflake consists of roughly 10^19 water molecules which are added to its core at different rates and in different patterns depending on the changing temperature and humidity within the atmosphere that the snowflake falls through on its way to the ground. As a result, snowflakes differ from each other though they follow similar patterns.
Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. These droplets are able to remain liquid at temperatures lower than , because to freeze, a few molecules in the droplet need to get together by chance to form an arrangement similar to that in an ice lattice. The droplet freezes around this "nucleus". In warmer clouds, an aerosol particle or "ice nucleus" must be present in (or in contact with) the droplet to act as a nucleus. Ice nuclei are very rare compared to cloud condensation nuclei on which liquid droplets form. Clays, desert dust, and biological particles can be nuclei. Artificial nuclei include particles of silver iodide and dry ice, and these are used to stimulate precipitation in cloud seeding.
Once a droplet has frozen, it grows in the supersaturated environment—one where air is saturated with respect to ice when the temperature is below the freezing point. The droplet then grows by diffusion of water molecules in the air (vapor) onto the ice crystal surface where they are collected. Because water droplets are so much more numerous than the ice crystals, the crystals are able to grow to hundreds of micrometers or millimeters in size at the expense of the water droplets by the Wegener–Bergeron–Findeisen process. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white in color due to diffuse reflection of the whole spectrum of light by the small ice particles.
Classification of snowflakes
Micrography of thousands of snowflakes from 1885 onward, starting with Wilson Alwyn Bentley, revealed the wide diversity of snowflakes within a classifiable set of patterns. Closely matching snow crystals have been observed.
Ukichiro Nakaya developed a crystal morphology diagram, relating crystal shapes to the temperature and moisture conditions under which they formed, which is summarized in the following table.
Nakaya discovered that the shape is also a function of whether the prevalent moisture is above or below saturation. Forms below the saturation line tend more toward solid and compact, while crystals formed in supersaturated air tend more toward lacy, delicate, and ornate. Many more complex growth patterns also form, which include side-planes, bullet-rosettes, and planar types, depending on the conditions and ice nuclei. If a crystal has started forming in a column growth regime at around and then falls into the warmer plate-like regime, plate or dendritic crystals sprout at the end of the column, producing so-called "capped columns".
Magono and Lee devised a classification of freshly formed snow crystals that includes 80 distinct shapes. They documented each with micrographs.
Accumulation
Snow accumulates from a series of snow events, punctuated by freezing and thawing, over areas that are cold enough to retain snow seasonally or perennially. Major snow-prone areas include the Arctic and Antarctic, the Northern Hemisphere, and alpine regions. The liquid equivalent of snowfall may be evaluated using a snow gauge or with a standard rain gauge, adjusted for winter by removal of a funnel and inner cylinder. Both types of gauges melt the accumulated snow and report the amount of water collected. At some automatic weather stations an ultrasonic snow depth sensor may be used to augment the precipitation gauge.
Event
Snow flurry, snow shower, snow storm and blizzard describe snow events of progressively greater duration and intensity. A blizzard is a weather condition involving snow and has varying definitions in different parts of the world. In the United States, a blizzard occurs when two conditions are met for a period of three hours or more: a sustained wind or frequent gusts to , and sufficient snow in the air to reduce visibility to less than . In Canada and the United Kingdom, the criteria are similar. While heavy snowfall often occurs during blizzard conditions, falling snow is not a requirement, as blowing snow can create a ground blizzard.
Snowstorm intensity may be categorized by visibility and depth of accumulation. Snowfall's intensity is determined by visibility, as follows:
Light: visibility greater than
Moderate: visibility restrictions between
Heavy: visibility is less than
Snowsqualls may deposit snow in bands that extend from bodies of water as lake-effect weather or result from the passage of an upper-level front.
The International Classification for Seasonal Snow on the Ground defines "height of new snow" as the depth of freshly fallen snow, in centimeters as measured with a ruler, that accumulated on a snowboard during an observation period of 24 hours, or other observation interval. After the measurement, the snow is cleared from the board and the board is placed flush with the snow surface to provide an accurate measurement at the end of the next interval. Melting, compacting, blowing and drifting contribute to the difficulty of measuring snowfall.
Distribution
Glaciers with their permanent snowpacks cover about 10% of the earth's surface, while seasonal snow covers about nine percent, mostly in the Northern Hemisphere, where seasonal snow covers about , according to a 1987 estimate. A 2007 estimate of snow cover over the Northern Hemisphere suggested that, on average, snow cover ranges from a minimum extent of each August to a maximum extent of each January or nearly half of the land surface in that hemisphere. A study of Northern Hemisphere snow cover extent for the period 1972–2006 suggests a reduction of over the 35-year period.
Records
The following are world records regarding snowfall and snowflakes:
Highest seasonal total snowfall – The world record for the highest seasonal total snowfall was measured in the United States at Mt. Baker Ski Area, outside of the city of Bellingham, Washington during the 1998–1999 season. Mount Baker received of snow, thus surpassing the previous record holder, Mount Rainier, Washington, which during the 1971–1972 season received of snow.
Highest seasonal average annual snowfall – The world record for the highest average annual snowfall is , measured in Sukayu Onsen, Japan for the period of 1981–2010.
Largest snowflake – According to Guinness World Records, the world's largest snowflake fell in January 1887 outside present-day Miles City, Montana. It measured in diameter.
The cities (more than 100,000 inhabitants) with the highest annual snowfall are Aomori (792 cm), Sapporo (485 cm) and Toyama (363 cm) in Japan, followed by St. John's (332 cm) and Quebec City (315 cm) in Canada, and Syracuse, NY (325 cm).
Metamorphism
According to the International Association of Cryospheric Sciences, snow metamorphism is "the transformation that the snow undergoes in the period from deposition to either melting or passage to glacial ice". Starting as a powdery deposition, snow becomes more granular when it begins to compact under its own weight, be blown by the wind, sinter particles together and commence the cycle of melting and refreezing. Water vapor plays a role as it deposits ice crystals, known as hoar frost, during cold, still conditions. During this transition, snow "is a highly porous, sintered material made up of a continuous ice structure and a continuously connected pore space, forming together the snow microstructure". Almost always near its melting temperature, a snowpack is continually transforming these properties wherein all three phases of water may coexist, including liquid water partially filling the pore space. After deposition, snow progresses on one of two paths that determine its fate, either by ablation (mostly by melting) from a snow fall or seasonal snowpack, or by transitioning from firn (multi-year snow) into glacier ice.
Seasonal
Over the course of time, a snowpack may settle under its own weight until its density is approximately 30% of water. Increases in density above this initial compression occur primarily by melting and refreezing, caused by temperatures above freezing or by direct solar radiation. In colder climates, snow lies on the ground all winter. By late spring, snow densities typically reach a maximum of 50% of water. Snow that persists into summer evolves into névé, granular snow that has been partially melted, refrozen and compacted. Névé has a minimum density of about 500 kg/m³, which is roughly half the density of liquid water.
Firn
Firn is snow that has persisted for multiple years and has been recrystallized into a substance denser than névé, yet less dense and hard than glacial ice. Firn resembles caked sugar and is very resistant to shovelling. Its density generally ranges from , and it can often be found underneath the snow that accumulates at the head of a glacier. The minimum altitude that firn accumulates on a glacier is called the firn limit, firn line or snowline.
Movement
There are four main mechanisms for movement of deposited snow: drifting of unsintered snow, avalanches of accumulated snow on steep slopes, snowmelt during thaw conditions, and the movement of glaciers after snow has persisted for multiple years and metamorphosed into glacier ice.
Drifting
Powdery snow can drift with the wind from the location where it originally fell, forming deposits several meters deep in isolated locations. After attaching to hillsides, blown snow can evolve into a snow slab, which is an avalanche hazard on steep slopes.
Avalanche
An avalanche (also called a snowslide or snowslip) is a rapid flow of snow down a sloping surface. Avalanches are typically triggered in a starting zone by a mechanical failure in the snowpack (slab avalanche) when the forces on the snow exceed its strength, but sometimes only by gradual widening (loose snow avalanche). After initiation, avalanches usually accelerate rapidly and grow in mass and volume as they entrain more snow. If the avalanche moves fast enough, some of the snow may mix with the air, forming a powder snow avalanche, which is a type of gravity current. Avalanches occur in three major forms:
Slab avalanches occur in snow that has been deposited, or redeposited by wind. They have the characteristic appearance of a block (slab) of snow cut out from its surroundings by fractures. These account for most back-country fatalities.
Powder snow avalanches result from a deposition of fresh dry powder and generate a powder cloud, which overlies a dense avalanche. They can exceed speeds of , and masses of ; their flows can travel long distances along flat valley bottoms and even uphill for short distances.
Wet snow avalanches are a low-velocity suspension of snow and water, with the flow confined to the surface of the pathway. The low speed of travel is due to the friction between the sliding surface of the pathway and the water-saturated flow. Despite the low speed of travel (~), wet snow avalanches are capable of generating powerful destructive forces, due to their large mass and density.
Melting
Many rivers originating in mountainous or high-latitude regions receive a significant portion of their flow from snowmelt. This often makes the river's flow highly seasonal, resulting in periodic flooding during the spring months and, at least in dry mountainous regions such as the mountain West of the US or most of Iran and Afghanistan, very low flow for the rest of the year. In contrast, if much of the melt is from glaciated or nearly glaciated areas, the melt continues through the warm season, with peak flows occurring in mid to late summer.
Glaciers
Glaciers form where the accumulation of snow and ice exceeds ablation. The area in which an alpine glacier forms is called a cirque (corrie or cwm), a typically armchair-shaped geological feature, which collects snow and where the snowpack compacts under the weight of successive layers of accumulating snow, forming névé. Further crushing of the individual snow crystals and reduction of entrapped air in the snow turns it into glacial ice. This glacial ice will fill the cirque until it overflows through a geological weakness or an escape route, such as the gap between two mountains. When the mass of snow and ice is sufficiently thick, it begins to move due to a combination of surface slope, gravity and pressure. On steeper slopes, this can occur with as little as of snow-ice.
Science
Scientists study snow at a wide variety of scales that include the physics of chemical bonds and clouds; the distribution, accumulation, metamorphosis, and ablation of snowpacks; and the contribution of snowmelt to river hydraulics and ground hydrology. In doing so, they employ a variety of instruments to observe and measure the phenomena studied. Their findings contribute to knowledge applied by engineers, who adapt vehicles and structures to snow, by agronomists, who address the availability of snowmelt to agriculture, and by those who design equipment for sporting activities on snow. Scientists develop and others employ snow classification systems that describe its physical properties at scales ranging from the individual crystal to the aggregated snowpack. A sub-specialty is avalanches, which are of concern to engineers and outdoor sportspeople alike.
Snow science addresses how snow forms, its distribution, and processes affecting how snowpacks change over time. Scientists improve storm forecasting, study global snow cover and its effect on climate, glaciers, and water supplies around the world. The study includes physical properties of the material as it changes, bulk properties of in-place snow packs, and the aggregate properties of regions with snow cover. In doing so, they employ on-the-ground physical measurement techniques to establish ground truth and remote sensing techniques to develop understanding of snow-related processes over large areas.
Measurement and classification
In the field, snow scientists often excavate a snow pit within which to make basic measurements and observations. Observations can describe features caused by wind, water percolation, or snow unloading from trees. Water percolation into a snowpack can create flow fingers and ponding or flow along capillary barriers, which can refreeze into horizontal and vertical solid ice formations within the snowpack. Among the measurements of the properties of snowpacks that the International Classification for Seasonal Snow on the Ground includes are: snow height, snow water equivalent, snow strength, and extent of snow cover. Each has a designation with code and detailed description. The classification extends the prior classifications of Nakaya and his successors to related types of precipitation and is quoted in the following table:
All are formed in cloud, except for rime, which forms on objects exposed to supercooled moisture.
It also has a more extensive classification of deposited snow than those that pertain to airborne snow. The categories include both natural and man-made snow types, descriptions of snow crystals as they metamorphose and melt, the development of hoar frost in the snow pack and the formation of ice therein. Each such layer of a snowpack differs from the adjacent layers by one or more characteristics that describe its microstructure or density, which together define the snow type, and other physical properties. Thus, at any one time, the type and state of the snow forming a layer have to be defined because its physical and mechanical properties depend on them. Physical properties include microstructure, grain size and shape, snow density, liquid water content, and temperature.
When it comes to measuring snow cover on the ground, typically three variables are measured: snow cover extent (SCE), the land area covered by snow; snow cover duration (SD), how long a particular area is covered by snow; and snow accumulation, often expressed as snow water equivalent (SWE), the depth of water that would result if the snowpack melted completely. This last variable measures the amount of water stored in the snowpack. To measure these variables a variety of techniques are used: surface observations, remote sensing, land surface models and reanalysis products. These techniques are often combined to form the most complete datasets.
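As a minimal sketch of the depth-to-water conversion behind SWE, the water equivalent follows directly from snow depth and the bulk density of the snowpack; the function and variable names below are illustrative rather than taken from any particular standard.

    # SWE: depth of water that would result if the snowpack melted completely.
    WATER_DENSITY = 1000.0  # kg/m^3

    def snow_water_equivalent_mm(depth_m, snow_density_kg_m3):
        """Return SWE in millimetres of water for a snowpack of given depth and bulk density."""
        swe_m = depth_m * snow_density_kg_m3 / WATER_DENSITY
        return swe_m * 1000.0  # metres of water -> millimetres

    # Example: 0.8 m of settled seasonal snow at 300 kg/m^3 holds 240 mm of water.
    print(snow_water_equivalent_mm(0.8, 300.0))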
Satellite data
Remote sensing of snowpacks with satellites and other platforms typically includes multi-spectral collection of imagery. Multi-faceted interpretation of the data obtained allows inferences about what is observed. The science behind these remote observations has been verified with ground-truth studies of the actual conditions.
Satellite observations record a decrease in snow-covered areas since the 1960s, when satellite observations began. In some regions such as China, a trend of increasing snow cover was observed from 1978 to 2006. These changes are attributed to global climate change, which may lead to earlier melting and less coverage area. In some areas, however, snow depth has increased, notably in latitudes north of 40°, because warmer air, while still cold enough for snow, carries more moisture. For the Northern Hemisphere as a whole the mean monthly snow-cover extent has been decreasing by 1.3% per decade.
The most frequently used methods to map and measure snow extent, snow depth and snow water equivalent employ multiple inputs on the visible–infrared spectrum to deduce the presence and properties of snow. The National Snow and Ice Data Center (NSIDC) uses the reflectance of visible and infrared radiation to calculate a normalized difference snow index, which is a ratio of radiation parameters that can distinguish between clouds and snow. Other researchers have developed decision trees, employing the available data to make more accurate assessments. One challenge to this assessment is where snow cover is patchy, for example during periods of accumulation or ablation and also in forested areas. Cloud cover inhibits optical sensing of surface reflectance, which has led to other methods for estimating ground conditions underneath clouds. For hydrological models, it is important to have continuous information about the snow cover. Passive microwave sensors are especially valuable for temporal and spatial continuity because they can map the surface beneath clouds and in darkness. When combined with reflective measurements, passive microwave sensing greatly extends the inferences possible about the snowpack.
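A short sketch of the normalized difference snow index mentioned above: it contrasts a visible (green) band, where snow is bright, with a shortwave-infrared band, where snow is dark but most clouds remain bright. The band values and the 0.4 threshold below are illustrative; operational products use sensor-specific bands and additional tests.

    import numpy as np

    def ndsi(green_reflectance, swir_reflectance):
        """Normalized difference snow index: (green - SWIR) / (green + SWIR)."""
        return (green_reflectance - swir_reflectance) / (green_reflectance + swir_reflectance)

    green = np.array([0.80, 0.30])  # a bright snow pixel and a bare-ground pixel (made-up values)
    swir = np.array([0.10, 0.25])
    snow_mask = ndsi(green, swir) > 0.4  # 0.4 is a commonly used, but not universal, cut-off
    print(snow_mask)  # [ True False]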
Satellite measurements show that snow cover has been decreasing in many areas of the world since 1978.
Models
Snow science often leads to predictive models that include snow deposition, snow melt, and snow hydrology—elements of the Earth's water cycle—which help describe global climate change.
Global climate change models (GCMs) incorporate snow as a factor in their calculations. Some important aspects of snow cover include its albedo (reflectivity of incident radiation, including light) and insulating qualities, which slow the rate of seasonal melting of sea ice. As of 2011, the melt phase of GCM snow models was thought to perform poorly in regions with complex factors that regulate snow melt, such as vegetation cover and terrain. These models typically derive snow water equivalent (SWE) in some manner from satellite observations of snow cover. The International Classification for Seasonal Snow on the Ground defines SWE as "the depth of water that would result if the mass of snow melted completely".
Given the importance of snowmelt to agriculture, hydrological runoff models that include snow in their predictions address the phases of accumulating snowpack, melting processes, and distribution of the meltwater through stream networks and into the groundwater. Key to describing the melting processes are solar heat flux, ambient temperature, wind, and precipitation. Initial snowmelt models used a degree-day approach that emphasized the temperature difference between the air and the snowpack to compute snow water equivalent, SWE. More recent models use an energy balance approach to compute Qm, the energy available for melt; this requires measurement of an array of snowpack and environmental variables to evaluate the six heat flow mechanisms that contribute to Qm.
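A minimal sketch of the degree-day approach described above; the melt factor and temperatures are placeholder values, since in practice the factor is calibrated per site and season.

    def degree_day_melt(mean_air_temp_c, degree_day_factor=3.0, base_temp_c=0.0):
        """Daily melt in mm of water equivalent: melt = DDF * max(T_air - T_base, 0).
        The degree-day factor DDF (mm per degree-day) is a calibrated, site-specific parameter."""
        return degree_day_factor * max(mean_air_temp_c - base_temp_c, 0.0)

    # One week of mean daily air temperatures (degrees C) over a melting snowpack.
    daily_temps = [-2.0, 1.0, 3.5, 4.0, 0.5, -1.0, 2.0]
    total_melt_mm = sum(degree_day_melt(t) for t in daily_temps)
    print(total_melt_mm)  # 33.0 mm of water equivalent removed from the snowpack's SWE

Energy-balance models replace the single calibrated factor with explicit terms for radiation, turbulent exchange, and heat from rain, at the cost of needing many more measured inputs.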
Effects on civilization
Snow routinely affects civilization in four major areas: transportation, agriculture, structures, and sports. Most transportation modes are impeded by snow on the travel surface. Agriculture often relies on snow as a source of seasonal moisture. Structures may fail under snow loads. Humans find a wide variety of recreational activities in snowy landscapes. Snow also affects the conduct of warfare.
Transportation
Snow affects the rights of way of highways, airfields and railroads. The snowplow is common to all three, but the modes differ in other respects: roadways are treated with anti-icing chemicals to prevent the bonding of ice, airfields may not be, and railroads rely on abrasives for track traction.
Highway
In the late 20th century, an estimated $2 billion was spent annually in North America on roadway winter maintenance, owing to snow and other winter weather events, according to a 1994 report by Kuemmel. The study surveyed the practices of jurisdictions within 44 US states and nine Canadian provinces. It assessed the policies, practices, and equipment used for winter maintenance. It found similar practices and progress to be prevalent in Europe.
The dominant effect of snow on vehicle contact with the road is diminished friction. This can be improved with the use of snow tires, which have a tread designed to compact snow in a manner that enhances traction. The key to maintaining a roadway that can accommodate traffic during and after a snow event is an effective anti-icing program that employs both chemicals and plowing. The Federal Highway Administration Manual of Practice for an Effective Anti-icing Program emphasizes "anti-icing" procedures that prevent the bonding of snow and ice to the road. Key aspects of the practice include understanding anti-icing in light of the level of service to be achieved on a given roadway, the climatic conditions to be encountered, and the different roles of deicing, anti-icing, and abrasive materials and applications; and employing three anti-icing "toolboxes", one for operations, one for decision-making and one for personnel. The elements of the toolboxes are:
Operations – Addresses the application of solid and liquid chemicals, using various techniques, including prewetting of chloride-salts. It also addresses plowing capability, including types of snowplows and blades used.
Decision-making – Combines weather forecast information with road information to assess the upcoming needs for application of assets and the evaluation of treatment effectiveness with operations underway.
Personnel – Addresses training and deployment of staff to effectively execute the anti-icing program, using the appropriate materials, equipment and procedures.
The manual offers matrices that address different types of snow and the rate of snowfall to tailor applications appropriately and efficiently.
Snow fences, constructed upwind of roadways, control snow drifting by causing windblown, drifting snow to accumulate in a desired place. They are also used on railways. Additionally, farmers and ranchers use snow fences to create drifts in basins for a ready supply of water in the spring.
Aviation
In order to keep airports open during winter storms, runways and taxiways require snow removal. Unlike roadways, where chloride chemical treatment is common to prevent snow from bonding to the pavement surface, such chemicals are typically banned from airports because of their strong corrosive effect on aluminum aircraft. Consequently, mechanical brushes are often used to complement the action of snow plows. Given the width of runways on airfields that handle large aircraft, vehicles with large plow blades, an echelon of plow vehicles or rotary snowplows are used to clear snow on runways and taxiways. Terminal aprons may require or more to be cleared.
Properly equipped aircraft are able to fly through snowstorms under instrument flight rules. Prior to takeoff during snowstorms, they require deicing fluid to prevent accumulation and freezing of snow and other precipitation on wings and fuselages, which may compromise the safety of the aircraft and its occupants. In flight, aircraft rely on a variety of mechanisms to avoid rime and other types of icing in clouds; these include pulsing pneumatic boots, electro-thermally heated surfaces, and fluid deicers that bleed onto the surface.
Rail
Railroads have traditionally employed two types of snow plows for clearing track: the wedge plow, which casts snow to both sides, and the rotary snowplow, which is suited to addressing heavy snowfall and casting snow far to one side or the other. Prior to the invention of the rotary snowplow ca. 1865, it required multiple locomotives to drive a wedge plow through deep snow. Subsequent to clearing the track with such plows, a "flanger" is used to clear the snow between the rails that lies below the reach of the other types of plow. Where icing may affect the steel-to-steel contact of locomotive wheels on track, abrasives (typically sand) have been used to provide traction on steeper grades.
Railroads employ snow sheds, structures that cover the track, to prevent heavy snow accumulation or avalanches from covering tracks in snowy mountainous areas, such as the Alps and the Rocky Mountains.
Construction
Snow can be compacted to form a snow road and be part of a winter road route for vehicles to access isolated communities or construction projects during the winter. Snow can also be used to provide the supporting structure and surface for a runway, as with the Phoenix Airfield in Antarctica. The snow-compacted runway is designed to withstand approximately 60 wheeled flights of heavy-lift military aircraft a year.
Agriculture
Snowfall can be beneficial to agriculture by serving as a thermal insulator, conserving the heat of the Earth and protecting crops from subfreezing weather. Some agricultural areas depend on an accumulation of snow during winter that will melt gradually in spring, providing water for crop growth, both directly and via runoff through streams and rivers, which supply irrigation canals. The following are examples of rivers that rely on meltwater from glaciers or seasonal snowpack as an important part of their flow on which irrigation depends: the Ganges, many of whose tributaries rise in the Himalayas and which provide much irrigation in northeast India, the Indus River, which rises in Tibet and provides irrigation water to Pakistan from rapidly retreating Tibetan glaciers, and the Colorado River, which receives much of its water from seasonal snowpack in the Rocky Mountains and provides irrigation water to some .
Structures
Snow is an important consideration for loads on structures. To address these, European countries employ Eurocode 1: Actions on structures - Part 1-3: General actions - Snow loads. In North America, ASCE Minimum Design Loads for Buildings and Other Structures gives guidance on snow loads. Both standards employ methods that translate maximum expected ground snow loads onto design loads for roofs.
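As a rough illustration of how such standards translate ground snow loads into a roof design load, the ASCE 7 family uses a flat-roof formula of the form p_f = 0.7 · Ce · Ct · Is · p_g. The coefficient values in this sketch are placeholders; real designs must take exposure, thermal, importance, drift and shape factors from the standard itself.

    def flat_roof_snow_load(ground_snow_load, exposure=1.0, thermal=1.0, importance=1.0):
        """Flat-roof snow load, p_f = 0.7 * Ce * Ct * Is * p_g (ASCE 7-style form).
        Works in any consistent pressure unit (kPa or lb/ft^2); the factor values
        here are placeholders, not values taken from the standard's tables."""
        return 0.7 * exposure * thermal * importance * ground_snow_load

    print(flat_roof_snow_load(1.5))                # heated, normally exposed building: 1.05 kPa
    print(flat_roof_snow_load(1.5, thermal=1.2))   # an unheated structure sees a higher load: 1.26 kPa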
Roofs
Snow loads and icings are two principal issues for roofs. Snow loads are related to the climate in which a structure is sited. Icings are usually a result of the building or structure generating heat that melts the snow that is on it.
Snow loads – The Minimum Design Loads for Buildings and Other Structures gives guidance on how to translate the following factors into roof snow loads:
Ground snow loads
Exposure of the roof
Thermal properties of the roof
Shape of the roof
Drifting
Importance of the building
It gives tables for ground snow loads by region and a methodology for computing ground snow loads that may vary with elevation from nearby, measured values. The Eurocode 1 uses similar methodologies, starting with ground snow loads that are tabulated for portions of Europe.
Icings – Roofs must also be designed to avoid ice dams, which form when meltwater runs down the roof under the insulating blanket of snow and refreezes when it reaches below-freezing air, typically at the eaves. The ice accumulates into a dam, and snow that melts later cannot drain properly past it. Ice dams may result in damaged building materials or in damage or injury when the ice dam falls off or from attempts to remove ice dams. The melting results from heat passing through the roof under the highly insulating layer of snow.
Utility lines
In areas with trees, utility distribution lines on poles are damaged less by snow loads themselves than by trees felled onto them by heavy, wet snow. Elsewhere, snow can accrete on power lines as "sleeves" of rime ice. Engineers design for such loads, which are measured in kg/m (lb/ft), and power companies have forecasting systems that anticipate types of weather that may cause such accretions. Rime ice may be removed manually or by creating a sufficient short circuit in the affected segment of power lines to melt the accretions.
Sports and recreation
Snow figures into many winter sports and forms of recreation, including skiing and sledding. Common examples include cross-country skiing, Alpine skiing, snowboarding, snowshoeing, and snowmobiling. The design of the equipment used, e.g. skis and snowboards, typically relies on the bearing strength of snow and on the coefficient of friction between the equipment and the snow.
Skiing is by far the largest form of winter recreation. As of 1994, of the estimated 65–75 million skiers worldwide, there were approximately 55 million who engaged in Alpine skiing, the rest engaged in cross-country skiing. Approximately 30 million skiers (of all kinds) were in Europe, 15 million in the US, and 14 million in Japan. As of 1996, there were reportedly 4,500 ski areas, operating 26,000 ski lifts and enjoying 390 million skier visits per year. The preponderant region for downhill skiing was Europe, followed by Japan and the US.
Increasingly, ski resorts are relying on snowmaking, the production of snow by forcing water and pressurized air through a snow gun on ski slopes. Snowmaking is mainly used to supplement natural snow at ski resorts. This allows them to improve the reliability of their snow cover and to extend their ski seasons from late autumn to early spring. The production of snow requires low temperatures. The threshold temperature for snowmaking increases as humidity decreases. Wet-bulb temperature is used as a metric since it takes air temperature and relative humidity into account. Snowmaking is a relatively expensive process in its energy consumption, thereby limiting its use.
Ski wax enhances the ability of a ski (or other runner) to slide over snow by reducing its coefficient of friction. That friction depends on the properties of both the snow and the ski, which together determine how much lubricating meltwater is produced by friction between ski and snow: too little, and the ski interacts with solid snow crystals; too much, and capillary attraction of the meltwater retards the ski. Before a ski can slide, it must overcome the maximum value of static friction. Kinetic (or dynamic) friction occurs when the ski is moving over the snow.
Warfare
Snow affects warfare conducted in winter, alpine environments or at high latitudes. The main factors are impaired visibility for acquiring targets during falling snow, enhanced visibility of targets against snowy backgrounds for targeting, and mobility for both mechanized and infantry troops. Snowfall can severely inhibit the logistics of supplying troops, as well. Snow can also provide cover and fortification against small-arms fire. Noted winter warfare campaigns where snow and other factors affected the operations include:
The French invasion of Russia, where poor traction conditions for ill-shod horses made it difficult for supply wagons to keep up with troops. That campaign was also strongly affected by cold: the retreating army reached the Neman River in December 1812 with only 10,000 of the 420,000 men that had set out to invade Russia in June of the same year.
The Winter War, an attempt by the Soviet Union to take territory in Finland in late 1939, demonstrated the superior winter tactics of the Finnish Army regarding over-snow mobility, camouflage, and use of the terrain.
The Battle of the Bulge, a German counteroffensive during World War II starting December 16, 1944, was marked by heavy snowstorms that hampered Allied air support for ground troops, but also impaired German attempts to supply their front lines. On the Eastern Front, during the Nazi invasion of Russia in 1941, Operation Barbarossa, both Russian and German soldiers had to endure terrible conditions during the Russian winter. While use of ski infantry was common in the Red Army, Germany formed only one division for movement on skis.
The Korean War which lasted from June 25, 1950, until an armistice on July 27, 1953, began when North Korea invaded South Korea. Much of the fighting occurred during winter conditions, involving snow, notably during the Battle of Chosin Reservoir, which was a stark example of cold affecting military operations, especially vehicles and weapons.
Effects on plants and animals
Plants and animals endemic to snowbound areas develop ways to adapt. Among the adaptive mechanisms for plants are freeze-adaptive chemistry, dormancy, seasonal dieback, and survival of seeds; for animals, they include hibernation, insulation, anti-freeze chemistry, food storage, drawing on reserves from within the body, and clustering for mutual heat.
Snow interacts with vegetation in two principal ways: vegetation can influence the deposition and retention of snow and, conversely, the presence of snow can affect the distribution and growth of vegetation. Tree branches, especially of conifers, intercept falling snow and prevent accumulation on the ground. Snow suspended in trees ablates more rapidly than that on the ground, owing to its greater exposure to sun and air movement. Trees and other plants can also promote snow retention on the ground, which would otherwise be blown elsewhere or melted by the sun. Snow affects vegetation in several ways: the presence of stored water can promote growth, yet the annual onset of growth is dependent on the departure of the snowpack for those plants that are buried beneath it. Furthermore, avalanches and erosion from snowmelt can scour terrain of vegetation.
Snow supports a wide variety of animals both on the surface and beneath. Many invertebrates thrive in snow, including spiders, wasps, beetles, snow scorpionflies and springtails. Such arthropods are typically active at temperatures down to . With regard to surviving subfreezing temperatures, invertebrates fall into two groups: those that are freeze-resistant and those that avoid freezing because they are freeze-sensitive. The first group may be cold hardy owing to the ability to produce antifreeze agents in their body fluids, which allows survival of long exposure to sub-freezing conditions. Some organisms fast during the winter, which expels freezing-sensitive contents from their digestive tracts. The ability to survive the absence of oxygen in ice is an additional survival mechanism.
Small vertebrates are active beneath the snow. Among vertebrates, alpine salamanders are active in snow at temperatures as low as ; they burrow to the surface in springtime and lay their eggs in melt ponds. Among mammals, those that remain active are typically smaller than . Omnivores are more likely to enter a torpor or be hibernators, whereas herbivores are more likely to maintain food caches beneath the snow. Voles store up to of food and pikas up to . Voles also huddle in communal nests to benefit from one another's warmth. On the surface, wolves, coyotes, foxes, lynx, and weasels rely on these subsurface dwellers for food and often dive into the snowpack to find them.
Outside of Earth
Extraterrestrial "snow" includes water-based precipitation, but also precipitation of other compounds prevalent on other planets and moons in the Solar System. Examples are:
On Mars, observations by the Phoenix Mars lander revealed that water-based snow crystals occur at high latitudes. Additionally, carbon dioxide precipitates from clouds during the Martian winters at the poles and contributes to a seasonal deposit of that compound, the principal component of the planet's seasonal ice caps.
On Venus, observations from the Magellan spacecraft revealed the presence of a metallic substance, which precipitates as "Venus snow" and leaves a highly reflective deposit at the tops of Venus's highest mountain peaks, resembling terrestrial snow. Given the high temperatures on Venus, the leading candidates for the precipitate are lead sulfide and bismuth(III) sulfide.
On Saturn's moon, Titan, Cassini–Huygens spacecraft observations suggested the presence of methane or some other form of hydrocarbon-based crystalline deposits.
On Pluto, New Horizons' observations showed that methane condenses at high altitude and falls down as frost.
| Physical sciences | Earth science | null |
28212 | https://en.wikipedia.org/wiki/Skewness | Skewness | In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.
For a unimodal distribution (a distribution with a single peak), negative skew commonly indicates that the tail is on the left side of the distribution, and positive skew indicates that the tail is on the right. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value in skewness means that the tails on both sides of the mean balance out overall; this is the case for a symmetric distribution but can also be true for an asymmetric distribution where one tail is long and thin, and the other is short but fat. Thus, the judgement on the symmetry of a given distribution by using only its skewness is risky; the distribution shape must be taken into account.
Introduction
Consider the two distributions in the figure just below. Within each graph, the values on the right side of the distribution taper differently from the values on the left side. These tapering sides are called tails, and they provide a visual means to determine which of the two kinds of skewness a distribution has:
Negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. A left-skewed distribution usually appears as a right-leaning curve.
Positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve.
Skewness in a data series may sometimes be observed not only graphically but by simple inspection of the values. For instance, consider the numeric sequence (49, 50, 51), whose values are evenly distributed around a central value of 50. We can transform this sequence into a negatively skewed distribution by adding a value far below the mean, which is probably a negative outlier, e.g. (40, 49, 50, 51). Therefore, the mean of the sequence becomes 47.5, and the median is 49.5. Based on the formula of nonparametric skew, defined as $(\mu - \nu)/\sigma$, the skew is negative. Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5.
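A quick numeric check of this example (a throwaway script that only reproduces the means, medians and the sign of the nonparametric skew quoted above):

    import statistics

    for sample in [(40, 49, 50, 51), (49, 50, 51, 60)]:
        mean = statistics.mean(sample)
        median = statistics.median(sample)
        sd = statistics.pstdev(sample)                      # population standard deviation
        print(sample, mean, median, (mean - median) / sd)   # nonparametric skew (mu - nu) / sigma

The first sample gives a negative value and the second a positive one, matching the signs described above.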
As mentioned earlier, a unimodal distribution with zero value of skewness does not imply that this distribution is symmetric necessarily. However, a symmetric unimodal or multimodal distribution always has zero skewness.
Relationship of mean and median
The skewness is not directly related to the relationship between the mean and median: a distribution with negative skew can have its mean greater than or less than the median, and likewise for positive skew.
In the older notion of nonparametric skew, defined as $(\mu - \nu)/\sigma$, where $\mu$ is the mean, $\nu$ is the median, and $\sigma$ is the standard deviation, the skewness is defined in terms of this relationship: positive/right nonparametric skew means the mean is greater than (to the right of) the median, while negative/left nonparametric skew means the mean is less than (to the left of) the median. However, the modern definition of skewness and the traditional nonparametric definition do not always have the same sign: while they agree for some families of distributions, they differ in some of the cases, and conflating them is misleading.
If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness. If the distribution is both symmetric and unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1,2,3,4,... Note, however, that the converse is not true in general, i.e. zero skewness (defined below) does not imply that the mean is equal to the median.
A 2005 journal article points out: "Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median."
For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed.
Definition
Fisher's moment coefficient of skewness
The skewness of a random variable X is the third standardized moment $\tilde{\mu}_3$, defined as: $\tilde{\mu}_3 = \operatorname{E}\!\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right] = \frac{\mu_3}{\sigma^3} = \frac{\operatorname{E}\!\left[(X-\mu)^3\right]}{\left(\operatorname{E}\!\left[(X-\mu)^2\right]\right)^{3/2}} = \frac{\kappa_3}{\kappa_2^{3/2}}$
where $\mu$ is the mean, $\sigma$ is the standard deviation, $\operatorname{E}$ is the expectation operator, $\mu_3$ is the third central moment, and $\kappa_t$ are the $t$-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness, but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant $\kappa_3$ to the 1.5th power of the second cumulant $\kappa_2$. This is analogous to the definition of kurtosis as the fourth cumulant normalized by the square of the second cumulant.
The skewness is also sometimes denoted Skew[X].
If $\sigma$ is finite and $\mu$ is finite too, then skewness can be expressed in terms of the non-central moment $\operatorname{E}[X^3]$ by expanding the previous formula: $\tilde{\mu}_3 = \frac{\operatorname{E}[X^3] - 3\mu\sigma^2 - \mu^3}{\sigma^3}$
Examples
Skewness can be infinite, as for heavy-tailed distributions whose third cumulant is infinite, or it can be undefined, as for distributions whose third cumulant does not exist.
Examples of distributions with finite skewness include the following.
A normal distribution and any other symmetric distribution with finite third moment has a skewness of 0
A half-normal distribution has a skewness just below 1
An exponential distribution has a skewness of 2
A lognormal distribution can have a skewness of any positive value, depending on its parameters
Sample skewness
For a sample of n values, two natural estimators of the population skewness are $g_1 = \frac{m_3}{m_2^{3/2}}$ and $b_1 = \frac{m_3}{s^3}$, where $\bar{x}$ is the sample mean, $s$ is the sample standard deviation, $m_2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$ is the (biased) sample second central moment, and $m_3 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^3$ is the (biased) sample third central moment. $g_1$ is a method of moments estimator.
Another common definition of the sample skewness is $G_1 = \frac{k_3}{k_2^{3/2}} = \frac{\sqrt{n(n-1)}}{n-2}\, g_1$, where $k_3$ is the unique symmetric unbiased estimator of the third cumulant and $k_2 = s^2$ is the symmetric unbiased estimator of the second cumulant (i.e. the sample variance). This adjusted Fisher–Pearson standardized moment coefficient $G_1$ is the version found in Excel and several statistical packages including Minitab, SAS and SPSS.
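A short sketch computing these estimators on an arbitrary sample and checking them against SciPy; the data values are made up, and scipy.stats.skew returns $g_1$ by default and the adjusted coefficient when called with bias=False.

    import numpy as np
    from scipy import stats

    x = np.array([2.0, 8.0, 0.0, 4.0, 1.0, 9.0, 9.0, 0.0])
    n = len(x)
    d = x - x.mean()
    m2, m3 = (d**2).mean(), (d**3).mean()        # biased sample central moments
    s = x.std(ddof=1)                            # sample standard deviation

    g1 = m3 / m2**1.5                            # method-of-moments estimator
    b1 = m3 / s**3
    G1 = g1 * np.sqrt(n * (n - 1)) / (n - 2)     # adjusted Fisher-Pearson coefficient

    print(g1, b1, G1)
    print(stats.skew(x), stats.skew(x, bias=False))  # should reproduce g1 and G1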
Under the assumption that the underlying random variable X is normally distributed, it can be shown that all three ratios $g_1$, $b_1$ and $G_1$ are unbiased and consistent estimators of the population skewness $\tilde{\mu}_3 = 0$, with $\sqrt{n}\, g_1 \xrightarrow{d} N(0, 6)$, i.e., their distributions converge to a normal distribution with mean 0 and variance 6 (Fisher, 1930). The variance of the sample skewness is thus approximately $6/n$ for sufficiently large samples. More precisely, in a random sample of size n from a normal distribution, $\operatorname{var}(g_1) = \frac{6(n-2)}{(n+1)(n+3)}$.
In normal samples, has the smaller variance of the three estimators, with
For non-normal distributions, $g_1$, $b_1$ and $G_1$ are generally biased estimators of the population skewness $\tilde{\mu}_3$; their expected values can even have the opposite sign from the true skewness. For instance, a mixed distribution consisting of very thin Gaussians centred at −99, 0.5, and 2 with weights 0.01, 0.66, and 0.33 has a skewness of about −9.77, but in a sample of 3 the sample skewness has an expected value of about 0.32, since usually all three samples are in the positive-valued part of the distribution, which is skewed the other way.
Applications
Skewness is a descriptive statistic that can be used in conjunction with the histogram and the normal quantile plot to characterize the data or distribution.
Skewness indicates the direction and relative magnitude of a distribution's deviation from the normal distribution.
With pronounced skewness, standard statistical inference procedures such as a confidence interval for a mean will be not only incorrect, in the sense that the true coverage level will differ from the nominal (e.g., 95%) level, but they will also result in unequal error probabilities on each side.
Skewness can be used to obtain approximate probabilities and quantiles of distributions (such as value at risk in finance) via the Cornish–Fisher expansion.
Many models assume normal distribution; i.e., data are symmetric about the mean. The normal distribution has a skewness of zero. But in reality, data points may not be perfectly symmetric. So, an understanding of the skewness of the dataset indicates whether deviations from the mean are going to be positive or negative.
D'Agostino's K-squared test is a goodness-of-fit normality test based on sample skewness and sample kurtosis.
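A brief illustration of that test with SciPy, on a synthetic right-skewed sample; the data are made up, and both functions are in scipy.stats.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.exponential(size=500)   # strongly right-skewed sample

    print(stats.skewtest(data))        # tests whether the sample skewness is consistent with normality
    print(stats.normaltest(data))      # D'Agostino's K-squared: combines the skewness and kurtosis tests

Both return a test statistic and a p-value; for a sample this skewed, the p-values are essentially zero, rejecting normality.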
Other measures of skewness
Other measures of skewness have been used, including simpler calculations suggested by Karl Pearson (not to be confused with Pearson's moment coefficient of skewness, see above). These other measures are:
Pearson's first skewness coefficient (mode skewness)
The Pearson mode skewness, or first skewness coefficient, is defined as $\frac{\text{mean} - \text{mode}}{\text{standard deviation}}$.
Pearson's second skewness coefficient (median skewness)
The Pearson median skewness, or second skewness coefficient, is defined as $\frac{3\,(\text{mean} - \text{median})}{\text{standard deviation}}$. It is a simple multiple of the nonparametric skew.
Quantile-based measures
Bowley's measure of skewness (from 1901), also called Yule's coefficient (from 1912), is defined as: $\frac{\frac{Q(3/4) + Q(1/4)}{2} - Q(1/2)}{\frac{Q(3/4) - Q(1/4)}{2}} = \frac{Q(3/4) + Q(1/4) - 2\,Q(1/2)}{Q(3/4) - Q(1/4)},$ where Q is the quantile function (i.e., the inverse of the cumulative distribution function). The numerator is the difference between the average of the upper and lower quartiles (a measure of location) and the median (another measure of location), while the denominator is the semi-interquartile range $\frac{Q(3/4) - Q(1/4)}{2}$, which for symmetric distributions is equal to the MAD measure of dispersion.
Other names for this measure are Galton's measure of skewness, the Yule–Kendall index and the quartile skewness.
Similarly, Kelly's measure of skewness is defined as $\frac{Q(9/10) + Q(1/10) - 2\,Q(1/2)}{Q(9/10) - Q(1/10)}.$
A more general formulation of a skewness function was described by Groeneveld and Meeden (1984), following Hinkley (1975): $\gamma(u) = \frac{Q(u) + Q(1-u) - 2\,Q(1/2)}{Q(u) - Q(1-u)}.$
The function γ(u) satisfies −1 ≤ γ(u) ≤ 1 and is well defined without requiring the existence of any moments of the distribution. Bowley's measure of skewness is γ(u) evaluated at u = 3/4 while Kelly's measure of skewness is γ(u) evaluated at u = 9/10. This definition leads to a corresponding overall measure of skewness defined as the supremum of this over the range 1/2 ≤ u < 1. Another measure can be obtained by integrating the numerator and denominator of this expression.
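A small sketch of this quantile-based family, computed with NumPy on a synthetic right-skewed sample; the data and sample size are arbitrary.

    import numpy as np

    def quantile_skew(x, u):
        """gamma(u) = (Q(u) + Q(1-u) - 2*Q(1/2)) / (Q(u) - Q(1-u)) for 1/2 < u < 1.
        u = 3/4 gives Bowley's (quartile) skewness; u = 9/10 gives Kelly's measure."""
        q_u, q_l, med = np.quantile(x, [u, 1.0 - u, 0.5])
        return (q_u + q_l - 2.0 * med) / (q_u - q_l)

    rng = np.random.default_rng(1)
    sample = rng.lognormal(size=10_000)     # right-skewed sample
    print(quantile_skew(sample, 0.75))      # Bowley / Yule coefficient, positive here
    print(quantile_skew(sample, 0.90))      # Kelly's measure, also positive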
Quantile-based skewness measures are at first glance easy to interpret, but they often show significantly larger sample variations than moment-based methods. This means that often samples from a symmetric distribution (like the uniform distribution) have a large quantile-based skewness, just by chance.
Groeneveld and Meeden's coefficient
Groeneveld and Meeden have suggested, as an alternative measure of skewness, $\operatorname{skew}(X) = \frac{\mu - \nu}{\operatorname{E}\left[\,|X - \nu|\,\right]},$ where $\mu$ is the mean, $\nu$ is the median, $|\cdot|$ is the absolute value, and $\operatorname{E}$ is the expectation operator. This is closely related in form to Pearson's second skewness coefficient.
L-moments
Use of L-moments in place of moments provides a measure of skewness known as the L-skewness.
Distance skewness
A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000. It is called distance skewness and denoted by dSkew. If X is a random variable taking values in the d-dimensional Euclidean space, X has finite expectation, X′ is an independent identically distributed copy of X, and $\|\cdot\|$ denotes the norm in the Euclidean space, then a simple measure of asymmetry with respect to location parameter θ is $\operatorname{dSkew}(X) := 1 - \frac{\operatorname{E}\|X - X'\|}{\operatorname{E}\|X + X' - 2\theta\|}$ whenever X is not equal to θ with probability one,
and dSkew(X) := 0 for X = θ (with probability 1). Distance skewness is always between 0 and 1, equals 0 if and only if X is diagonally symmetric with respect to θ (X and 2θ−X have the same probability distribution) and equals 1 if and only if X is a constant c (with c ≠ θ) with probability one. Thus there is a simple consistent statistical test of diagonal symmetry based on the sample distance skewness, obtained by replacing the expectations with averages over all pairs of sample points.
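A naive sketch of that sample statistic for scalar data (O(n^2) in the number of observations; the samples below are made up, and θ is taken to be the sample median purely for illustration):

    import numpy as np

    def distance_skewness(x, theta):
        """Sample dSkew: 1 - mean_ij |x_i - x_j| / mean_ij |x_i + x_j - 2*theta|."""
        x = np.asarray(x, dtype=float)
        diff = np.abs(x[:, None] - x[None, :]).mean()
        refl = np.abs(x[:, None] + x[None, :] - 2.0 * theta).mean()
        return 1.0 - diff / refl if refl > 0 else 0.0

    sym = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    asym = np.array([0.0, 0.0, 0.0, 0.0, 10.0])
    print(distance_skewness(sym, theta=np.median(sym)))    # 0.0 for a sample symmetric about theta
    print(distance_skewness(asym, theta=np.median(asym)))  # positive (0.2) for an asymmetric sample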
Medcouple
The medcouple is a scale-invariant robust measure of skewness, with a breakdown point of 25%. It is the median of the values of the kernel function $h(x_i, x_j) = \frac{(x_j - x_m) - (x_m - x_i)}{x_j - x_i}$ taken over all couples $(x_i, x_j)$ such that $x_i \le x_m \le x_j$, where $x_m$ is the median of the sample $x_1, \ldots, x_n$. It can be seen as the median of all possible quantile skewness measures.
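A naive O(n^2) sketch of that definition; pairs with equal values are skipped here, which sidesteps the kernel's special rule for ties at the median, and production code should prefer a library implementation (e.g. statsmodels provides a medcouple function).

    import numpy as np

    def medcouple_naive(x):
        """Median of h(x_i, x_j) = ((x_j - m) - (m - x_i)) / (x_j - x_i)
        over all pairs with x_i <= m <= x_j and x_i != x_j, m being the sample median."""
        x = np.sort(np.asarray(x, dtype=float))
        m = np.median(x)
        lower = x[x <= m]
        upper = x[x >= m]
        vals = [((xj - m) - (m - xi)) / (xj - xi)
                for xi in lower for xj in upper if xj != xi]
        return float(np.median(vals))

    print(medcouple_naive([1, 2, 3, 4, 20]))   # positive: the right tail is pulled out by the outlier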
| Mathematics | Probability | null |
28219 | https://en.wikipedia.org/wiki/Spontaneous%20emission | Spontaneous emission | Spontaneous emission is the process in which a quantum mechanical system (such as a molecule, an atom or a subatomic particle) transits from an excited energy state to a lower energy state (e.g., its ground state) and emits a quantized amount of energy in the form of a photon. Spontaneous emission is ultimately responsible for most of the light we see all around us; it is so ubiquitous that there are many names given to what is essentially the same process. If atoms (or molecules) are excited by some means other than heating, the spontaneous emission is called luminescence. For example, fireflies are luminescent. And there are different forms of luminescence depending on how excited atoms are produced (electroluminescence, chemiluminescence etc.). If the excitation is effected by the absorption of radiation the spontaneous emission is called fluorescence. Sometimes molecules have a metastable level and continue to fluoresce long after the exciting radiation is turned off; this is called phosphorescence. Figurines that glow in the dark are phosphorescent. Lasers start via spontaneous emission, then during continuous operation work by stimulated emission.
Spontaneous emission cannot be explained by classical electromagnetic theory and is fundamentally a quantum process. The first person to correctly predict the phenomenon of spontaneous emission was Albert Einstein in a series of papers starting in 1916, culminating in what is now called the Einstein A Coefficient. Einstein's quantum theory of radiation anticipated ideas later expressed in quantum electrodynamics and quantum optics by several decades. Later, after the formal discovery of quantum mechanics in 1926, the rate of spontaneous emission was accurately described from first principles by Dirac in his quantum theory of radiation, the precursor to the theory which he later called quantum electrodynamics. Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. In 1963, the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave the nonintuitive prediction that the rate of spontaneous emission could be controlled depending on the boundary conditions of the surrounding vacuum field. These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections.
Introduction
If a light source ('the atom') is in an excited state with energy $E_2$, it may spontaneously decay to a lower lying level (e.g., the ground state) with energy $E_1$, releasing the difference in energy between the two states as a photon. The photon will have angular frequency $\omega$ and an energy $\hbar\omega = E_2 - E_1$,
where $\hbar$ is the reduced Planck constant. Note: $\hbar\omega = h\nu$, where $h$ is the Planck constant and $\nu$ is the linear frequency. The phase of the photon in spontaneous emission is random, as is the direction in which the photon propagates. This is not true for stimulated emission. An energy level diagram illustrating the process of spontaneous emission is shown below.
If the number of light sources in the excited state at time $t$ is given by $N(t)$, the rate at which $N$ decays is: $\frac{\partial N(t)}{\partial t} = -A_{21}\, N(t),$
where $A_{21}$ is the rate of spontaneous emission. In the rate equation $A_{21}$ is a proportionality constant for this particular transition in this particular light source. The constant is referred to as the Einstein A coefficient, and has units of s$^{-1}$.
The above equation can be solved to give: $N(t) = N(0)\, e^{-\Gamma_{\mathrm{rad}}\, t} = N(0)\, e^{-t/\tau_{21}},$
where $N(0)$ is the initial number of light sources in the excited state, $t$ is the time and $\Gamma_{\mathrm{rad}}$ is the radiative decay rate of the transition. The number of excited states thus decays exponentially with time, similar to radioactive decay. After one lifetime, the number of excited states decays to 36.8% of its original value ($\tfrac{1}{e}$-time). The radiative decay rate $\Gamma_{\mathrm{rad}}$ is inversely proportional to the lifetime $\tau_{21}$: $A_{21} = \Gamma_{\mathrm{rad}} = \frac{1}{\tau_{21}}.$
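A minimal numerical sketch of this exponential decay; the lifetime and initial population below are arbitrary illustrative values.

    import numpy as np

    tau = 16e-9                 # assumed radiative lifetime of 16 ns
    A21 = 1.0 / tau             # corresponding Einstein A coefficient, in 1/s
    N0 = 1.0e6                  # initial number of excited emitters

    t = np.linspace(0.0, 100e-9, 6)        # times from 0 to 100 ns
    N = N0 * np.exp(-A21 * t)              # N(t) = N(0) * exp(-A21 * t)
    for ti, Ni in zip(t, N):
        print(f"t = {ti * 1e9:5.1f} ns   N = {Ni:12.1f}")

    print(np.exp(-1.0))         # fraction remaining after one lifetime, ~0.368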
Theory
Spontaneous transitions were not explainable within the framework of the Schrödinger equation, in which the electronic energy levels were quantized, but the electromagnetic field was not. Given that the eigenstates of an atom are properly diagonalized, the overlap of the wavefunctions between the excited state and the ground state of the atom is zero. Thus, in the absence of a quantized electromagnetic field, the excited state atom cannot decay to the ground state. In order to explain spontaneous transitions, quantum mechanics must be extended to a quantum field theory, wherein the electromagnetic field is quantized at every point in space. The quantum field theory of electrons and electromagnetic fields is known as quantum electrodynamics.
In quantum electrodynamics (or QED), the electromagnetic field has a ground state, the QED vacuum, which can mix with the excited stationary states of the atom. As a result of this interaction, the "stationary state" of the atom is no longer a true eigenstate of the combined system of the atom plus electromagnetic field. In particular, the electron transition from the excited state to the electronic ground state mixes with the transition of the electromagnetic field from the ground state to an excited state, a field state with one photon in it. Spontaneous emission in free space depends upon vacuum fluctuations to get started.
Although there is only one electronic transition from the excited state to ground state, there are many ways in which the electromagnetic field may go from the ground state to a one-photon state. That is, the electromagnetic field has infinitely more degrees of freedom, corresponding to the different directions in which the photon can be emitted. Equivalently, one might say that the phase space offered by the electromagnetic field is infinitely larger than that offered by the atom. This infinite degree of freedom for the emission of the photon results in the apparent irreversible decay, i.e., spontaneous emission.
In the presence of electromagnetic vacuum modes, the combined atom-vacuum system is explained by the superposition of the wavefunctions of the excited state atom with no photon and the ground state atom with a single emitted photon:
$$|\psi(t)\rangle = a(t)\, e^{-i\omega_0 t}\, |e;0\rangle + \sum_{k,s} b_{ks}(t)\, e^{-i\omega_k t}\, |g;1_{ks}\rangle,$$
where $|e;0\rangle$ and $a(t)$ are the atomic excited state–electromagnetic vacuum wavefunction and its probability amplitude, $|g;1_{ks}\rangle$ and $b_{ks}(t)$ are the ground state atom with a single photon (of mode $(k,s)$) wavefunction and its probability amplitude, $\omega_0$ is the atomic transition frequency, and $\omega_k$ is the frequency of the photon. The sum is over $k$ and $s$, which are the wavenumber and polarization of the emitted photon, respectively. As mentioned above, the emitted photon has a chance to be emitted with different wavenumbers and polarizations, and the resulting wavefunction is a superposition of these possibilities. To calculate the probability of finding the atom in the ground state ($|b_{ks}(t)|^2$), one needs to solve the time evolution of the wavefunction with an appropriate Hamiltonian. To solve for the transition amplitude, one needs to average over (integrate over) all the vacuum modes, since one must consider the probabilities that the emitted photon occupies various parts of phase space equally. The "spontaneously" emitted photon has infinitely many different modes to propagate into, so the probability of the atom re-absorbing the photon and returning to the original state is negligible, making the atomic decay practically irreversible. Such irreversible time evolution of the atom–vacuum system is responsible for the apparent spontaneous decay of an excited atom. If one were to keep track of all the vacuum modes, the combined atom–vacuum system would undergo unitary time evolution, making the decay process reversible. Cavity quantum electrodynamics is one such system, where the vacuum modes are modified, resulting in a reversible decay process; see also quantum revival. The theory of spontaneous emission under the QED framework was first calculated by Victor Weisskopf and Eugene Wigner in 1930 in a landmark paper. The Weisskopf–Wigner calculation remains the standard approach to spontaneous radiation emission in atomic and molecular physics. Dirac had also developed the same calculation a couple of years prior to the paper by Weisskopf and Wigner.
Rate of spontaneous emission
The rate of spontaneous emission (i.e., the radiative rate) can be described by Fermi's golden rule. The rate of emission depends on two factors: an 'atomic part', which describes the internal structure of the light source, and a 'field part', which describes the density of electromagnetic modes of the environment. The atomic part describes the strength of a transition between two states in terms of transition moments. In a homogeneous medium, such as free space, the rate of spontaneous emission in the dipole approximation is given by:
$$\Gamma_{\text{rad}}(\omega) = \frac{\omega^3 n \,|\mu_{12}|^2}{3\pi\varepsilon_0\hbar c^3} = \frac{4\alpha\omega^3 n}{3c^2}\,|\langle 1|\mathbf{r}|2\rangle|^2,$$
where $\omega$ is the emission frequency, $n$ is the index of refraction, $\mu_{12}$ is the transition dipole moment, $\varepsilon_0$ is the vacuum permittivity, $\hbar$ is the reduced Planck constant, $c$ is the vacuum speed of light, and $\alpha$ is the fine-structure constant. The expression $\mu_{12} = \langle 1|\mathbf{d}|2\rangle$ stands for the definition of the transition dipole moment for the dipole moment operator $\mathbf{d} = q\mathbf{r}$, where $q$ is the elementary charge and $\mathbf{r}$ stands for the position operator. (This approximation breaks down in the case of inner shell electrons in high-Z atoms.) The above equation clearly shows that the rate of spontaneous emission in free space increases proportionally to $\omega^3$.
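As a rough numerical illustration of the formula above, the following minimal Python sketch (not from the article; the 500 nm wavelength and 1 debye dipole moment are hypothetical values chosen only for illustration) evaluates the free-space dipole rate directly from physical constants:

# Sketch: evaluate Gamma = w^3 * n * |mu12|^2 / (3*pi*eps0*hbar*c^3)
# for an assumed transition (500 nm emission, 1 debye dipole moment).
from scipy.constants import pi, epsilon_0, hbar, c

def spontaneous_emission_rate(omega, mu12, n=1.0):
    """Radiative decay rate (s^-1) in the dipole approximation."""
    return (omega ** 3) * n * abs(mu12) ** 2 / (3 * pi * epsilon_0 * hbar * c ** 3)

wavelength = 500e-9                    # assumed emission wavelength (m)
omega = 2 * pi * c / wavelength        # angular frequency (rad/s)
mu12 = 3.336e-30                       # assumed dipole moment: 1 debye in C*m
gamma = spontaneous_emission_rate(omega, mu12)
print(f"Gamma ~ {gamma:.2e} s^-1, lifetime ~ {1 / gamma:.2e} s")

With these illustrative inputs the sketch gives a rate of a few million per second, i.e. a lifetime of a few hundred nanoseconds, which is the right order of magnitude for an allowed optical transition.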
In contrast with atoms, which have a discrete emission spectrum, quantum dots can be tuned continuously by changing their size. This property has been used to check the $\omega^3$ frequency dependence of the spontaneous emission rate, as described by Fermi's golden rule.
Radiative and nonradiative decay: the quantum efficiency
In the rate equation above, it is assumed that decay of the number of excited states only occurs under emission of light. In this case one speaks of full radiative decay, and this means that the quantum efficiency is 100%. Besides radiative decay, which occurs under the emission of light, there is a second decay mechanism: nonradiative decay. To determine the total decay rate $\Gamma_{\text{tot}}$, radiative and nonradiative rates should be summed:
$$\Gamma_{\text{tot}} = \Gamma_{\text{rad}} + \Gamma_{\text{nrad}},$$
where $\Gamma_{\text{tot}}$ is the total decay rate, $\Gamma_{\text{rad}}$ is the radiative decay rate and $\Gamma_{\text{nrad}}$ is the nonradiative decay rate. The quantum efficiency (QE) is defined as the fraction of emission processes in which emission of light is involved:
$$\eta = \frac{\Gamma_{\text{rad}}}{\Gamma_{\text{nrad}} + \Gamma_{\text{rad}}}.$$
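To make the bookkeeping concrete, the following minimal sketch (illustrative rates only, not values from the article) combines a radiative and a nonradiative rate into the total decay rate, the observed lifetime, and the quantum efficiency defined above:

def decay_summary(gamma_rad, gamma_nrad):
    gamma_tot = gamma_rad + gamma_nrad   # total decay rate (s^-1)
    lifetime = 1.0 / gamma_tot           # observed excited-state lifetime (s)
    qe = gamma_rad / gamma_tot           # fraction of decays that emit a photon
    return gamma_tot, lifetime, qe

# Hypothetical emitter: 1e8 s^-1 radiative, 4e8 s^-1 nonradiative.
gamma_tot, tau, qe = decay_summary(1e8, 4e8)
print(f"total rate = {gamma_tot:.1e} s^-1, lifetime = {tau:.1e} s, QE = {qe:.0%}")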
In nonradiative relaxation, the energy is released as phonons, more commonly known as heat. Nonradiative relaxation occurs when the energy difference between the levels is very small, and these typically occur on a much faster time scale than radiative transitions. For many materials (for instance, semiconductors), electrons move quickly from a high energy level to a meta-stable level via small nonradiative transitions and then make the final move down to the bottom level via an optical or radiative transition. This final transition is the transition over the bandgap in semiconductors. Large nonradiative transitions do not occur frequently because the crystal structure generally cannot support large vibrations without destroying bonds (which generally doesn't happen for relaxation). Meta-stable states form a very important feature that is exploited in the construction of lasers. Specifically, since electrons decay slowly from them, they can be deliberately piled up in this state without too much loss and then stimulated emission can be used to boost an optical signal.
Radiative cascade
If emission leaves a system in an excited state, additional transitions can occur, leading to an atomic radiative cascade. For example, if calcium atoms in a low-pressure atomic beam are excited by ultraviolet light from their 4¹S₀ ground state to the 6¹P₁ state, they can decay in three steps: first to 6¹S₀, then to 4¹P₁, and finally to the ground state. The photons from the second and third transitions have correlated polarizations, demonstrating quantum entanglement. These correlations were used by John Clauser and Alain Aspect in work that contributed to their 2022 Nobel Prize in Physics.
| Physical sciences | Atomic physics | Physics |
28249 | https://en.wikipedia.org/wiki/Search%20algorithm | Search algorithm | In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.
Although search engines use search algorithms, they belong to the study of information retrieval, not algorithmics.
The appropriate search algorithm to use often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes.
Search algorithms can be classified based on their mechanism of searching into three types of algorithms: linear, binary, and hashing. Linear search algorithms check every record for the one associated with a target key in a linear fashion. Binary, or half-interval, searches repeatedly target the center of the search structure and divide the search space in half. Comparison search algorithms improve on linear searching by successively eliminating records based on comparisons of the keys until the target record is found, and can be applied on data structures with a defined order. Digital search algorithms work based on the properties of digits in data structures by using numerical keys. Finally, hashing directly maps keys to records based on a hash function.
Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of O(log n), or logarithmic time. In simple terms, the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space.
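For instance, the half-interval search described above can be sketched in a few lines of Python (a minimal illustration, not part of the original article); each comparison halves the remaining search space, which is where the logarithmic bound comes from:

def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid                 # index of the target record
        elif sorted_items[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1                          # target not present

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3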
Applications of search algorithms
Specific applications of search algorithms include:
Problems in combinatorial optimization, such as:
The vehicle routing problem, a form of shortest path problem
The knapsack problem: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible (a code sketch follows this list).
The nurse scheduling problem
Problems in constraint satisfaction, such as:
The map coloring problem
Filling in a sudoku or crossword puzzle
In game theory and especially combinatorial game theory, choosing the best move to make next (such as with the minmax algorithm)
Finding a combination or password from the whole set of possibilities
Factoring an integer (an important problem in cryptography)
Search engine optimization (SEO) and content optimization for web crawlers
Optimizing an industrial process, such as a chemical reaction, by changing the parameters of the process (like temperature, pressure, and pH)
Retrieving a record from a database
Finding the maximum or minimum value in a list or array
Checking to see if a given value is present in a set of values
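As promised in the knapsack item above, here is a minimal sketch of the 0/1 knapsack problem solved by dynamic programming over capacities (the weights, values, and capacity below are illustrative, not taken from the article):

def knapsack(weights, values, capacity):
    best = [0] * (capacity + 1)        # best[c] = max value achievable with capacity c
    for w, v in zip(weights, values):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))   # -> 7 (take the items of weight 2 and 3)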
Classes
For virtual search spaces
Algorithms for searching virtual spaces are used in the constraint satisfaction problem, where the goal is to find a set of value assignments to certain variables that will satisfy specific mathematical equations and inequations. They are also used when the goal is to find a variable assignment that will maximize or minimize a certain function of those variables. Algorithms for these problems include the basic brute-force search (also called "naïve" or "uninformed" search), and a variety of heuristics that try to exploit partial knowledge about the structure of this space, such as linear relaxation, constraint generation, and constraint propagation.
An important subclass are the local search methods, which view the elements of the search space as the vertices of a graph, with edges defined by a set of heuristics applicable to the case, and scan the space by moving from item to item along the edges, for example according to the steepest descent or best-first criterion, or in a stochastic search. This category includes a great variety of general metaheuristic methods, such as simulated annealing, tabu search, A-teams, and genetic programming, that combine arbitrary heuristics in specific ways. The opposite of local search would be global search methods, which are applicable when the search space is not limited and all aspects of the given network are available to the entity running the search algorithm.
This class also includes various tree search algorithms, that view the elements as vertices of a tree, and traverse that tree in some special order. Examples of the latter include the exhaustive methods such as depth-first search and breadth-first search, as well as various heuristic-based search tree pruning methods such as backtracking and branch and bound. Unlike general metaheuristics, which at best work only in a probabilistic sense, many of these tree-search methods are guaranteed to find the exact or optimal solution, if given enough time. This is called "completeness".
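A minimal depth-first tree search in Python might look like the sketch below (the tree, represented as a dictionary from node to children, and the goal test are hypothetical); breadth-first search differs only in taking nodes from the front of the frontier instead of the back:

def depth_first_search(tree, start, is_goal):
    frontier = [start]
    while frontier:
        node = frontier.pop()                           # take the most recently added node
        if is_goal(node):
            return node
        frontier.extend(reversed(tree.get(node, [])))   # push children; leftmost explored first
    return None                                         # no goal node found

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(depth_first_search(tree, "A", lambda n: n == "E"))   # -> 'E'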
Another important sub-class consists of algorithms for exploring the game tree of multiple-player games, such as chess or backgammon, whose nodes consist of all possible game situations that could result from the current situation. The goal in these problems is to find the move that provides the best chance of a win, taking into account all possible moves of the opponent(s). Similar problems occur when humans or machines have to make successive decisions whose outcomes are not entirely under one's control, such as in robot guidance or in marketing, financial, or military strategy planning. This kind of problem — combinatorial search — has been extensively studied in the context of artificial intelligence. Examples of algorithms for this class are the minimax algorithm, alpha–beta pruning, and the A* algorithm and its variants.
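The following is a minimal sketch of minimax with alpha–beta pruning over a hypothetical two-ply game tree (leaves hold payoffs for the maximizing player; none of this comes from the article):

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):         # leaf node: return its payoff
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:              # opponent will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

game_tree = [[3, 5], [2, 9]]               # two moves, each answered by two replies
print(alphabeta(game_tree, True))          # -> 3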
For sub-structures of a given structure
The name "combinatorial search" is generally used for algorithms that look for a specific sub-structure of a given discrete structure, such as a graph, a string, a finite group, and so on. The term combinatorial optimization is typically used when the goal is to find a sub-structure with a maximum (or minimum) value of some parameter. (Since the sub-structure is usually represented in the computer by a set of integer variables with constraints, these problems can be viewed as special cases of constraint satisfaction or discrete optimization; but they are usually formulated and solved in a more abstract setting where the internal representation is not explicitly mentioned.)
An important and extensively studied subclass are the graph algorithms, in particular graph traversal algorithms, for finding specific sub-structures in a given graph — such as subgraphs, paths, circuits, and so on. Examples include Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm.
Another important subclass of this category are the string searching algorithms, that search for patterns within strings. Two famous examples are the Boyer–Moore and Knuth–Morris–Pratt algorithms, and several algorithms based on the suffix tree data structure.
Search for the maximum of a function
In 1953, American statistician Jack Kiefer devised Fibonacci search which can be used to find the maximum of a unimodal function and has many other applications in computer science.
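Fibonacci search itself is not reproduced here, but the closely related idea of shrinking a bracketing interval around the maximum of a unimodal function can be sketched with a simple ternary search (a different, illustrative technique; the function and interval below are hypothetical):

def unimodal_max(f, lo, hi, tol=1e-9):
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1                        # the maximum lies to the right of m1
        else:
            hi = m2                        # the maximum lies to the left of m2
    return (lo + hi) / 2

print(round(unimodal_max(lambda x: -(x - 2.0) ** 2, 0.0, 5.0), 6))   # -> 2.0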
For quantum computers
There are also search methods designed for quantum computers, like Grover's algorithm, that are theoretically faster than linear or brute-force search even without the help of data structures or heuristics. While the ideas and applications behind quantum computers are still entirely theoretical, studies have been conducted with algorithms like Grover's that accurately replicate the hypothetical physical versions of quantum computing systems.
| Mathematics | Algorithms | null |
28266 | https://en.wikipedia.org/wiki/Scurvy | Scurvy | Scurvy is a deficiency disease (state of malnutrition) resulting from a lack of vitamin C (ascorbic acid). Early symptoms of deficiency include weakness, fatigue, and sore arms and legs. Without treatment, decreased red blood cells, gum disease, changes to hair, and bleeding from the skin may occur. As scurvy worsens, there can be poor wound healing, personality changes, and finally death from infection or bleeding.
It takes at least a month of little to no vitamin C in the diet before symptoms occur. In modern times, scurvy occurs most commonly in people with mental disorders, unusual eating habits, alcoholism, and older people who live alone. Other risk factors include intestinal malabsorption and dialysis.
While many animals produce their own vitamin C, humans and a few others do not. Vitamin C, an antioxidant, is required to make the building blocks for collagen, carnitine, and catecholamines, and assists the intestines in the absorption of iron from foods. Diagnosis is typically based on outward appearance, X-rays, and improvement after treatment.
Treatment is with vitamin C supplements taken by mouth. Improvement often begins in a few days with complete recovery in a few weeks. Sources of vitamin C in the diet include citrus fruit and a number of vegetables, including red peppers, broccoli, and tomatoes. Cooking often decreases the residual amount of vitamin C in foods.
Scurvy is rare compared to other nutritional deficiencies. It occurs more often in the developing world in association with malnutrition. Rates among refugees are reported at 5 to 45 percent. Scurvy was described as early as the time of ancient Egypt, and historically it was a limiting factor in long-distance sea travel, often killing large numbers of people. During the Age of Sail, it was assumed that 50 percent of the sailors would die of scurvy on a major trip.
Signs and symptoms
Early symptoms are malaise and lethargy. After one to three months, patients develop shortness of breath and bone pain. Myalgias may occur because of reduced carnitine production. Other symptoms include skin changes with roughness, easy bruising and petechiae, gum disease, loosening of teeth, poor wound healing, and emotional changes (which may appear before any physical changes). Dry mouth and dry eyes similar to Sjögren's syndrome may occur. In the late stages, jaundice, generalised edema, oliguria, neuropathy, fever, convulsions, and eventual death are frequently seen.
Cause
Scurvy, including subclinical scurvy, is caused by a deficiency of dietary vitamin C since humans are unable to synthesize vitamin C. Provided the diet contains sufficient vitamin C, the lack of working L-gulonolactone oxidase (GULO) enzyme has no significance. In modern Western societies, scurvy is rarely present in adults, although infants and elderly people are affected. Virtually all commercially available baby formulas contain added vitamin C, preventing infantile scurvy. Human breast milk contains sufficient vitamin C if the mother has an adequate intake. Commercial milk is pasteurized, a heating process that destroys the natural vitamin C content of the milk.
Scurvy is one of the accompanying diseases of malnutrition (other such micronutrient deficiencies are beriberi and pellagra) and thus is still widespread in areas of the world dependent on external food aid. Although rare, there are also documented cases of scurvy due to poor dietary choices by people living in industrialized nations.
Pathogenesis
Vitamins are essential to the production and use of enzymes in ongoing processes throughout the human body. Ascorbic acid is needed for a variety of biosynthetic pathways, by accelerating hydroxylation and amidation reactions.
The early symptoms of malaise and lethargy may be due to either impaired fatty acid metabolism from a lack of carnitine and/or from a lack of catecholamines which are needed for the cAMP-dependent pathway in both glycogen metabolism and fatty acid metabolism. Impairment of either fatty acid metabolism or glycogen metabolism leads to decreased ATP (energy) production. ATP is needed for cellular functions, including muscle contraction. (For low ATP within the muscle cell, see also Purine nucleotide cycle.)
In the synthesis of collagen, ascorbic acid is required as a cofactor for prolyl hydroxylase and lysyl hydroxylase. These two enzymes are responsible for the hydroxylation of the proline and lysine amino acids in collagen. Hydroxyproline and hydroxylysine are important for stabilizing collagen by cross-linking the propeptides in collagen.
Collagen is a primary structural protein in the human body, necessary for healthy blood vessels, muscle, skin, bone, cartilage, and other connective tissues. Defective connective tissue leads to fragile capillaries, resulting in abnormal bleeding, bruising, and internal hemorrhaging. Collagen is an important part of bone, so bone formation is also affected. Teeth loosen, bones break more easily, and once-healed breaks may recur. Defective collagen fibrillogenesis impairs wound healing. Untreated scurvy is invariably fatal.
Diagnosis
Diagnosis is typically based on physical signs, X-rays, and improvement after treatment.
Differential diagnosis
Various childhood-onset disorders can mimic the clinical and X-ray picture of scurvy such as:
Rickets
Osteochondrodysplasias especially osteogenesis imperfecta
Blount's disease
Osteomyelitis
Prevention
Scurvy can be prevented by a diet that includes uncooked vitamin C-rich foods such as amla, bell peppers (sweet peppers), blackcurrants, broccoli, chili peppers, guava, kiwifruit, and parsley. Other sources rich in vitamin C are fruits such as lemons, limes, oranges, papaya, and strawberries. It is also found in vegetables, such as brussels sprouts, cabbage, potatoes, and spinach. Some fruits and vegetables not high in vitamin C may be pickled in lemon juice, which is high in vitamin C. Nutritional supplements that provide ascorbic acid well above what is required to prevent scurvy may cause adverse health effects.
Fresh meat from animals, notably internal organs, contains enough vitamin C to prevent scurvy, and even partly treat it.
Scott's 1902 Antarctic expedition used fresh seal meat and an increased allowance of bottled fruits, whereby complete recovery from incipient scurvy was reported to have taken less than two weeks.
Treatment
Scurvy will improve with doses of vitamin C as low as 10 mg per day, though doses of around 100 mg per day are typically recommended. Most people make a full recovery within 2 weeks.
History
Symptoms of scurvy have been recorded in Ancient Egypt as early as 1550 BCE. It was first reported amongst soldiers and sailors having inadequate access to fruits and vegetables which resulted in vitamin C deficiency. In Ancient Greece, the physician Hippocrates (460–370 BC) described symptoms of scurvy, specifically a "swelling and obstruction of the spleen." In 406 CE, the Chinese monk Faxian wrote that ginger was carried on Chinese ships to prevent scurvy.
The knowledge that consuming certain foods is a cure for scurvy has been repeatedly forgotten and rediscovered into the early 20th century. Scurvy occurred during the Irish potato famine in 1845 and also during the American Civil War. In 2002, scurvy outbreaks were recorded in Afghanistan following war and drought.
Early modern era
In the 13th century Crusaders developed scurvy. In the 1497 expedition of Vasco da Gama, the curative effects of citrus fruit were already observed and were confirmed by Pedro Álvares Cabral and his crew in 1507.
The Portuguese planted fruit trees and vegetables on Saint Helena, a stopping point for homebound voyages from Asia, and left their sick who had scurvy and other ailments to be taken home by the next ship if they recovered. In 1500, one of the pilots of Cabral's fleet bound for India noted that in Malindi, its king offered the expedition fresh supplies such as lambs, chickens, and ducks, along with lemons and oranges, due to which "some of our ill were cured of scurvy".
These travel accounts did not stop further maritime tragedies caused by scurvy, first because of the lack of communication between travelers and those responsible for their health, and because fruits and vegetables could not be kept for long on ships.
In 1536, the French explorer Jacques Cartier, while exploring the St. Lawrence River, used the local St. Lawrence Iroquoians' knowledge to save his men dying of scurvy. He boiled the needles of the arbor vitae tree (eastern white cedar) to make a tea that was later shown to contain 50 mg of vitamin C per 100 grams. Such treatments were not available aboard ship, where the disease was most common. Later, possibly inspired by this incident, several European countries experimented with preparations of various conifers, such as spruce beer, as cures for scurvy.
In 1579, the Spanish friar and physician Agustin Farfán published a book Tractado breve de anathomía y chirugía, y de algunas enfermedades que más comúnmente suelen haver en esta Nueva España in which he recommended oranges and lemons for scurvy, a remedy that was already known in the Spanish navy.
In February 1601, Captain James Lancaster, while commanding the first English East India Company fleet en route to Sumatra, landed on the northern coast of Madagascar specifically to obtain lemons and oranges for his crew to stop scurvy. Captain Lancaster conducted an experiment using four ships under his command. One ship's crew received routine doses of lemon juice while the other three did not receive such treatment. As a result, members of the non-treated ships started to contract scurvy, with many dying as a result.
Researchers have estimated that during the Age of Exploration (between 1500 and 1800), scurvy killed at least two million sailors. Jonathan Lamb wrote: "In 1499, Vasco da Gama lost 116 of his crew of 170; In 1520, Magellan lost 208 out of 230; ... all mainly to scurvy."
In 1593, Admiral Sir Richard Hawkins advocated drinking orange and lemon juice to prevent scurvy.
A 1609 book by Bartolomé Leonardo de Argensola recorded several different remedies for scurvy known at this time in the Moluccas, including a kind of wine mixed with cloves and ginger, and "certain herbs". The Dutch sailors in the area were said to cure the same disease by drinking lime juice.
In 1614, John Woodall, Surgeon General of the East India Company, published The Surgion's Mate as a handbook for apprentice surgeons aboard the company's ships. He repeated the experience of mariners that the cure for scurvy was fresh food or, if not available, oranges, lemons, limes, and tamarinds. He was, however, unable to explain the reason why, and his assertion had no impact on the prevailing opinion of the influential physicians of the age, that scurvy was a digestive complaint.
Besides ocean travel, even in Europe, until the late Middle Ages, scurvy was common in late winter, when few green vegetables, fruits, and root vegetables were available. This gradually improved with the introduction of potatoes from the Americas; by 1800, scurvy was virtually unheard of in Scotland, where it had previously been endemic.
18th century
In 2009, a handwritten household book authored by a Cornishwoman in 1707 was discovered in a house in Hasfield, Gloucestershire, containing a recipe "for the Scurvy" amongst other largely medicinal and herbal recipes. The recipe consisted of extracts from various plants mixed with a plentiful supply of orange juice, white wine, or beer.
In 1734, Leiden-based physician Johann Bachstrom published a book on scurvy in which he stated, "scurvy is solely owing to a total abstinence from fresh vegetable food, and greens; which is alone the primary cause of the disease", and urged the use of fresh fruit and vegetables as a cure.
It was not until 1747 that James Lind formally demonstrated that scurvy could be treated by supplementing the diet with citrus fruit, in one of the first controlled clinical experiments reported in the history of medicine. As a naval surgeon on HMS Salisbury, Lind had compared several suggested scurvy cures: hard cider, vitriol, vinegar, seawater, oranges, lemons, and a mixture of balsam of Peru, garlic, myrrh, mustard seed and radish root. In A Treatise on the Scurvy (1753), Lind explained the details of his clinical trial and concluded "the results of all my experiments was, that oranges and lemons were the most effectual remedies for this distemper at sea." However, the experiment and its results occupied only a few paragraphs in a work that was long and complex and had little impact. Lind himself never actively promoted lemon juice as a single 'cure'. He shared medical opinion at the time that scurvy had multiple causes – notably hard work, bad water, and the consumption of salt meat in a damp atmosphere which inhibited healthful perspiration and normal excretion – and therefore required multiple solutions. Lind was also sidetracked by the possibilities of producing a concentrated 'rob' of lemon juice by boiling it. This process destroyed the vitamin C and was therefore unsuccessful.
During the 18th century, scurvy killed more British sailors than wartime enemy action. It was mainly by scurvy that George Anson, in his celebrated voyage of 1740–1744, lost nearly two-thirds of his crew (1,300 out of 2,000) within the first 10 months of the voyage. The Royal Navy enlisted 184,899 sailors during the Seven Years' War; 133,708 of these were "missing" or died from disease, and scurvy was the leading cause.
Although sailors and naval surgeons were increasingly convinced that citrus fruits could cure scurvy throughout this period, the classically trained physicians who determined medical policy dismissed this evidence as merely anecdotal, as it did not conform to their theories of disease. Literature championing the cause of citrus juice had no practical impact. The medical theory was based on the assumption that scurvy was a disease of internal putrefaction brought on by faulty digestion caused by the hardships of life at sea and the naval diet. Although successive theorists gave this basic idea different emphases, the remedies they advocated (and which the navy accepted) amounted to little more than the consumption of 'fizzy drinks' to activate the digestive system, the most extreme of which was the regular consumption of 'elixir of vitriol' – sulphuric acid taken with spirits and barley water, and laced with spices.
In 1764, a new and similarly inaccurate theory on scurvy appeared. Advocated by Dr David MacBride and Sir John Pringle, Surgeon General of the Army and later President of the Royal Society, this idea was that scurvy was the result of a lack of 'fixed air' in the tissues, which could be prevented by drinking infusions of malt and wort whose fermentation within the body would stimulate digestion and restore the missing gases. These ideas received wide and influential backing: when James Cook set off to circumnavigate the world (1768–1771) in HMS Endeavour, malt and wort were at the top of the list of the remedies he was ordered to investigate. The others were beer, sauerkraut (a good source of vitamin C), and Lind's 'rob'. The list did not include lemons.
Cook did not lose a single man to scurvy, and his report came down in favor of malt and wort. The reason for the health of his crews on this and other voyages was Cook's regime of shipboard cleanliness, enforced by strict discipline, and frequent replenishment of fresh food and greenstuffs. Another beneficial rule implemented by Cook was his prohibition of the consumption of salt fat skimmed from the ship's copper boiling pans, then a common practice elsewhere in the Navy. In contact with air, the copper formed compounds that prevented the absorption of vitamins by the intestines.
The first major long-distance expedition that experienced virtually no scurvy was that of the Spanish naval officer Alessandro Malaspina, 1789–1794. Malaspina's medical officer, Pedro González, was convinced that fresh oranges and lemons were essential for preventing scurvy. Only one outbreak occurred, during a 56-day trip across the open sea. Five sailors came down with symptoms, one seriously. After three days at Guam, all five were healthy again. Spain's large empire and many ports of call made it easier to acquire fresh fruit.
Although towards the end of the century, MacBride's theories were being challenged, the medical authorities in Britain remained committed to the notion that scurvy was a disease of internal 'putrefaction' and the Sick and Hurt Board, run by administrators, felt obliged to follow its advice. Within the Royal Navy, however, opinion – strengthened by first-hand experience with lemon juice at the siege of Gibraltar and during Admiral Rodney's expedition to the Caribbean – had become increasingly convinced of its efficacy. This was reinforced by the writings of experts like Gilbert Blane and Thomas Trotter and by the reports of up-and-coming naval commanders.
With the coming of war in 1793, the need to eliminate scurvy became more urgent. The first initiative came not from the medical establishment but from the admirals. Ordered to lead an expedition against Mauritius, Rear Admiral Gardner was uninterested in the wort, malt, and elixir of vitriol that were still being issued to ships of the Royal Navy, and demanded that he be supplied with lemons to counteract scurvy on the voyage. Members of the Sick and Hurt Board, recently augmented by two practical naval surgeons, supported the request, and the Admiralty ordered that it be done. There was, however, a last-minute change of plan, and the expedition against Mauritius was canceled. On 2 May 1794, only HMS Suffolk and two sloops under Commodore Peter Rainier sailed for the east with an outward bound convoy, but the warships were fully supplied with lemon juice and the sugar with which it had to be mixed.
In March 1795, it was reported that the Suffolk had arrived in India after a four-month voyage without a trace of scurvy and with a crew that was healthier than when it set out. The effect was immediate. Fleet commanders clamored also to be supplied with lemon juice, and by June the Admiralty acknowledged the groundswell of demand in the navy and agreed to a proposal from the Sick and Hurt Board that lemon juice and sugar should in future be issued as a daily ration to the crews of all warships.
It took a few years before the method of distribution to all ships in the fleet had been perfected and the supply of the huge quantities of lemon juice required to be secured, but by 1800, the system was in place and functioning. This led to a remarkable health improvement among the sailors and consequently played a critical role in gaining an advantage in naval battles against enemies who had yet to introduce the measures.
Scurvy was not only a disease of seafarers. The early colonists of Australia suffered greatly because of the lack of fresh fruit and vegetables in the winter. There the disease was called Spring fever or Spring disease and was described as an often fatal condition associated with skin lesions, bleeding gums, and lethargy. It was eventually identified as scurvy and the remedies already in use at sea were implemented.
19th century
The surgeon-in-chief of Napoleon's army at the Siege of Alexandria (1801), Baron Dominique-Jean Larrey, wrote in his memoirs that the consumption of horse meat helped the French to curb an epidemic of scurvy. The meat was cooked but was freshly obtained from young horses bought from Arabs, and was nevertheless effective. This helped to start the 19th-century tradition of horse meat consumption in France.
Lauchlin Rose patented a method used to preserve citrus juice without alcohol in 1867, creating a concentrated drink known as Rose's lime juice. The Merchant Shipping Act of 1867 required all ships of the Royal Navy and Merchant Navy to provide a daily "lime or lemon juice" ration of one pound to sailors to prevent scurvy. The product became nearly ubiquitous, hence the term "limey", first for British sailors, then for English immigrants within the former British colonies (particularly America, New Zealand, and South Africa), and finally, in old American slang, all British people.
The plant Cochlearia officinalis, also known as "common scurvygrass", acquired its common name from the observation that it cured scurvy, and it was taken on board ships in dried bundles or distilled extracts. Its bitter taste was usually disguised with herbs and spices; however, this did not prevent scurvygrass drinks and sandwiches from becoming a popular fad in the UK until the middle of the nineteenth century, when citrus fruits became more readily available.
West Indian limes began to take over from lemons, when Spain's alliance with France against Britain in the Napoleonic Wars made the supply of Mediterranean lemons problematic, and because they were more easily obtained from Britain's Caribbean colonies and were believed to be more effective because they were more acidic. It was the acid, not the (then-unknown) Vitamin C that was believed to cure scurvy. The West Indian limes were significantly lower in Vitamin C than the previous lemons and further were not served fresh but rather as lime juice, which had been exposed to light and air, and piped through copper tubing, all of which significantly reduced the Vitamin C. Indeed, a 1918 animal experiment using representative samples of the Navy and Merchant Marine's lime juice showed that it had virtually no antiscorbutic power at all.
The belief that scurvy was fundamentally a nutritional deficiency, best treated by consumption of fresh food, particularly fresh citrus or fresh meat, was not universal in the 19th and early 20th centuries, and thus sailors and explorers continued to have scurvy into the 20th century. For example, the Belgian Antarctic Expedition of 1897–1899 became seriously affected by scurvy when its leader, Adrien de Gerlache, initially discouraged his men from eating penguin and seal meat.
In the Royal Navy's Arctic expeditions in the mid-19th century, it was widely believed that scurvy was prevented by good hygiene on board ship, regular exercise, and maintaining crew morale, rather than by a diet of fresh food. Navy expeditions continued to be plagued by scurvy even while fresh (not jerked or tinned) meat was well known as a practical antiscorbutic among civilian whalers and explorers in the Arctic. In the latter half of the 19th century, there was greater recognition of the value of eating fresh meat as a means of avoiding or treating scurvy, but the lack of available game to hunt at high latitudes in winter meant it was not always a viable remedy. Criticism also focused on the fact that some of the men most affected by scurvy on Naval polar expeditions had been heavy drinkers, with suggestions that this predisposed them to the condition. Even cooking fresh meat did not destroy its antiscorbutic properties, especially as many cooking methods failed to bring all the meat to high temperature.
The confusion is attributed to several factors:
while fresh citrus (particularly lemons) cured scurvy, lime juice that had been exposed to light, air, and copper tubing did not – thus undermining the theory that citrus cured scurvy;
fresh meat (especially organ meat and raw meat, consumed in arctic exploration) also cured scurvy, undermining the theory that fresh vegetable matter was essential to preventing and curing scurvy;
increased marine speed via steam shipping and improved nutrition on land reduced the incidence of scurvy – and thus the ineffectiveness of copper-piped lime juice compared to fresh lemons was not immediately revealed.
In the resulting confusion, a new hypothesis was proposed, following the new germ theory of disease – that scurvy was caused by ptomaine, a waste product of bacteria, particularly in tainted tinned meat.
Infantile scurvy emerged in the late 19th century because children were fed pasteurized cow's milk, particularly in the urban upper class. While pasteurization killed bacteria, it also destroyed vitamin C. This was eventually resolved by supplementing with onion juice or cooked potatoes. Native Americans helped save some newcomers from scurvy by directing them to eat wild onions.
20th century
By the early 20th century, when Robert Falcon Scott made his first expedition to the Antarctic (1901–1904), the prevailing theory was that scurvy was caused by "ptomaine poisoning", particularly in tinned meat. However, Scott discovered that a diet of fresh meat from Antarctic seals cured scurvy before any fatalities occurred. But while he saw fresh meat as a cure for scurvy, he remained confused about its underlying causes.
In 1907, an animal model that would eventually help to isolate and identify the "antiscorbutic factor" was discovered. Axel Holst and Theodor Frølich, two Norwegian physicians studying shipboard beriberi contracted by ship's crews in the Norwegian Fishing Fleet, wanted a small test mammal to substitute for the pigeons then used in beriberi research. They fed guinea pigs their test diet of grains and flour, which had earlier produced beriberi in their pigeons, and were surprised when classic scurvy resulted instead. This was a serendipitous choice of animal. Until that time, scurvy had not been observed in any organism apart from humans and had been considered an exclusively human disease. Certain birds, mammals, and fish are susceptible to scurvy, but pigeons are unaffected since they can synthesize ascorbic acid internally. Holst and Frølich found they could cure scurvy in guinea pigs with the addition of various fresh foods and extracts. This discovery of an animal experimental model for scurvy, which was made even before the essential idea of "vitamins" in foods had been put forward, has been called the single most important piece of vitamin C research.
In 1915, New Zealand troops in the Gallipoli Campaign had a lack of vitamin C in their diet which caused many of the soldiers to contract scurvy.
Vilhjalmur Stefansson, an Arctic explorer who had lived among the Inuit, proved that the all-meat diet they consumed did not lead to vitamin deficiencies. He participated in a study in New York's Bellevue Hospital in February 1928, where he and a companion ate only meat for a year while under close medical observation, yet remained in good health.
In 1927, Hungarian biochemist Albert Szent-Györgyi isolated a compound he called "hexuronic acid". Szent-Györgyi suspected hexuronic acid, which he had isolated from adrenal glands, to be the antiscorbutic agent, but he could not prove it without an animal-deficiency model. In 1932, the connection between hexuronic acid and scurvy was finally proven by American researcher Charles Glen King of the University of Pittsburgh. King's laboratory was given some hexuronic acid by Szent-Györgyi and soon established that it was the sought-after anti-scorbutic agent. Because of this, hexuronic acid was subsequently renamed ascorbic acid.
21st century
Rates of scurvy in the developed world are low, owing to greater access to vitamin C-rich foods. Those most commonly affected are malnourished people in the developing world and homeless people. There have been outbreaks of the condition in refugee camps. Case reports in the developing world of those with poorly healing wounds have occurred.
In 2020, the overall risk of scurvy in the US was about one in 4,000 people, up significantly from even a few years before. However, the risk is not evenly distributed, and about two-thirds of all scurvy is found in autistic people. Children and young people with autism are at risk of developing scurvy because some of them only eat a small number of foods (e.g., only rice and pasta). For some of them, the restricted diet takes the form of avoidant/restrictive food intake disorder (ARFID).
Human trials
Notable human dietary studies of experimentally induced scurvy were conducted on conscientious objectors in Britain during World War II, and on Iowa state prisoner volunteers in the United States in the late 1960s. These studies both found that all obvious symptoms of scurvy previously induced by an experimental scorbutic diet with extremely low vitamin C content could be completely reversed by additional vitamin C supplementation of only 10 mg per day. In these experiments, no clinical difference was noted between men given 70 mg vitamin C per day (which produced blood levels of vitamin C of about 0.55 mg/dl, a fraction of tissue saturation levels), and those given 10 mg per day (which produced lower blood levels). Men in the prison study developed the first signs of scurvy about four weeks after starting the vitamin C-free diet, whereas in the British study, six to eight months were required, possibly because the subjects were pre-loaded with a 70 mg/day supplement for six weeks before the scorbutic diet was fed.
Men in both studies, on a diet devoid or nearly devoid of vitamin C, had blood levels of vitamin C too low to be accurately measured when they developed signs of scurvy, and in the Iowa study, at this time were estimated (by labeled vitamin C dilution) to have a body pool of less than 300 mg, with daily turnover of only 2.5 mg/day.
In other animals
Most animals and plants can synthesize vitamin C through a sequence of enzyme-driven steps, which convert monosaccharides to vitamin C. However, some mammals have lost the ability to synthesize vitamin C, notably simians and tarsiers. These make up one of two major primate suborders, haplorrhini, and this group includes humans. The strepsirrhini (non-tarsier prosimians) can make their own vitamin C, and these include lemurs, lorises, pottos, and galagos. Ascorbic acid is also not synthesized by at least two species of caviidae, the capybara and the guinea pig. Certain birds and fish also do not synthesize their own vitamin C. All species that do not synthesize ascorbate require it in the diet. Deficiency causes scurvy in humans, and somewhat similar symptoms in other animals.
Animals that can contract scurvy all lack the L-gulonolactone oxidase (GULO) enzyme, which is required in the last step of vitamin C synthesis. The genomes of these species contain GULO as pseudogenes, which serve as insight into the evolutionary past of the species.
Name
In babies, scurvy is sometimes referred to as Barlow's disease, named after Thomas Barlow, a British physician who described it in 1883. However, Barlow's disease may also refer to mitral valve prolapse (Barlow's syndrome), first described by John Brereton Barlow in 1966.
| Biology and health sciences | Health and fitness: General | Health |
28267 | https://en.wikipedia.org/wiki/Sydney%20Harbour%20Bridge | Sydney Harbour Bridge | The Sydney Harbour Bridge is a steel through arch bridge in Sydney, New South Wales, Australia, spanning Sydney Harbour from the central business district (CBD) to the North Shore. The view of the bridge, the Harbour, and the nearby Sydney Opera House is widely regarded as an iconic image of Sydney, and of Australia itself. Nicknamed "the Coathanger" because of its arch-based design, the bridge carries rail, vehicular, bicycle and pedestrian traffic.
Under the direction of John Bradfield of the New South Wales Department of Public Works, the bridge was designed and built by British firm Dorman Long of Middlesbrough, and opened in 1932. The bridge's general design, which Bradfield tasked the NSW Department of Public Works with producing, was a rough copy of the Hell Gate Bridge in New York City. The design chosen from the tender responses was original work created by Dorman Long, who leveraged some of the design from its own Tyne Bridge.
It is the tenth-longest spanning-arch bridge in the world and the tallest steel arch bridge, measuring 134 m (440 ft) from top to water level. It was also the world's widest long-span bridge, at 48.8 m (160 ft) wide, until construction of the new Port Mann Bridge in Vancouver was completed in 2012.
Structure
The southern end of the bridge is located at Dawes Point in The Rocks area, and the northern end at Milsons Point on the lower North Shore. There are six original lanes of road traffic through the main roadway, plus an additional two lanes of road traffic on its eastern side, using lanes that were formerly tram tracks. Adjacent to the road traffic, a path for pedestrian use runs along the eastern side of the bridge, whilst a dedicated path for bicycle use runs along the western side. Between the main roadway and the western bicycle path lies the North Shore railway line.
The main roadway across the bridge is known as the Bradfield Highway and is about 2.4 km (1.5 mi) long, making it one of the shortest highways in Australia.
Arch
The arch is composed of two 28-panel arch trusses; their heights vary from at the centre of the arch, to at the ends next to the pylons.
The arch has a span of 503 m (1,650 ft), and its summit is 134 m (440 ft) above mean sea level; expansion of the steel structure on hot days can increase the height of the arch by about 18 cm (7 in). The total weight of the steelwork of the bridge, including the arch and approach spans, is 52,800 tonnes, with the arch itself weighing 39,000 tonnes. About 79% of the steel, specifically those technical sections constituting the curve of the arch, was imported pre-formed from England, with the rest being sourced from the Newcastle Steelworks. On site, Dorman Long & Co set up two workshops at Milsons Point, at the site of the present day Luna Park, and fabricated the steel into the girders and other required parts.
The bridge is held together by six million Australian-made hand-driven rivets supplied by the McPherson company of Melbourne, the last being driven through the deck on 21 January 1932. The rivets were heated red-hot and inserted into the plates; the headless end was immediately rounded over with a large pneumatic rivet gun. The largest of the rivets used weighed and was long. The practice of riveting large steel structures, rather than welding, was, at the time, a proven and understood construction technique, whilst structural welding had not at that stage been adequately developed for use on the bridge.
Pylons
At each end of the arch stands a pair of concrete pylons, faced with granite. The pylons were designed by the Scottish architect Thomas S. Tait, a partner in the architectural firm John Burnet & Partners.
Some 250 Australian, Scottish, and Italian stonemasons and their families relocated to a temporary settlement at Moruya, south of Sydney, where they quarried around of granite for the bridge pylons. The stonemasons cut, dressed, and numbered the blocks, which were then transported to Sydney on three ships built specifically for this purpose. The Moruya quarry was managed by John Gilmore, a Scottish stonemason who emigrated with his young family to Australia in 1924, at the request of the project managers. The concrete used was also Australian-made and supplied from Kandos.
Abutments at the base of the pylons are essential to support the loads from the arch and hold its span firmly in place, but the pylons themselves have no structural purpose. They were included to provide a frame for the arch panels and to give better visual balance to the bridge. The pylons were not part of the original design, and were only added to allay public concern about the structural integrity of the bridge.
Although originally added to the bridge solely for their aesthetic value, all four pylons have now been put to use. The south-eastern pylon contains a museum and tourist centre, with a 360° lookout at the top providing views across the Harbour and city. The south-western pylon is used by Transport for NSW to support its CCTV cameras overlooking the bridge and the roads around that area. The two pylons on the north shore include venting chimneys for fumes from the Sydney Harbour Tunnel, with the base of the southern pylon containing the Transport for NSW maintenance shed for the bridge, and the base of the northern pylon containing the traffic management shed for tow trucks and safety vehicles used on the bridge.
In 1942, the pylons were modified to include parapets and anti-aircraft guns designed to assist in both Australia's defence and general war effort.
History
Early proposals
There had been plans to build a bridge as early as 1814, when convict and noted architect Francis Greenway reputedly proposed to Governor Lachlan Macquarie that a bridge be built from the northern to the southern shore of the harbour. In 1825, Greenway wrote a letter to the newspaper The Australian stating that such a bridge would "give an idea of strength and magnificence that would reflect credit and glory on the colony and the Mother Country".
Nothing came of Greenway's suggestions, but the idea remained alive, and many further suggestions were made during the nineteenth century. In 1840, naval architect Robert Brindley proposed that a floating bridge be built. Engineer Peter Henderson produced one of the earliest known drawings of a bridge across the harbour around 1857. A suggestion for a truss bridge was made in 1879, and in 1880 a high-level bridge estimated at £850,000 was proposed.
In 1900, the Lyne government committed to building a new Central railway station and organised a worldwide competition for the design and construction of a harbour bridge, overseen by Minister for Public Works Edward William O'Sullivan. G.E.W. Cruttwell, a London-based engineer, was awarded the first prize of £1,000. Local engineer Norman Selfe submitted a design for a suspension bridge and won the second prize of £500. In 1902, when the outcome of the first competition became mired in controversy, Selfe won a second competition outright, with a design for a steel cantilever bridge. The selection board were unanimous, commenting that, "The structural lines are correct and in true proportion, and... the outline is graceful". However, due to an economic downturn and a change of government at the 1904 NSW State election, construction never began.
A unique three-span bridge was proposed in 1922 by Ernest Stowe with connections at Balls Head, Millers Point, and Balmain with a memorial tower and hub on Goat Island.
Planning
In 1914, John Bradfield was appointed Chief Engineer of Sydney Harbour Bridge and Metropolitan Railway Construction, and his work on the project over many years earned him a lasting legacy as the "father of the bridge". Bradfield's preference at the time was for a cantilever bridge without piers, and in 1916 the NSW Legislative Assembly passed a bill for such a construction; however, it did not proceed, as the Legislative Council rejected the legislation on the basis that the money would be better spent on the war effort.
Following World War I, plans to build the bridge again built momentum. Bradfield persevered with the project, fleshing out the details of the specifications and financing for his cantilever bridge proposal, and in 1921 he travelled overseas to investigate tenders. His confidential secretary Kathleen M. Butler handled all the international correspondence during his absence, her title belying her role as project manager as well as a technical adviser. On return from his travels Bradfield decided that an arch design would also be suitable and he and officers of the NSW Department of Public Works prepared a general design for a single-arch bridge based upon New York City's Hell Gate Bridge. In 1922 the government of George Fuller passed the Sydney Harbour Act 1922, specifying the construction of a high-level cantilever or arch bridge across the Harbour between Dawes Point and Milsons Point, along with construction of necessary approaches and electric railway lines, and worldwide tenders were invited for the project.
As a result of the tendering process, the government received twenty proposals from six companies; on 24 March 1924 the contract was awarded to Dorman Long & Co of Middlesbrough, England, well known as the contractors who later built the similar Tyne Bridge in Newcastle upon Tyne, for an arch bridge at a quoted price of AU£4,217,721 11s 10d. The arch design was cheaper than alternative cantilever and suspension bridge proposals, and also provided greater rigidity, making it better suited for the heavy loads expected. In 1924, Kathleen Butler travelled to London to set up the project office within those of Dorman, Long & Co., "attending the most difficult and technical questions in regard to the contract, and dealing with a mass of correspondence".
Bradfield and his staff were ultimately to oversee the bridge design and building process as it was executed by Dorman Long and Co, whose Consulting Engineer, Sir Ralph Freeman of Sir Douglas Fox and Partners, and his associate Georges Imbault, carried out the detailed design and erection process of the bridge. Architects for the contractors were from the British firm John Burnet & Partners of Glasgow, Scotland. Lawrence Ennis, of Dorman Long, served as Director of Construction and primary onsite supervisor throughout the entire build, alongside Edward Judge, Dorman Long's Chief Technical Engineer, who functioned as Consulting and Designing Engineer.
The building of the bridge coincided with the construction of a system of underground railways beneath Sydney's CBD, known today as the City Circle, and the bridge was designed with this in mind. The bridge was designed to carry six lanes of road traffic, flanked on each side by two railway tracks and a footpath. Both sets of rail tracks were linked into the underground Wynyard railway station on the south (city) side of the bridge by symmetrical ramps and tunnels. The eastern-side railway tracks were intended for use by a planned rail link to the Northern Beaches; in the interim they were used to carry trams from the North Shore into a terminal within Wynyard station, and when tram services were discontinued in 1958, they were converted into extra traffic lanes. The Bradfield Highway, which is the main roadway section of the bridge and its approaches, is named in honour of Bradfield's contribution to the bridge.
Construction
Bradfield visited the site sporadically throughout the eight years it took Dorman Long to complete the bridge. Despite having originally championed a cantilever construction and the fact that his own arched general design was used neither in the tender process nor as input to the detailed design specification (and was anyway a rough copy of the Hell Gate Bridge produced by the NSW Works Department), Bradfield subsequently attempted to claim personal credit for Dorman Long's design. This led to a bitter argument, with Dorman Long maintaining that instructing other people to produce a copy of an existing design in a document not subsequently used to specify the final construction did not constitute personal design input on Bradfield's part. This friction ultimately led to a large contemporary brass plaque being bolted very tightly to the side of one of the granite columns of the bridge to make things clear.
The official ceremony to mark the turning of the first sod occurred on 28 July 1923, on the spot at Milsons Point where two workshops to assist in building the bridge were to be constructed.
An estimated 469 buildings on the north shore, both private homes and commercial operations, were demolished to allow construction to proceed, with little or no compensation being paid. Work on the bridge itself commenced with the construction of approaches and approach spans, and by September 1926 concrete piers to support the approach spans were in place on each side of the harbour.
As construction of the approaches took place, work was also started on preparing the foundations required to support the enormous weight of the arch and loadings. Concrete and granite faced abutment towers were constructed, with the angled foundations built into their sides.
Once work had progressed sufficiently on the support structures, a giant creeper crane was erected on each side of the harbour. These cranes were fitted with a cradle, and then used to hoist men and materials into position to allow for erection of the steelwork. To stabilise works while building the arches, tunnels were excavated on each shore with steel cables passed through them and then fixed to the upper sections of each half-arch to stop them collapsing as they extended outwards.
Arch construction itself began on 26 October 1928. The southern end of the bridge was worked on ahead of the northern end, to detect any errors and to help with alignment. The cranes would "creep" along the arches as they were constructed, eventually meeting up in the middle. In less than two years, on 19 August 1930, the two halves of the arch touched for the first time. Workers riveted both top and bottom sections of the arch together, and the arch became self-supporting, allowing the support cables to be removed. On 20 August 1930 the joining of the arches was celebrated by flying the flags of Australia and the United Kingdom from the jibs of the creeper cranes.
Once the arch was completed, the creeper cranes were then worked back down the arches, allowing the roadway and other parts of the bridge to be constructed from the centre out. The vertical hangers were attached to the arch, and these were then joined with horizontal crossbeams. The deck for the roadway and railway were built on top of the crossbeams, with the deck itself being completed by June 1931, and the creeper cranes were dismantled. Rails for trains and trams were laid, and road was surfaced using concrete topped with asphalt. Power and telephone lines, and water, gas, and drainage pipes were also all added to the bridge in 1931.
The pylons were built atop the abutment towers, with construction advancing rapidly from July 1931. Carpenters built wooden scaffolding, with concreters and masons then setting the masonry and pouring the concrete behind it. Gangers built the steelwork in the towers, while day labourers manually cleaned the granite with wire brushes. The last stone of the north-west pylon was set in place on 15 January 1932, and the timber towers used to support the cranes were removed.
On 19 January 1932, the first test train, a steam locomotive, safely crossed the bridge. Load testing of the bridge took place in February 1932, with the four rail tracks being loaded with as many as 96 New South Wales Government Railways steam locomotives positioned end-to-end. The bridge underwent testing for three weeks, after which it was declared safe and ready to be opened. The first trial run of an electric train over the bridge was successfully completed on both lines on 11 March 1932. On 19 March 1932, 632 people were the first fare-paying passengers to cross the bridge by rail, paying a premium of 10 s. for the privilege, but they were not the first members of the public to do so. That distinction fell to a pair of clergymen who inadvertently boarded the test train of the previous day, and were discovered too late to be ejected. The construction worksheds were demolished after the bridge was completed, and the land that they were on is now occupied by Luna Park.
Industrial safety during construction was poor by today's standards. Sixteen workers died during construction, but surprisingly only two from falling off the bridge. Several more were injured by unsafe working practices undertaken while heating and inserting the rivets, and the deafness experienced by many of the workers in later years was blamed on the project. Between 1930 and 1932, Henri Mallard produced hundreds of stills and film footage which reveal at close quarters the bravery of the workers in tough Depression-era conditions.
Interviews were conducted between 1982 and 1989 with a variety of tradesmen who worked on the building of the bridge. Among the tradesmen interviewed were drillers, riveters, concrete packers, boilermakers, riggers, ironworkers, plasterers, stonemasons, an official photographer, sleepcutters, engineers and draughtsmen.
The total financial cost of the bridge was AU£6.25 million, which was not paid off in full until 1988.
Official opening ceremony
The bridge was formally opened on Saturday, 19 March 1932. Among those who attended and gave speeches were the Governor of New South Wales, Sir Philip Game, and the Minister for Public Works, Lawrence Ennis. The Premier of New South Wales, Jack Lang, was to open the bridge by cutting a ribbon at its southern end.
However, just as Lang was about to cut the ribbon, a man in military uniform rode up on a horse, slashing the ribbon with his sword and opening the Sydney Harbour Bridge in the name of the people of New South Wales before the official ceremony began. He was promptly arrested. The ribbon was hurriedly retied, Lang performed the official opening ceremony, and Game thereafter inaugurated the name of the bridge as Sydney Harbour Bridge and the associated roadway as the Bradfield Highway. After they did so, there was a 21-gun salute and a Royal Australian Air Force flypast. The intruder was identified as Francis de Groot. He was convicted of offensive behaviour and fined £5 after a psychiatric test proved he was sane, but this verdict was reversed on appeal. De Groot then successfully sued the Commissioner of Police for wrongful arrest and was awarded an undisclosed out-of-court settlement. De Groot was a member of a right-wing paramilitary group called the New Guard, opposed to Lang's leftist policies and resentful of the fact that a member of the British royal family had not been asked to open the bridge. De Groot was not a member of the regular army, but his uniform allowed him to blend in with the real cavalry. This incident was one of several involving Lang and the New Guard during that year.
A similar ribbon-cutting ceremony on the bridge's northern side by North Sydney's mayor, Alderman Primrose, was carried out without incident. It was later discovered that Primrose was also a New Guard member but his role in and knowledge of the de Groot incident, if any, are unclear. The pair of golden scissors used in the ribbon cutting ceremonies on both sides of the bridge was also used to cut the ribbon at the dedication of the Bayonne Bridge, which had opened between Bayonne, New Jersey, and New York City the year before.
Despite the bridge opening in the midst of the Great Depression, opening celebrations were organised by the Citizens of Sydney Organising Committee, an influential body of prominent men and politicians that formed in 1931 under the chairmanship of the lord mayor to oversee the festivities. The celebrations included an array of decorated floats, a procession of passenger ships sailing below the bridge, and a Venetian Carnival. A message from a primary school in Tottenham, in rural New South Wales, arrived at the bridge on the day and was presented at the opening ceremony. It had been carried all the way from Tottenham to the bridge by relays of school children, with the final relay being run by two children from the nearby Fort Street Boys' and Girls' schools.
After the official ceremonies, the public was allowed to walk across the bridge on the deck, something that would not be repeated until the 50th anniversary celebrations. Estimates suggest that between 300,000 and one million people took part in the opening festivities, a phenomenal number given that the entire population of Sydney at the time was estimated to be 1,256,000.
There had also been numerous preparatory arrangements. On 14 March 1932, three postage stamps were issued to commemorate the imminent opening of the bridge. Several songs were composed for the occasion. In the year of the opening, there was a steep rise in babies being named Archie and Bridget in honour of the bridge. One of three microphones used at the opening ceremony was signed by the 10 local dignitaries who officiated at the event: Philip Game, John Lang, MA Davidson, Samuel Walder, D Clyne, H Primrose, Ben Howe, John Bradfield, Lawrence Ennis and Roland Kitson. It was supplied by Amalgamated Wireless Australasia, who organised the ceremony's broadcast, and was collected by Philip Geeves, the AWA announcer on the day. The microphone is now in the collection of the Powerhouse Museum.
The bridge itself was regarded as a triumph over Depression times, earning the nickname "the Iron Lung", as it kept many Depression-era workers employed.
Operations
In 2010, the average daily traffic included 204 trains, 160,435 vehicles and 1650 bicycles.
Road
From the Sydney CBD side, motor vehicle access to the bridge is via Grosvenor Street, Clarence Street, Kent Street, the Cahill Expressway, or the Western Distributor. Drivers on the northern side will find themselves on the Warringah Freeway, though it is easy to turn off the freeway to drive westwards into North Sydney or eastwards to Neutral Bay and beyond upon arrival on the northern side.
The bridge originally had only four traffic lanes, wider than today's, occupying the central space that now carries six, as photos taken soon after the opening clearly show. In 1958 tram services across the bridge were withdrawn and the tracks replaced by two extra road lanes; these lanes are now the leftmost southbound lanes on the bridge and are separated from the other six road lanes by a median strip. Lanes 7 and 8 now connect the bridge to the elevated Cahill Expressway, which carries traffic to the Eastern Distributor.
In 1988, work began to build a tunnel to complement the bridge. It was determined that the bridge could no longer support the increased traffic flow of the 1980s. The Sydney Harbour Tunnel was completed in August 1992 and carries only motor vehicles.
The Bradfield Highway is designated as a Travelling Stock Route which means that it is permissible to herd livestock across the bridge, but only between midnight and dawn, and after giving notice of intention to do so. In practice, the last time livestock crossed the bridge was in 1999 for the Gelbvieh Cattle Congress.
Tidal flow
The bridge is equipped for tidal flow operation, permitting the direction of traffic flow on the bridge to be altered to better suit the morning and evening peak hour traffic patterns.
The bridge has eight lanes, numbered one to eight from west to east. Lanes three, four and five are reversible. Lanes one and two always flow north; lanes six, seven and eight always flow south. The default allocation is four lanes each way. For the morning peak hour, the lane changes on the bridge also require changes to the Warringah Freeway, with its inner western reversible carriageway directing traffic to bridge lanes three and four southbound. Until September 1982, during the evening peak the tidal flow was set as six northbound and two southbound lanes.
The bridge has a series of overhead gantries which indicate the direction of flow for each traffic lane. A green arrow pointing down to a traffic lane means the lane is open. A flashing red "X" indicates the lane is closing, but is not yet in use for traffic travelling in the other direction. A static red "X" means the lane is in use for oncoming traffic. This arrangement was introduced in January 1986, replacing a slow operation where lane markers were manually moved to mark the centre median.
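As a rough sketch of the lane allocation and gantry signalling described above, the following Python fragment models the fixed and reversible lanes. The fixed lanes and the default four-each-way split come from the text; the exact peak-period settings used here are illustrative assumptions only.

```python
# Sketch only: lane numbers run one (west) to eight (east), per the text.
# Peak-period allocations beyond what the text states are assumed for illustration.
def lane_directions(period="default"):
    directions = {1: "N", 2: "N", 6: "S", 7: "S", 8: "S"}   # fixed lanes
    if period == "am_peak":
        # Morning peak: reversible lanes carry extra southbound traffic into the CBD
        # (the text states lanes three and four are fed southbound; lane five is assumed).
        directions.update({3: "S", 4: "S", 5: "S"})
    else:
        # Default allocation: four lanes each way.
        directions.update({3: "N", 4: "N", 5: "S"})
    return directions

# Overhead gantry indications, as described above.
GANTRY_SIGNALS = {
    "green arrow": "lane open",
    "flashing red X": "lane closing, not yet carrying oncoming traffic",
    "static red X": "lane in use by oncoming traffic",
}
```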
It is possible to see odd arrangements of flow during night periods when maintenance occurs, which may involve completely closing some lanes. Normally this is done between midnight and dawn, because of the enormous traffic demands placed on the bridge outside these hours.
When the Sydney Harbour Tunnel opened in August 1992, lane 7 became a bus lane.
Tolls
The vehicular traffic lanes on the bridge are operated as a toll road. Since January 2009, there has been a variable tolling system for all vehicles headed into the CBD (southbound). The toll paid depends on the time of day at which the vehicle passes through the toll plaza, and varies from a minimum of $2.50 to a maximum of $4. There is no toll for northbound traffic (though taxis travelling north may charge passengers the toll in anticipation of the toll the taxi must pay on the return journey). In 2017, the Bradfield Highway northern toll plaza infrastructure was removed and replaced with new overhead gantries to service all southbound traffic. Following on from this upgrade, in 2018 all southern toll plaza infrastructure was also removed. Only the Cahill Expressway toll plaza infrastructure remains. The toll was originally placed on travel across the bridge, in both directions, to recoup the cost of its construction. This was paid off in 1988, but the toll has been kept (indeed increased) to recoup the costs of the Sydney Harbour Tunnel.
Originally it cost six pence for a car or motorcycle to cross, and three pence for a horse and rider. Use of the bridge by bicycle riders (provided that they use the cycleway) and by pedestrians is free. Later governments capped the fee for motorcycles at one-quarter of the passenger-vehicle cost, but it is now again the same as the cost for a passenger vehicle, although quarterly flat-fee passes are available which are much cheaper for frequent users. Originally there were six toll booths at the southern end of the bridge; these were replaced by 16 booths in 1950. The toll was charged in both directions until 4 July 1970, when it was changed to apply only to southbound traffic.
After the decision to build the Sydney Harbour Tunnel was made in the early 1980s, the toll was increased (from 20 cents to $1, then to $1.50 in March 1989, and finally to $2 by the time the tunnel opened) to pay for its construction. The tunnel also had an initial toll of $2 southbound. After the increase to $1, a concrete barrier on the bridge separating the Bradfield Highway from the Cahill Expressway was increased in height, because of the large numbers of drivers crossing it illegally from lane 6 to 7, to avoid the toll.
The southbound toll was increased to $2.20 in July 2000 to account for the newly imposed goods and services tax (GST). The toll increased again to $3 in January 2002.
In July 2008, a new electronic tolling system called e-TAG was introduced. The Sydney Harbour Tunnel was converted to this new tolling system while the Sydney Harbour Bridge itself had several cash lanes. The electronic system as of 12 January 2009 has now replaced all booths with E-tag lanes. In January 2017 work commenced to remove the southern toll booths. In August 2020, the remaining toll booths at Milsons Point were removed. Tolls rose in October 2023 for the first time in 14 years.
Pedestrians
The pedestrian-only footway is located on the east side of the bridge. Access from the northern side involves climbing an easily spotted flight of stairs, located on the east side of the bridge at Broughton Street, Kirribilli. Pedestrian access on the southern side is more complicated, but signposts in The Rocks area now direct pedestrians to the long and sheltered flight of stairs that leads to the bridge's southern end. These stairs are located near Gloucester Street and Cumberland Street.
The bridge can also be approached from the south by accessing Cahill Walk, which runs along the Cahill Expressway. Pedestrians can access this walkway from the east end of Circular Quay by a flight of stairs or a lift. Alternatively it can be accessed from the Botanic Gardens.
Cyclists
The bike-only cycleway is located on the western side of the bridge. Access from the northern side involves carrying or pushing a bicycle up a staircase, consisting of 55 steps, located on the western side of the bridge at Burton Street, Milsons Point. A wide smooth concrete strip in the centre of the stairs permits cycles to be wheeled up and down from the bridge deck whilst the rider is dismounted. A campaign to eliminate the steps on this popular cycling route to the CBD has been running since at least 2008. On 7 December 2016 the NSW Roads Minister Duncan Gay confirmed that the northern stairway would be replaced with a $20 million ramp, removing the need for cyclists to dismount. At the same time the NSW Government announced plans to upgrade the southern ramp at a projected cost of $20 million. Both projects are expected to be completed by late 2020. Access to the cycleway on the southern side is via the northern end of the Kent Street cycleway and/or Upper Fort Street in The Rocks.
Rail
The bridge lies between Milsons Point and Wynyard railway stations, located on the north and south shores respectively, with two tracks running along the western side of the bridge. These tracks are part of the North Shore railway line.
In 1958, tram services across the bridge were withdrawn and the tracks they had used were removed and replaced by two extra road lanes; these lanes are now the leftmost southbound lanes on the bridge and are still clearly distinguishable from the other six road lanes. The original ramp that took the trams into a terminus at the underground Wynyard railway station is still visible at the southern end of the main walkway under lanes 7 and 8, although around 1964, the former tram tunnels and station were converted for use as a carpark for the Menzies Hotel and as public parking. One of the tunnels was converted for use as a storage facility after reportedly being used by the NSW police as a pistol firing range.
Maintenance
The Sydney Harbour Bridge requires constant inspections and other maintenance work to keep it safe for the public, and to protect from corrosion. Among the trades employed on the bridge are painters, ironworkers, boilermakers, fitters, electricians, plasterers, carpenters, plumbers, and riggers.
The most noticeable maintenance work on the bridge involves painting. The steelwork that needs to be painted covers an area equivalent to about sixty football fields, and each coat requires a large quantity of paint. A special fast-drying paint is used, so that any paint drops have dried before reaching the vehicles or bridge surface. One notable identity from previous bridge-painting crews is Australian comedian and actor Paul Hogan, who worked as a bridge rigger before rising to media fame in the 1970s.
In 2003 the Roads & Traffic Authority began completely repainting the southern approach spans of the bridge. This involved removing the old lead-based paint and repainting the steel below the deck. Workers operated from self-contained platforms below the deck, each with an air extraction system to filter airborne particles. Abrasive blasting was used, with the lead waste collected and safely removed from the site for disposal.
Between December 2006 and March 2010 the bridge was subject to works designed to ensure its longevity. The work included some strengthening.
Since 2013, two grit-blasting robots specially developed with the University of Technology, Sydney, have been employed to help with the paint-stripping operation on the bridge. The robots, nicknamed Rosie and Sandy, are intended to reduce workers' exposure to dangerous lead paint and asbestos, and to the blasting equipment, which has enough force to cut through clothes and skin.
Tourism
South-east pylon
Even during its construction, the bridge was such a prominent feature of Sydney that it would attract tourist interest. One of the ongoing tourist attractions of the bridge has been the south-east pylon, which is accessed via the pedestrian walkway across the bridge, and then a climb to the top of the pylon of about 200 steps.
Not long after the bridge's opening, commencing in 1934, Archer Whitford first converted this pylon into a tourist destination. He installed a number of attractions, including a café, a camera obscura, an Aboriginal museum, a "Mother's Nook" where visitors could write letters, and a "pashometer". The main attraction was the viewing platform, where "charming attendants" assisted visitors to use the telescopes available, and a copper cladding (still present) over the granite guard rails identified the suburbs and landmarks of Sydney at the time.
The outbreak of World War II in 1939 saw tourist activities on the bridge cease, as the military took over the four pylons and modified them to include parapets and anti-aircraft guns.
In 1948, Yvonne Rentoul opened the "All Australian Exhibition" in the pylon. This contained dioramas, and displays about Australian perspectives on subjects such as farming, sport, transport, mining, and the armed forces. An orientation table was installed at the viewing platform, along with a wall guide and binoculars. The owner kept several white cats in a rooftop cattery, which also served as an attraction, and there was a souvenir shop and postal outlet. Rentoul's lease expired in 1971, and the pylon and its lookout remained closed to the public for over a decade.
The pylon was reopened in 1982, with a new exhibition celebrating the bridge's 50th anniversary. In 1987 a "Bicentennial Exhibition" was opened to mark the 200th anniversary of European settlement in Australia in 1988.
The pylon was closed from April to November 2000 for the Roads & Traffic Authority and BridgeClimb to create a new exhibition called "Proud Arch". The exhibition focussed on Bradfield, and included a glass direction finder on the observation level, and various important heritage items.
The pylon again closed for four weeks in 2003 for the installation of an exhibit called "Dangerous Works", highlighting the dangerous conditions experienced by the original construction workers on the bridge, and two stained glass feature windows in memory of the workers.
BridgeClimb
In the 1950s and 1960s, there were occasional newspaper reports of climbers who had made illegal arch traversals of the bridge by night. In 1973 Philippe Petit walked across a wire between the two pylons at the southern end of the Sydney Harbour Bridge. Since 1998, BridgeClimb has made it possible for tourists to legally climb the southern half of the bridge. Tours run throughout the day, from dawn to night, and are only cancelled for electrical storms or high wind.
Groups of climbers are provided with protective clothing appropriate to the prevailing weather conditions, and are given an orientation briefing before climbing. During the climb, attendees are secured to the bridge by a wire lifeline. Each climb begins on the eastern side of the bridge and ascends to the top. At the summit, the group crosses to the western side of the arch for the descent. Each climb takes three-and-a-half-hours, including the preparations.
In December 2006, BridgeClimb launched an alternative to climbing the upper arches of the bridge. The Discovery Climb allows climbers to ascend the lower chord of the bridge and view its internal structure. From the apex of the lower chord, climbers ascend a staircase to a platform at the summit.
Celebrations and protests
Since the opening, the bridge has been the focal point of much tourism, national pride and even protests.
50th Anniversary celebrations
In 1982, the 50th anniversary of the opening of the bridge was celebrated. For the first time since its opening in 1932, the bridge was closed to most vehicles with the exception of vintage vehicles, and pedestrians were allowed full access for the day. The celebrations were attended by Edward Judge, who represented Dorman Long.
Bicentennial Australia Day celebrations
Australia's bicentennial celebrations on 26 January 1988 attracted large crowds in the bridge's vicinity as merrymakers flocked to the foreshores to view the events on the harbour. The highlight was the biggest parade of sail ever held in Sydney, square-riggers from all over the world, surrounded by hundreds of smaller craft of every description, passing majestically under the Sydney Harbour Bridge. The day's festivities culminated in a fireworks display in which the bridge was the focal point of the finale, with fireworks streaming from the arch and roadway. This was to become the pattern for later firework displays.
Sydney New Year's Eve
The Harbour Bridge has been an integral part of the Sydney New Year's Eve celebrations, generally being used in spectacular ways during the fireworks displays at 9pm and midnight. In recent times, the bridge has included a ropelight display on a framework in the centre of the eastern arch, which is used to complement the fireworks. The scaffolding and framework were clearly visible for some weeks before the event, revealing the outline of the design.
During the millennium celebrations in 2000, the Sydney Harbour Bridge was lit up with the word "Eternity", as a tribute to the legacy of Arthur Stace, a Sydney artist who for many years inscribed the word in chalk on pavements in copperplate writing, despite the fact that he was illiterate.
The effects have been as follows:
NYE1997: Smiley face
NYE1999: The word "Eternity" in copperplate writing
NYE2000: Rainbow Serpent and Federation Star
NYE2001: Uluru, the Southern Cross and the Dove of Peace
NYE2002: Dove of Peace and the word "PEACE"
NYE2003: Light show
NYE2004: "Fanfare"
NYE2005: Three concentric hearts
NYE2006: Coathanger and a diamond
NYE2007: Mandala
NYE2008: The Sun
NYE2009: Taijitu Symbol, a Blue moon and a ring of fire
NYE2010: Handprint, "X" Mark and a Spot
NYE2011: Thought Bubble, Sun and Endless Rainbow
NYE2012: Butterfly and a Lip
NYE2013: Eye
NYE2014: Light bulb
NYE2015 onwards: Light shows
The numbers for the New Year's Eve countdown also appear on the Bridge pylons.
Walk for Reconciliation
In May 2000, the bridge was closed to vehicular access for a day to allow a special reconciliation march, the "Walk for Reconciliation", to take place. This was part of a response to an Aboriginal Stolen Generations inquiry, which found widespread suffering had taken place amongst Australian Aboriginal children forcibly placed into the care of white parents in a little-publicised state government scheme. Between 200,000 and 300,000 people were estimated to have walked the bridge in a symbolic gesture of crossing a divide.
Sydney 2000 Olympics
During the Sydney 2000 Olympics in September and October 2000, the bridge was adorned with the Olympic Rings. It was included in the Olympic torch's route to the Olympic stadium. The men's and women's Olympic marathon events likewise included the bridge as part of their route to the Olympic stadium. A fireworks display at the end of the closing ceremony ended at the bridge. The east-facing side of the bridge has been used several times since as a framework from which to hang static fireworks, especially during the elaborate New Year's Eve displays.
Formula One promotion
In 2005 Mark Webber drove a Williams-BMW Formula One car across the bridge.
75th anniversary
In 2007, the 75th anniversary of its opening was commemorated with an exhibition at the Museum of Sydney, called "Bridging Sydney". An initiative of the Historic Houses Trust, the exhibition featured dramatic photographs and paintings with rare and previously unseen alternative bridge and tunnel proposals, plans and sketches.
On 18 March 2007, the 75th anniversary of the Sydney Harbour Bridge was celebrated. The occasion was marked with a ribbon-cutting ceremony by the Governor, Marie Bashir, and the Premier of New South Wales, Morris Iemma. The bridge was subsequently open to the public to walk southward from Milsons Point or North Sydney. Several major roads, mainly in the CBD, were closed for the day. An Aboriginal smoking ceremony was held at 7:00pm.
Approximately 250,000 people (50,000 more than were registered) took part in the event. Bright green souvenir caps were distributed to walkers. A series of speakers placed at intervals along the bridge formed a sound installation. Each group of speakers broadcast sound and music from a particular era (e.g. King Edward VIII's abdication speech; Gough Whitlam's speech at Parliament House in 1975), the overall effect being that the soundscape would "flow" through history as walkers proceeded along the bridge. A light show began after sunset and continued late into the night, the bridge being bathed in constantly changing, multi-coloured lighting designed to highlight structural features of the bridge. In the evening the daytime souvenir caps were replaced by orange caps with a small, bright LED attached. The bridge was closed to walkers at about 8:30pm.
Breakfast on the Bridge
On 25 October 2009 turf was laid across the eight lanes of bitumen, and 6,000 people celebrated a picnic on the bridge accompanied by live music. The event was repeated in 2010. Although originally scheduled again in 2011, this event was moved to Bondi Beach due to traffic concerns about the prolonged closing of the bridge.
80th anniversary
On 19 March 2012, the 80th anniversary of the Sydney Harbour Bridge was celebrated with a picnic dedicated to the stories of people with personal connections to the bridge. In addition, Google dedicated its Google Doodle on the 19th to the event.
The proposal to upgrade the bridge tolling equipment was announced by the NSW Roads Minister Duncan Gay.
Protests
Various protests have caused disruptions on the Sydney Harbour Bridge. In 2019, Greenpeace activists scaled the bridge and they were arrested soon after. In 2021, a number of truck and bus drivers clogged the bridge for a number of hours; they were protesting the COVID-19 lockdown.
Violet Coco blocked one lane of traffic in 2022 as part of a climate change protest.
Flags
Historically the flags of Australia and New South Wales had been flown above the bridge with the Aboriginal flag flown for nineteen days a year. In February 2022, Premier Dominic Perrottet announced that the Australian, New South Wales and the Aboriginal flags were to permanently fly with a third pole erected. In July 2022, it was announced that the Aboriginal flag would replace the New South Wales flag, which was given a prominent location within the Macquarie Street East redevelopment, near the Royal Mint and Hyde Park Barracks.
Quotations
Heritage listing
At the time of construction and until recently, the bridge was the longest single span steel arch bridge in the world. The bridge, its pylons and its approaches are all important elements in the townscape of areas both near and distant from it. The curved northern approach gives a grand sweeping entrance to the bridge with continually changing views of the bridge and harbour. The bridge has been an important factor in the pattern of growth of metropolitan Sydney, particularly in residential development in post-World War II years. In the 1960s and 1970s the Central Business District extended to the northern side of the bridge at North Sydney, due in part to the easy access provided by the bridge and also to the increasing traffic problems associated with it.
Sydney Harbour Bridge was listed on the New South Wales State Heritage Register on 25 June 1999 having satisfied the following criteria.
The place is important in demonstrating the course, or pattern, of cultural or natural history in New South Wales.
The bridge is one of the most remarkable feats of bridge construction. At the time of construction and until recently it was the longest single span steel arch bridge in the world and is still in a general sense the largest.
Bradfield Park North (Sandstone Walls)
"The archaeological remains are demonstrative of an earlier phase of urban development within Milsons Point and the wider North Sydney precinct. The walls are physical evidence that a number of 19th century residences existed on the site which were resumed and demolished as part of the Sydney Harbour Bridge construction".
The place is important in demonstrating aesthetic characteristics and/or a high degree of creative or technical achievement in New South Wales.
The bridge, its pylons and its approaches are all important elements in the townscape of areas both near and distant from it. The curved northern approach gives a grand sweeping entrance to the bridge with continually changing views of the bridge and harbour.
The place has a strong or special association with a particular community or cultural group in New South Wales for social, cultural or spiritual reasons.
The bridge has been an important factor in the pattern of growth of metropolitan Sydney, particularly in residential development in post-World War II years. In the 1960s and 1970s the Central Business District extended to the northern side of the bridge at North Sydney, due in part to the easy access provided by the bridge and also to the increasing traffic problems associated with it.
The place has potential to yield information that will contribute to an understanding of the cultural or natural history of New South Wales.
Bradfield Park North (Sandstone Walls)
"The archaeological remains have some potential to yield information about the previous residential and commercial occupation of Milsons Point prior to the construction of the Sydney Harbour Bridge transport link".
Engineering heritage award
The bridge was listed as a National Engineering Landmark by Engineers Australia in 1988, as part of its Engineering Heritage Recognition Program.
| Technology | Bridges | null |
28284 | https://en.wikipedia.org/wiki/Switch | Switch | In electrical engineering, a switch is an electrical component that can disconnect or connect the conducting path in an electrical circuit, interrupting the electric current or diverting it from one conductor to another. The most common type of switch is an electromechanical device consisting of one or more sets of movable electrical contacts connected to external circuits. When a pair of contacts is touching current can pass between them, while when the contacts are separated no current can flow.
Switches are made in many different configurations; they may have multiple sets of contacts controlled by the same knob or actuator, and the contacts may operate simultaneously, sequentially, or alternately. A switch may be operated manually, for example, a light switch or a keyboard button, or may function as a sensing element to sense the position of a machine part, liquid level, pressure, or temperature, such as a thermostat. Many specialized forms exist, such as the toggle switch, rotary switch, mercury switch, push-button switch, reversing switch, relay, and circuit breaker. A common use is control of lighting, where multiple switches may be wired into one circuit to allow convenient control of light fixtures. Switches in high-powered circuits must have special construction to prevent destructive arcing when they are opened.
Description
The most familiar form of switch is a manually operated electromechanical device with one or more sets of electrical contacts, which are connected to external circuits. Each set of contacts can be in one of two states: either "closed", meaning the contacts are touching and electricity can flow between them, or "open", meaning the contacts are separated and the switch is nonconducting. The mechanism actuating the transition between these two states is usually either an "alternate action" type (flip the switch for continuous "on" or "off") or a "momentary" type (push for "on" and release for "off"), although other types of action exist.
A switch may be directly manipulated by a human as a control signal to a system, such as a computer keyboard button, or to control power flow in a circuit, such as a light switch. Automatically operated switches can be used to control the motions of machines, for example, to indicate that a garage door has reached its full open position or that a machine tool is in a position to accept another workpiece. Switches may be operated by process variables such as pressure, temperature, flow, current, voltage, and force, acting as sensors in a process and used to automatically control a system. For example, a thermostat is a temperature-operated switch used to control a heating process. A switch that is operated by another electrical circuit is called a relay. Large switches may be remotely operated by a motor drive mechanism. Some switches are used to isolate electric power from a system, providing a visible point of isolation that can be padlocked if necessary to prevent accidental operation of a machine during maintenance, or to prevent electric shock.
An ideal switch would have no voltage drop when closed, and would have no limits on voltage or current rating. It would have zero rise time and fall time during state changes, and would change state without "bouncing" between on and off positions.
Practical switches fall short of this ideal; as the result of roughness and oxide films, they exhibit contact resistance, limits on the current and voltage they can handle, finite switching time, etc. The ideal switch is often used in circuit analysis as it greatly simplifies the system of equations to be solved, but this can lead to a less accurate solution. Theoretical treatment of the effects of non-ideal properties is required in the design of large networks of switches, as for example used in telephone exchanges.
Contacts
In the simplest case, a switch has two conductive pieces, often metal, called contacts, connected to an external circuit, that touch to complete (make) the circuit, and separate to open (break) the circuit. The contact material is chosen for its resistance to corrosion, because most metals form insulating oxides that would prevent the switch from working. Contact materials are also chosen on the basis of electrical conductivity, hardness (resistance to abrasive wear), mechanical strength, low cost and low toxicity. The formation of oxide layers at contact surface, as well as surface roughness and contact pressure, determine the contact resistance, and wetting current of a mechanical switch. Sometimes the contacts are plated with noble metals, for their excellent conductivity and resistance to corrosion. They may be designed to wipe against each other to clean off any contamination. Nonmetallic conductors, such as conductive plastic, are sometimes used. To prevent the formation of insulating oxides, a minimum wetting current may be specified for a given switch design.
Contact terminology
In electronics, switches are classified according to the arrangement of their contacts. A pair of contacts is said to be "closed" when current can flow from one to the other. When the contacts are separated by an insulating air gap, they are said to be "open", and no current can flow between them at normal voltages. The terms "make" for closure of contacts and "break" for opening of contacts are also widely used.
The terms pole and throw are also used to describe switch contact variations. The number of "poles" is the number of electrically separate switches which are controlled by a single physical actuator. For example, a "2-pole" switch has two separate, parallel sets of contacts that open and close in unison via the same mechanism. The number of "throws" is the number of separate wiring path choices other than "open" that the switch can adopt for each pole. A single-throw switch has one pair of contacts that can either be closed or open. A double-throw switch has a contact that can be connected to either of two other contacts, a triple-throw has a contact which can be connected to one of three other contacts, etc.
In a switch where the contacts remain in one state unless actuated, such as a push-button switch, the contacts can either be normally open (abbreviated "n.o." or "no") until closed by operation of the switch, or normally closed ("n.c." or "nc") and opened by the switch action. A switch with both types of contact is called a changeover switch or double-throw switch. These may be "make-before-break" ("MBB" or shorting) which momentarily connects both circuits, or may be "break-before-make" ("BBM" or non-shorting) which interrupts one circuit before closing the other.
These terms have given rise to abbreviations for the types of switch which are used in the electronics industry such as "single-pole, single-throw" (SPST) (the simplest type, "on or off") or "single-pole, double-throw" (SPDT), connecting either of two terminals to the common terminal. In electrical power wiring (i.e., house and building wiring by electricians), names generally involve the suffix "-way"; however, these terms differ between British English and American English (i.e., the terms two way and three way are used with different meanings).
Switches with larger numbers of poles or throws can be described by replacing the "S" or "D" with a number (e.g. 3PST, SP4T, etc.) or in some cases the letter "T" (for "triple") or "Q" (for "quadruple"). In the rest of this article the terms SPST, SPDT and intermediate will be used to avoid the ambiguity.
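As a small illustration of this pole/throw vocabulary, the sketch below maps pole and throw counts onto the usual abbreviations. The class and function names are my own, not taken from any standard library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContactArrangement:
    poles: int    # electrically separate switches moved by one actuator
    throws: int   # selectable paths per pole, other than "open"

    def label(self) -> str:
        # "S" for single, "D" for double; larger counts are written as digits (e.g. 3PST, SP4T).
        code = {1: "S", 2: "D"}
        p = code.get(self.poles, str(self.poles))
        t = code.get(self.throws, str(self.throws))
        return f"{p}P{t}T"

print(ContactArrangement(1, 1).label())  # SPST - simple on/off
print(ContactArrangement(1, 2).label())  # SPDT - common terminal to either of two others
print(ContactArrangement(2, 2).label())  # DPDT - two SPDT sections on one actuator
print(ContactArrangement(1, 4).label())  # SP4T
```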
Contact bounce
Bounce
Contact bounce (also called chatter) is a common problem with mechanical switches, relays and battery contacts, which arises as the result of electrical contact resistance (ECR) phenomena at interfaces. Switch and relay contacts are usually made of springy metals. When the contacts strike together, their momentum and elasticity act together to cause them to bounce apart one or more times before making steady contact. The result is a rapidly pulsed electric current instead of a clean transition from zero to full current. The effect is usually unimportant in power circuits, but causes problems in some analogue and logic circuits that respond fast enough to misinterpret the on‑off pulses as a data stream. In the design of micro-contacts, controlling surface structure (surface roughness) and minimizing the formation of passivated layers on metallic surfaces are instrumental in inhibiting chatter.
In the Hammond organ, multiple wires are pressed together under the piano keys of the manuals. Their bouncing and non-synchronous closing of the switches is known as Hammond Click and compositions exist that use and emphasize this feature. Some electronic organs have a switchable replica of this sound effect.
Debouncing
The effects of contact bounce can be eliminated by:
Use of mercury-wetted contacts, but these are now infrequently used because of the hazards of mercury.
Alternatively, contact circuit voltages can be low-pass filtered to reduce or eliminate multiple pulses from appearing.
In digital systems, multiple samples of the contact state can be taken at a low rate and examined for a steady sequence, so that contacts can settle before the contact level is considered reliable and acted upon; a minimal sketch of this approach follows the list below.
Bounce in SPDT ("single-pole, double-throw") switch contacts signals can be filtered out using an SR flip-flop (latch) or Schmitt trigger.
All of these methods are referred to as 'debouncing'.
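As referenced above, here is a minimal sketch of the digital sampling approach: a new contact state is accepted only after it has been seen on several consecutive low-rate samples. The class name, the sample count and the timing figure in the comment are illustrative assumptions rather than values from the article.

```python
class Debouncer:
    """Counter-based software debouncing: accept a new logical state only after
    it has persisted for a number of consecutive samples."""

    def __init__(self, required_samples=5):
        self.required = required_samples   # e.g. 5 samples at 1 kHz gives ~5 ms settling (assumed)
        self.stable_state = False
        self._candidate = False
        self._count = 0

    def sample(self, raw_reading: bool) -> bool:
        if raw_reading == self.stable_state:
            self._count = 0                       # reading agrees with the accepted state
        elif raw_reading == self._candidate:
            self._count += 1                      # same new reading seen again
            if self._count >= self.required:
                self.stable_state = raw_reading   # new state has persisted long enough
                self._count = 0
        else:
            self._candidate = raw_reading         # start counting a possible new state
            self._count = 1
        return self.stable_state
```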
Arcs and quenching
When the power being switched is sufficiently large, the electron flow across opening switch contacts is sufficient to ionize the air molecules across the tiny gap between the contacts as the switch is opened, forming a gas plasma, also known as an electric arc. The plasma is of low resistance and is able to sustain power flow, even with the separation distance between the switch contacts steadily increasing. The plasma is also very hot and is capable of eroding the metal surfaces of the switch contacts (the same is true for vacuum switches). Electric current arcing causes significant degradation of the contacts and also significant electromagnetic interference (EMI), requiring the use of arc suppression methods.
Where the voltage is sufficiently high, an arc can also form as the switch is closed and the contacts approach. If the voltage potential is sufficient to exceed the breakdown voltage of the air separating the contacts, an arc forms which is sustained until the switch closes completely and the switch surfaces make contact.
In either case, the standard method for minimizing arc formation and preventing contact damage is to use a fast-moving switch mechanism, typically using a spring-operated tipping-point mechanism to assure quick motion of switch contacts, regardless of the speed at which the switch control is operated by the user. Movement of the switch control lever applies tension to a spring until a tipping point is reached, and the contacts suddenly snap open or closed as the spring tension is released.
As the power being switched increases, other methods are used to minimize or prevent arc formation. A plasma is hot and will rise due to convection air currents. The arc can be quenched with a series of non-conductive blades spanning the distance between switch contacts, and as the arc rises, its length increases as it forms ridges rising into the spaces between the blades, until the arc is too long to stay sustained and is extinguished. A puffer may be used to blow a sudden high velocity burst of gas across the switch contacts, which rapidly extends the length of the arc to extinguish it quickly.
Extremely large switches often have switch contacts surrounded by something other than air to more rapidly extinguish the arc. For example, the switch contacts may operate in a vacuum, immersed in mineral oil, or in sulfur hexafluoride.
In AC power service, the current periodically passes through zero; this effect makes it harder to sustain an arc on opening. Manufacturers may rate switches with lower voltage or current rating when used in DC circuits.
Power switching
When a switch is designed to switch significant power, the transitional state of the switch as well as the ability to withstand continuous operating currents must be considered. When a switch is in the on state, its resistance is near zero and very little power is dropped in the contacts; when a switch is in the off state, its resistance is extremely high and even less power is dropped in the contacts. However, when the switch is flicked, the resistance must pass through a state where a quarter of the load's rated power (or worse if the load is not purely resistive) is briefly dropped in the switch.
For this reason, power switches intended to interrupt a load current have spring mechanisms to make sure the transition between on and off is as short as possible regardless of the speed at which the user moves the rocker.
Power switches usually come in two types. A momentary on‑off switch (such as on a laser pointer) usually takes the form of a button and only closes the circuit when the button is depressed. A regular on‑off switch (such as on a flashlight) has a constant on-off feature. Dual-action switches incorporate both of these features.
Inductive loads
When a strongly inductive load such as an electric motor is switched off, the current cannot drop instantaneously to zero; a spark will jump across the opening contacts. Switches for inductive loads must be rated to handle these cases. The spark will cause electromagnetic interference if not suppressed; a snubber network of a resistor and capacitor in series will quell the spark.
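As a rough, illustrative calculation (component values assumed, not from the article), the energy stored in the load's inductance, which the arc or a snubber must absorb when the contacts open, is one half L I squared:

```python
# Illustrative only: energy stored in an inductive load at the moment the switch opens.
L = 10e-3      # winding inductance in henries (assumed)
I = 5.0        # load current in amperes when the contacts open (assumed)
energy_joules = 0.5 * L * I ** 2
print(f"Energy to be absorbed by the arc or snubber: {energy_joules:.3f} J")   # 0.125 J
```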
Incandescent loads
When turned on, an incandescent lamp draws a large inrush current of about ten times the steady-state current; as the filament heats up, its resistance rises and the current decreases to a steady-state value. A switch designed for an incandescent lamp load can withstand this inrush current.
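A rough illustration of the inrush figure, using an assumed 100 W lamp on a 230 V supply (the values are examples, not from the article):

```python
# Illustrative only: steady-state versus cold-filament inrush current.
V = 230.0                  # supply voltage (assumed)
P_steady = 100.0           # lamp rated power in watts (assumed)
I_steady = P_steady / V    # about 0.43 A with a hot filament
I_inrush = 10 * I_steady   # about 4.3 A cold, using the roughly tenfold figure above
print(f"steady {I_steady:.2f} A, inrush roughly {I_inrush:.1f} A")
```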
Wetting current
Wetting current is the minimum current that must flow through a mechanical switch while it is operated in order to break through any film of oxidation that may have been deposited on the switch contacts. Such oxide films often form in areas of high humidity. Providing a sufficient wetting current is a crucial step in designing systems that use delicate switches with small contact pressure as sensor inputs; failing to do so might result in switches remaining electrically "open" due to contact oxidation.
Actuator
The moving part that applies the operating force to the contacts is called the actuator, and may be a toggle or dolly, a rocker, a push-button or any type of mechanical linkage.
Biased switches
A switch normally maintains its set position once operated. A biased switch contains a mechanism that springs it into another position when released by an operator. The momentary push-button switch is a type of biased switch. The most common type is a "push-to-make" (or normally-open or NO) switch, which makes contact when the button is pressed and breaks when the button is released. Each key of a computer keyboard, for example, is a normally-open "push-to-make" switch. A "push-to-break" (or normally-closed or NC) switch, on the other hand, breaks contact when the button is pressed and makes contact when it is released. An example of a push-to-break switch is a button used to release a door held closed by an electromagnet. The interior lamp of a household refrigerator is controlled by a switch that is held open when the door is closed.
Rotary switch
A rotary switch operates with a twisting motion of the operating handle with at least two positions. One or more positions of the switch may be momentary (biased with a spring), requiring the operator to hold the switch in the position. Other positions may have a detent to hold the position when released. A rotary switch may have multiple levels or "decks" in order to allow it to control multiple circuits.
One form of rotary switch consists of a spindle or "rotor" that has a contact arm or "spoke" which projects from its surface like a cam. It has an array of terminals, arranged in a circle around the rotor, each of which serves as a contact for the "spoke", through which any one of a number of different electrical circuits can be connected to the rotor. The switch is layered to allow the use of multiple poles; each layer is equivalent to one pole. Usually such a switch has a detent mechanism so it "clicks" from one active position to another rather than stalling in an intermediate position. Thus a rotary switch provides greater pole and throw capabilities than simpler switches do.
Other types use a cam mechanism to operate multiple independent sets of contacts.
Rotary switches were used as channel selectors on television receivers until the early 1970s, as range selectors on electrical metering equipment, as band selectors on multi-band radios and other similar purposes. In industry, rotary switches are used for control of measuring instruments, switchgear, or in control circuits. For example, a radio controlled overhead crane may have a large multi-circuit rotary switch to transfer hard-wired control signals from the local manual controls in the cab to the outputs of the remote control receiver.
Toggle switch
A toggle switch or tumbler switch is a class of electrical switches that are manually actuated by a mechanical lever, handle, or rocking mechanism.
Toggle switches are available in many different styles and sizes, and are used in numerous applications. Many are designed to provide the simultaneous actuation of multiple sets of electrical contacts, or the control of large amounts of electric current or mains voltages.
The word "toggle" is a reference to a kind of mechanism or joint consisting of two arms, which are almost in line with each other, connected with an elbow-like pivot. However, the phrase "toggle switch" is applied to a switch with a short handle and a positive snap-action, whether it actually contains a toggle mechanism or not. Similarly, a switch where a definitive click is heard, is called a "positive on-off switch". A very common use of this type of switch is to switch lights or other electrical equipment on or off. Multiple toggle switches may be mechanically interlocked to prevent forbidden combinations.
In some contexts, particularly computing, a toggle switch, or the action of toggling, is understood in the different sense of a mechanical or software switch that alternates between two states each time it is activated, regardless of mechanical construction. For example, the caps lock key on a computer causes all letters to be generated in capitals after it is pressed once; pressing it again reverts to lower-case letters.
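A minimal sketch of this software sense of toggling (the variable name is illustrative):

```python
# Each "key press" inverts a two-state value, regardless of any mechanical detail.
caps_lock = False
for press in range(3):
    caps_lock = not caps_lock
    print("capitals" if caps_lock else "lower case")
# prints: capitals, lower case, capitals
```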
Special types
Switches can be designed to respond to any type of mechanical stimulus: for example, vibration (the trembler switch), tilt, air pressure, fluid level (a float switch), the turning of a key (key switch), linear or rotary movement (a limit switch or microswitch), or presence of a magnetic field (the reed switch). Many switches are operated automatically by changes in some environmental condition or by motion of machinery. A limit switch is used, for example, in machine tools to interlock operation with the proper position of tools. In heating or cooling systems a sail switch ensures that air flow is adequate in a duct. Pressure switches respond to fluid pressure.
Mercury tilt switch
The mercury switch consists of a drop of mercury inside a glass bulb with two or more contacts. The two contacts pass through the glass, and are connected by the mercury when the bulb is tilted to make the mercury roll on to them.
This type of switch performs much better than the ball tilt switch: the liquid metal connection is unaffected by dirt, debris and oxidation; it wets the contacts, ensuring a very low-resistance, bounce-free connection; and movement and vibration do not produce a poor contact. These types can be used for precision work.
It can also be used where arcing is dangerous (such as in the presence of explosive vapour) as the entire unit is sealed.
Knife switch
Knife switches consist of a flat metal blade, hinged at one end, with an insulating handle for operation, and a fixed contact. When the switch is closed, current flows through the hinged pivot and blade and through the fixed contact. Such switches are usually not enclosed. The knife and contacts are typically formed of copper, steel, or brass, depending on the application. Fixed contacts may be backed up with a spring. Several parallel blades can be operated at the same time by one handle. The parts may be mounted on an insulating base with terminals for wiring, or may be directly bolted to an insulated switch board in a large assembly. Since the electrical contacts are exposed, the switch is used only where people cannot accidentally come in contact with the switch or where the voltage is so low as to not present a hazard.
Knife switches are made in many sizes from miniature switches to large devices used to carry thousands of amperes. In electrical transmission and distribution, gang-operated switches are used in circuits up to the highest voltages.
The disadvantages of the knife switch are the slow opening speed and the proximity of the operator to exposed live parts. Metal-enclosed safety disconnect switches are used for isolation of circuits in industrial power distribution. Sometimes spring-loaded auxiliary blades are fitted which momentarily carry the full current during opening, then quickly part to rapidly extinguish the arc.
Reversing switch
A DPDT switch has six connections, but since polarity reversal is a very common usage of DPDT switches, some variations of the DPDT switch are internally wired specifically for polarity reversal. These crossover switches only have four terminals rather than six. Two of the terminals are inputs and two are outputs. When connected to a battery or other DC source, the 4-way switch selects from either normal or reversed polarity. Such switches can also be used as intermediate switches in a multiway switching system for control of lamps by more than two switches.
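A minimal sketch of the four-terminal crossover behaviour (the terminal names and voltages are illustrative assumptions):

```python
# Model of a DPDT switch internally wired for polarity reversal: two inputs, two outputs.
def reversing_switch(position, v_plus=12.0, v_minus=0.0):
    if position == "normal":
        return {"out_a": v_plus, "out_b": v_minus}
    if position == "reverse":
        return {"out_a": v_minus, "out_b": v_plus}   # outputs swapped: polarity reversed
    raise ValueError("position must be 'normal' or 'reverse'")

print(reversing_switch("normal"))    # {'out_a': 12.0, 'out_b': 0.0}
print(reversing_switch("reverse"))   # {'out_a': 0.0, 'out_b': 12.0}
```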
Light switches
In building wiring, light switches are installed at convenient locations to control lighting and occasionally other circuits. By use of multiple-pole switches, multiway switching control of a lamp can be obtained from two or more places, such as the ends of a corridor or stairwell. A wireless light switch allows remote control of lamps for convenience; some lamps include a touch switch which electronically controls the lamp if touched anywhere. In public buildings several types of vandal resistant switches are used to prevent unauthorized use.
Slide switches
Slide switches are mechanical switches using a slider that moves (slides) from the open (off) position to the closed (on) position.
Electronic switches
The term switch has since spread to a variety of solid state electronics that perform a switching function, but which are controlled electronically by active devices rather than purely mechanically. These are categorized in the article electronic switch. Electromechanical switches (such as the traditional relay, electromechanical crossbar, and Strowger switch) bridge the categorization.
Other switches
Centrifugal switch
Company switch
Crossbar switch
Dead man's switch
Fireman's switch
Hall-effect switch
Inertial switch
Isolator switch
Key switch
Kill switch
Latching switch
Light switch
Load control switch
Membrane switch
MEMS switch
Optical switch
Piezo switch
Pull switch
Push switch
Sense switch
Slotted optical switch
Stepping switch
Strowger switch
Thermal switch
Time switch
Touch switch
Transfer switch
Zero speed switch
| Technology | Components | null |
28299 | https://en.wikipedia.org/wiki/Steradian | Steradian | The steradian (symbol: sr) or square radian is the unit of solid angle in the International System of Units (SI). It is used in three dimensional geometry, and is analogous to the radian, which quantifies planar angles. A solid angle in the form of a right circular cone can be projected onto a sphere, defining a spherical cap where the cone intersects the sphere. The magnitude of the solid angle expressed in steradians is defined as the quotient of the surface area of the spherical cap and the square of the sphere's radius. This is analogous to the way a plane angle projected onto a circle defines a circular arc on the circumference, whose length is proportional to the angle. Steradians can be used to measure a solid angle of any shape. The solid angle subtended is the same as that of a cone with the same projected area. A solid angle of one steradian subtends a cone aperture of approximately 1.144 radians or 65.54 degrees.
In the SI, solid angle is considered to be a dimensionless quantity, the ratio of the area projected onto a surrounding sphere and the square of the sphere's radius. This is the number of square radians in the solid angle. This means that the SI steradian is the number of square radians in a solid angle equal to one square radian, which of course is the number one. It is useful to distinguish between dimensionless quantities of a different kind, such as the radian (in the SI, a ratio of quantities of dimension length), so the symbol sr is used. For example, radiant intensity can be measured in watts per steradian (W⋅sr−1). The steradian was formerly an SI supplementary unit, but this category was abolished in 1995 and the steradian is now considered an SI derived unit.
The name steradian is derived from the Greek stereos 'solid' + radian.
Definition
A steradian can be defined as the solid angle subtended at the centre of a unit sphere by a unit area (of any shape) on its surface. For a general sphere of radius r, any portion of its surface with area A = r² subtends one steradian at its centre.
A solid angle Ω in the form of a circular cone is related to the area A it cuts out of a sphere:
Ω = (A / r²) sr = (2πh / r) sr
where
Ω is the solid angle,
A is the surface area of the spherical cap, A = 2πrh,
r is the radius of the sphere,
h is the height of the cap, and
sr is the unit, steradian, sr = rad².
Because the surface area A of a sphere is 4πr², the definition implies that a sphere subtends 4π steradians (≈ 12.56637 sr) at its centre, or that a steradian subtends 1/(4π) ≈ 0.07958 (about 7.96%) of a sphere. By the same argument, the maximum solid angle that can be subtended at any point is 4π sr.
Other properties
The area of a spherical cap is A = 2πrh, where h is the "height" of the cap. If A = r², then h/r = 1/(2π). From this, one can compute the cone aperture 2θ (a plane angle) of the cross-section of a simple spherical cone whose solid angle equals one steradian:
θ = arccos(1 − h/r) = arccos(1 − 1/(2π)),
giving θ ≈ 0.572 rad ≈ 32.77° and aperture 2θ ≈ 1.144 rad ≈ 65.54°.
The solid angle of a spherical cone whose cross-section subtends the angle 2θ is:
Ω = 2π(1 − cos θ) sr.
A steradian is also equal to 1/(4π) of a complete sphere (spat), to (180/π)² ≈ 3282.80635 square degrees, and to the spherical area of a polygon having an angle excess of 1 radian.
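The relations above are simple enough to verify numerically; the following sketch (plain Python, using only the formulas quoted in this section) computes the aperture of a one-steradian cone and the size of a steradian in square degrees:

import math

def cone_solid_angle(theta):
    # Solid angle, in steradians, of a spherical cone whose cross-section subtends 2*theta.
    return 2 * math.pi * (1 - math.cos(theta))

# Half-aperture of a cone subtending exactly one steradian:
theta = math.acos(1 - 1 / (2 * math.pi))
print(math.degrees(theta))        # ~32.77 degrees, so the full aperture is ~65.54 degrees
print(cone_solid_angle(theta))    # ~1.0 sr (consistency check)

print(4 * math.pi)                # ~12.566 sr subtended by a complete sphere
print(math.degrees(1) ** 2)       # ~3282.8 square degrees in one steradian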
SI multiples
Millisteradians (msr) and microsteradians (μsr) are occasionally used to describe light and particle beams. Other multiples are rarely used.
| Physical sciences | Angle | null |
28305 | https://en.wikipedia.org/wiki/String%20theory | String theory | In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string acts like a particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force. Thus, string theory is a theory of quantum gravity.
String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. String theory has contributed a number of advances to mathematical physics, which have been applied to a variety of problems in black hole physics, early universe cosmology, nuclear physics, and condensed matter physics, and it has stimulated a number of major developments in pure mathematics. Because string theory potentially provides a unified description of gravity and particle physics, it is a candidate for a theory of everything, a self-contained mathematical model that describes all fundamental forces and forms of matter. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details.
String theory was first studied in the late 1960s as a theory of the strong nuclear force, before being abandoned in favor of quantum chromodynamics. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. Five consistent versions of superstring theory were developed before it was conjectured in the mid-1990s that they were all different limiting cases of a single theory in eleven dimensions known as M-theory. In late 1997, theorists discovered an important relationship called the anti-de Sitter/conformal field theory correspondence (AdS/CFT correspondence), which relates string theory to another type of physical theory called a quantum field theory.
One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. Another issue is that the theory is thought to describe an enormous landscape of possible universes, which has complicated efforts to develop theories of particle physics based on string theory. These issues have led some in the community to criticize these approaches to physics, and to question the value of continued research on string theory unification.
Fundamentals
Overview
In the 20th century, two theoretical frameworks emerged for formulating the laws of physics. The first is Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of spacetime at the macro-level. The other is quantum mechanics, a completely different formulation, which uses known probability principles to describe physical phenomena at the micro-level. By the late 1970s, these two frameworks had proven to be sufficient to explain most of the observed features of the universe, from elementary particles to atoms to the evolution of stars and the universe as a whole.
In spite of these successes, there are still many problems that remain to be solved. One of the deepest problems in modern physics is the problem of quantum gravity. The general theory of relativity is formulated within the framework of classical physics, whereas the other fundamental forces are described within the framework of quantum mechanics. A quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, but difficulties arise when one attempts to apply the usual prescriptions of quantum theory to the force of gravity.
String theory is a theoretical framework that attempts to address these questions.
The starting point for string theory is the idea that the point-like particles of particle physics can also be modeled as one-dimensional objects called strings. String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle consistent with non-string models of elementary particles, with its mass, charge, and other properties determined by the vibrational state of the string. String theory's application as a form of quantum gravity proposes a vibrational state responsible for the graviton, a yet unproven quantum particle that is theorized to carry gravitational force.
One of the main developments of the past several decades in string theory was the discovery of certain 'dualities', mathematical transformations that identify one physical theory with another. Physicists studying string theory have discovered a number of these dualities between different versions of string theory, and this has led to the conjecture that all consistent versions of string theory are subsumed in a single framework known as M-theory.
Studies of string theory have also yielded a number of results on the nature of black holes and the gravitational interaction. There are certain paradoxes that arise when one attempts to understand the quantum aspects of black holes, and work on string theory has attempted to clarify these issues. In late 1997 this line of work culminated in the discovery of the anti-de Sitter/conformal field theory correspondence or AdS/CFT. This is a theoretical result that relates string theory to other physical theories which are better understood theoretically. The AdS/CFT correspondence has implications for the study of black holes and quantum gravity, and it has been applied to other subjects, including nuclear and condensed matter physics.
Since string theory incorporates all of the fundamental interactions, including gravity, many physicists hope that it will eventually be developed to the point where it fully describes our universe, making it a theory of everything. One of the goals of current research in string theory is to find a solution of the theory that reproduces the observed spectrum of elementary particles, with a small cosmological constant, containing dark matter and a plausible mechanism for cosmic inflation. While there has been progress toward these goals, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of details.
One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. The scattering of strings is most straightforwardly defined using the techniques of perturbation theory, but it is not known in general how to define string theory nonperturbatively. It is also not clear whether there is any principle by which string theory selects its vacuum state, the physical state that determines the properties of our universe. These problems have led some in the community to criticize these approaches to the unification of physics and question the value of continued research on these problems.
Strings
The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields.
In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory. Developed by Richard Feynman and others in the first half of the twentieth century, perturbative quantum field theory uses special diagrams called Feynman diagrams to organize computations. One imagines that these diagrams depict the paths of point-like particles and their interactions.
The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings. The interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional (2D) surface representing the motion of a string. Unlike in quantum field theory, string theory does not have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach.
In theories of particle physics based on string theory, the characteristic length scale of strings is assumed to be on the order of the Planck length, or about 10⁻³⁵ meters, the scale at which the effects of quantum gravity are believed to become significant. On much larger length scales, such as the scales visible in physics laboratories, such objects would be indistinguishable from zero-dimensional point particles, and the vibrational state of the string would determine the type of particle. One of the vibrational states of a string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force.
The original version of string theory was bosonic string theory, but this version described only bosons, a class of particles that transmit forces between the matter particles, or fermions. Bosonic string theory was eventually superseded by theories called superstring theories. These theories describe both bosons and fermions, and they incorporate a theoretical idea called supersymmetry. In theories with supersymmetry, each boson has a counterpart which is a fermion, and vice versa.
There are several versions of superstring theory: type I, type IIA, type IIB, and two flavors of heterotic string theory (SO(32) and E8 × E8). The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings (which are segments with endpoints) and closed strings (which form closed loops), while types IIA, IIB and heterotic include only closed strings.
Extra dimensions
In everyday life, there are three familiar dimensions (3D) of space: height, width and length. Einstein's general theory of relativity treats time as a dimension on par with the three spatial dimensions; in general relativity, space and time are not modeled as separate entities but are instead unified to a four-dimensional (4D) spacetime. In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of spacetime.
In spite of the fact that the Universe is well described by 4D spacetime, there are several reasons why physicists consider theories in other dimensions. In some cases, by modeling spacetime in a different number of dimensions, a theory becomes more mathematically tractable, and one can perform calculations and gain general insights more easily. There are also situations where theories in two or three spacetime dimensions are useful for describing phenomena in condensed matter physics. Finally, there exist scenarios in which there could actually be more than 4D of spacetime which have nonetheless managed to escape detection.
String theories require extra dimensions of spacetime for their mathematical consistency. In bosonic string theory, spacetime is 26-dimensional, while in superstring theory it is 10-dimensional, and in M-theory it is 11-dimensional. In order to describe real physical phenomena using string theory, one must therefore imagine scenarios in which these extra dimensions would not be observed in experiments.
Compactification is one way of modifying the number of dimensions in a physical theory. In compactification, some of the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions.
Compactification can be used to construct models in which spacetime is effectively four-dimensional. However, not every way of compactifying the extra dimensions produces a model with the right properties to describe nature. In a viable model of particle physics, the compact extra dimensions must be shaped like a Calabi–Yau manifold. A Calabi–Yau manifold is a special space which is typically taken to be six-dimensional in applications to string theory. It is named after mathematicians Eugenio Calabi and Shing-Tung Yau.
Another approach to reducing the number of dimensions is the so-called brane-world scenario. In this approach, physicists assume that the observable universe is a four-dimensional subspace of a higher dimensional space. In such models, the force-carrying bosons of particle physics arise from open strings with endpoints attached to the four-dimensional subspace, while gravity arises from closed strings propagating through the larger ambient space. This idea plays an important role in attempts to develop models of real-world physics based on string theory, and it provides a natural explanation for the weakness of gravity compared to the other fundamental forces.
Dualities
A notable fact about string theory is that the different versions of the theory all turn out to be related in highly nontrivial ways. One of the relationships that can exist between different string theories is called S-duality. This is a relationship that says that a collection of strongly interacting particles in one theory can, in some cases, be viewed as a collection of weakly interacting particles in a completely different theory. Roughly speaking, a collection of particles is said to be strongly interacting if they combine and decay often and weakly interacting if they do so infrequently. Type I string theory turns out to be equivalent by S-duality to the heterotic string theory. Similarly, type IIB string theory is related to itself in a nontrivial way by S-duality.
Another relationship between different string theories is T-duality. Here one considers strings propagating around a circular extra dimension. T-duality states that a string propagating around a circle of radius R is equivalent to a string propagating around a circle of radius 1/R in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times. The number of times the string winds around a circle is called the winding number. If a string has momentum p and winding number n in one description, it will have momentum n and winding number p in the dual description. For example, type IIA string theory is equivalent to type IIB string theory via T-duality, and the two versions of heterotic string theory are also related by T-duality.
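A minimal numerical sketch of this statement (written here in string units with the string length set to one, and keeping only the momentum and winding contributions to the closed-string mass formula, which is an assumption of this illustration rather than part of the text above) shows that the spectrum is unchanged when the radius R is replaced by 1/R and the two quantum numbers are exchanged:

def mass_squared(n, w, R):
    # Momentum plus winding contribution to the closed-string mass spectrum,
    # (n / R)**2 + (w * R)**2, in string units; oscillator terms are omitted.
    return (n / R) ** 2 + (w * R) ** 2

R = 3.7
for n, w in [(2, 5), (1, 0), (0, 4)]:
    original = mass_squared(n, w, R)
    dual = mass_squared(w, n, 1 / R)   # swap momentum and winding, invert the radius
    print(original, dual)              # the two values agree in every case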
In general, the term duality refers to a situation where two seemingly different physical systems turn out to be equivalent in a nontrivial way. Two theories related by a duality need not be string theories. For example, Montonen–Olive duality is an example of an S-duality relationship between quantum field theories. The AdS/CFT correspondence is an example of a duality that relates string theory to a quantum field theory. If two theories are related by a duality, it means that one theory can be transformed in some way so that it ends up looking just like the other theory. The two theories are then said to be dual to one another under the transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena.
Branes
In string theory and other related theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For instance, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. In dimension p, these are called p-branes. The word brane comes from the word "membrane" which refers to a two-dimensional brane.
Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They have mass and can have other attributes such as charge. A p-brane sweeps out a (p+1)-dimensional volume in spacetime called its worldvolume. Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane.
In string theory, D-branes are an important class of branes that arise when one considers open strings. As an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The letter "D" in D-brane refers to a certain mathematical condition on the system known as the Dirichlet boundary condition. The study of D-branes in string theory has led to important results such as the AdS/CFT correspondence, which has shed light on many problems in quantum field theory.
Branes are frequently studied from a purely mathematical point of view, and they are described as objects of certain categories, such as the derived category of coherent sheaves on a complex algebraic variety, or the Fukaya category of a symplectic manifold. The connection between the physical notion of a brane and the mathematical notion of a category has led to important mathematical insights in the fields of algebraic and symplectic geometry and representation theory.
M-theory
Prior to 1995, theorists believed that there were five consistent versions of superstring theory (type I, type IIA, type IIB, and two versions of heterotic string theory). This understanding changed in 1995 when Edward Witten suggested that the five theories were just special limiting cases of an eleven-dimensional theory called M-theory. Witten's conjecture was based on the work of a number of other physicists, including Ashoke Sen, Chris Hull, Paul Townsend, and Michael Duff. His announcement led to a flurry of research activity now known as the second superstring revolution.
Unification of superstring theories
In the 1970s, many physicists became interested in supergravity theories, which combine general relativity with supersymmetry. Whereas general relativity makes sense in any number of dimensions, supergravity places an upper limit on the number of dimensions. In 1978, work by Werner Nahm showed that the maximum spacetime dimension in which one can formulate a consistent supersymmetric theory is eleven. In the same year, Eugene Cremmer, Bernard Julia, and Joël Scherk of the École Normale Supérieure showed that supergravity not only permits up to eleven dimensions but is in fact most elegant in this maximal number of dimensions.
Initially, many physicists hoped that by compactifying eleven-dimensional supergravity, it might be possible to construct realistic models of our four-dimensional world. The hope was that such models would provide a unified description of the four fundamental forces of nature: electromagnetism, the strong and weak nuclear forces, and gravity. Interest in eleven-dimensional supergravity soon waned as various flaws in this scheme were discovered. One of the problems was that the laws of physics appear to distinguish between clockwise and counterclockwise, a phenomenon known as chirality. Edward Witten and others observed this chirality property cannot be readily derived by compactifying from eleven dimensions.
In the first superstring revolution in 1984, many physicists turned to string theory as a unified theory of particle physics and quantum gravity. Unlike supergravity theory, string theory was able to accommodate the chirality of the standard model, and it provided a theory of gravity consistent with quantum effects. Another feature of string theory that many physicists were drawn to in the 1980s and 1990s was its high degree of uniqueness. In ordinary particle theories, one can consider any collection of elementary particles whose classical behavior is described by an arbitrary Lagrangian. In string theory, the possibilities are much more constrained: by the 1990s, physicists had argued that there were only five consistent supersymmetric versions of the theory.
Although there were only a handful of consistent superstring theories, it remained a mystery why there was not just one consistent formulation. However, as physicists began to examine string theory more closely, they realized that these theories are related in intricate and nontrivial ways. They found that a system of strongly interacting strings can, in some cases, be viewed as a system of weakly interacting strings. This phenomenon is known as S-duality. It was studied by Ashoke Sen in the context of heterotic strings in four dimensions and by Chris Hull and Paul Townsend in the context of the type IIB theory. Theorists also found that different string theories may be related by T-duality. This duality implies that strings propagating on completely different spacetime geometries may be physically equivalent.
At around the same time, as many physicists were studying the properties of strings, a small group of physicists were examining the possible applications of higher dimensional objects. In 1987, Eric Bergshoeff, Ergin Sezgin, and Paul Townsend showed that eleven-dimensional supergravity includes two-dimensional branes. Intuitively, these objects look like sheets or membranes propagating through the eleven-dimensional spacetime. Shortly after this discovery, Michael Duff, Paul Howe, Takeo Inami, and Kellogg Stelle considered a particular compactification of eleven-dimensional supergravity with one of the dimensions curled up into a circle. In this setting, one can imagine the membrane wrapping around the circular dimension. If the radius of the circle is sufficiently small, then this membrane looks just like a string in ten-dimensional spacetime. Duff and his collaborators showed that this construction reproduces exactly the strings appearing in type IIA superstring theory.
Speaking at a string theory conference in 1995, Edward Witten made the surprising suggestion that all five superstring theories were in fact just different limiting cases of a single theory in eleven spacetime dimensions. Witten's announcement drew together all of the previous results on S- and T-duality and the appearance of higher-dimensional branes in string theory. In the months following Witten's announcement, hundreds of new papers appeared on the Internet confirming different parts of his proposal. Today this flurry of work is known as the second superstring revolution.
Initially, some physicists suggested that the new theory was a fundamental theory of membranes, but Witten was skeptical of the role of membranes in the theory. In a paper from 1996, Hořava and Witten wrote "As it has been proposed that the eleven-dimensional theory is a supermembrane theory but there are some reasons to doubt that interpretation, we will non-committally call it the M-theory, leaving to the future the relation of M to membranes." In the absence of an understanding of the true meaning and structure of M-theory, Witten has suggested that the M should stand for "magic", "mystery", or "membrane" according to taste, and the true meaning of the title should be decided when a more fundamental formulation of the theory is known.
Matrix theory
In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics.
One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting.
The development of the matrix model formulation of M-theory has led physicists to consider various connections between string theory and a branch of mathematics called noncommutative geometry. This subject is a generalization of ordinary geometry in which mathematicians define new geometric notions using tools from noncommutative algebra. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which spacetime is described mathematically using noncommutative geometry. This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories.
Black holes
In general relativity, a black hole is defined as a region of spacetime in which the gravitational field is so strong that no particle or radiation can escape. In the currently accepted models of stellar evolution, black holes are thought to arise when massive stars undergo gravitational collapse, and many galaxies are thought to contain supermassive black holes at their centers. Black holes are also important for theoretical reasons, as they present profound challenges for theorists attempting to understand the quantum aspects of gravity. String theory has proved to be an important tool for investigating the theoretical properties of black holes because it provides a framework in which theorists can study their thermodynamics.
Bekenstein–Hawking formula
In the branch of physics called statistical mechanics, entropy is a measure of the randomness or disorder of a physical system. This concept was studied in the 1870s by the Austrian physicist Ludwig Boltzmann, who showed that the thermodynamic properties of a gas could be derived from the combined properties of its many constituent molecules. Boltzmann argued that by averaging the behaviors of all the different molecules in a gas, one can understand macroscopic properties such as volume, temperature, and pressure. In addition, this perspective led him to give a precise definition of entropy as the natural logarithm of the number of different states of the molecules (also called microstates) that give rise to the same macroscopic features.
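In symbols, Boltzmann's definition is usually written S = k ln W, where W is the number of microstates consistent with the observed macroscopic state and k is the constant now named after him.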
In the twentieth century, physicists began to apply the same concepts to black holes. In most systems such as gases, the entropy scales with the volume. In the 1970s, the physicist Jacob Bekenstein suggested that the entropy of a black hole is instead proportional to the surface area of its event horizon, the boundary beyond which matter and radiation are lost to its gravitational attraction. When combined with ideas of the physicist Stephen Hawking, Bekenstein's work yielded a precise formula for the entropy of a black hole. The Bekenstein–Hawking formula expresses the entropy S as
S = k c³ A / (4 ħ G)
where c is the speed of light, k is the Boltzmann constant, ħ is the reduced Planck constant, G is Newton's constant, and A is the surface area of the event horizon.
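As a rough numerical illustration (a sketch using approximate SI values of the constants and taking the horizon area from the Schwarzschild radius of a non-rotating, solar-mass black hole, assumptions that are not part of the formula itself):

import math

c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
k = 1.381e-23      # Boltzmann constant, J/K
M = 1.989e30       # one solar mass, kg

r_s = 2 * G * M / c**2               # Schwarzschild radius, about 3 km
A = 4 * math.pi * r_s**2             # horizon area
S = k * c**3 * A / (4 * hbar * G)    # Bekenstein-Hawking entropy
print(S / k)                         # roughly 1e77, the entropy in units of the Boltzmann constant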
Like any physical system, a black hole has an entropy defined in terms of the number of different microstates that lead to the same macroscopic features. The Bekenstein–Hawking entropy formula gives the expected value of the entropy of a black hole, but by the 1990s, physicists still lacked a derivation of this formula by counting microstates in a theory of quantum gravity. Finding such a derivation of this formula was considered an important test of the viability of any theory of quantum gravity such as string theory.
Derivation within string theory
In a paper from 1996, Andrew Strominger and Cumrun Vafa showed how to derive the Bekenstein–Hawking formula for certain black holes in string theory. Their calculation was based on the observation that D-branes—which look like fluctuating membranes when they are weakly interacting—become dense, massive objects with event horizons when the interactions are strong. In other words, a system of strongly interacting D-branes in string theory is indistinguishable from a black hole. Strominger and Vafa analyzed such D-brane systems and calculated the number of different ways of placing D-branes in spacetime so that their combined mass and charge is equal to a given mass and charge for the resulting black hole. Their calculation reproduced the Bekenstein–Hawking formula exactly, including the factor of 1/4. Subsequent work by Strominger, Vafa, and others refined the original calculations and gave the precise values of the "quantum corrections" needed to describe very small black holes.
The black holes that Strominger and Vafa considered in their original work were quite different from real astrophysical black holes. One difference was that Strominger and Vafa considered only extremal black holes in order to make the calculation tractable. These are defined as black holes with the lowest possible mass compatible with a given charge. Strominger and Vafa also restricted attention to black holes in five-dimensional spacetime with unphysical supersymmetry.
Although it was originally developed in this very particular and physically unrealistic context in string theory, the entropy calculation of Strominger and Vafa has led to a qualitative understanding of how black hole entropy can be accounted for in any theory of quantum gravity. Indeed, in 1998, Strominger argued that the original result could be generalized to an arbitrary consistent theory of quantum gravity without relying on strings or supersymmetry. In collaboration with several other authors in 2010, he showed that some results on black hole entropy could be extended to non-extremal astrophysical black holes.
AdS/CFT correspondence
One approach to formulating string theory and studying its properties is provided by the anti-de Sitter/conformal field theory (AdS/CFT) correspondence. This is a theoretical result that implies that string theory is in some cases equivalent to a quantum field theory. In addition to providing insights into the mathematical structure of string theory, the AdS/CFT correspondence has shed light on many aspects of quantum field theory in regimes where traditional calculational techniques are ineffective. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2010, Maldacena's article had over 7000 citations, becoming the most highly cited article in the field of high energy physics.
Overview of the correspondence
In the AdS/CFT correspondence, the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk tessellated by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior.
One can imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction in this picture. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface.
This construction describes a hypothetical universe with only two space dimensions and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space.
An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, within a small region on the surface around any given point, it looks just like Minkowski space, the model of spacetime used in non-gravitational physics. One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a quantum field theory. The claim is that this quantum field theory is equivalent to a gravitational theory, such as string theory, in the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating entities and calculations in one theory into their counterparts in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding.
Applications to quantum gravity
The discovery of the AdS/CFT correspondence was a major advance in physicists' understanding of string theory and quantum gravity. One reason for this is that the correspondence provides a formulation of string theory in terms of quantum field theory, which is well understood by comparison. Another reason is that it provides a general framework in which physicists can study and attempt to resolve the paradoxes of black holes.
In 1975, Stephen Hawking published a calculation which suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox.
The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space. These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics. In 2005, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information.
Applications to nuclear physics
In addition to its applications to theoretical problems in quantum gravity, the AdS/CFT correspondence has been applied to a variety of problems in quantum field theory. One physical system that has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies. Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvin, conditions similar to those present a tiny fraction of a second after the Big Bang.
The physics of the quark–gluon plasma is governed by a theory called quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma. In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark-gluon plasma by describing it in the language of string theory. By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark-gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark-gluon plasma, the shear viscosity and volume density of entropy, should be approximately equal to a certain universal constant. In 2008, the predicted value of this ratio for the quark-gluon plasma was confirmed at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.
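The universal constant in question is usually quoted as the ratio η/s = ħ/(4π k) ≈ 6.1 × 10⁻¹³ kelvin-seconds, where η is the shear viscosity, s is the entropy density, and k is the Boltzmann constant; this value, often associated with Kovtun, Son, and Starinets, follows from the dual black-hole description.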
Applications to condensed matter physics
The AdS/CFT correspondence has also been used to study aspects of condensed matter physics. Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists including Subir Sachdev hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior.
So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on the Planck constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole.
Phenomenology
In addition to being an idea of considerable theoretical interest, string theory provides a framework for constructing models of real-world physics that combine general relativity and particle physics. Phenomenology is the branch of theoretical physics in which physicists construct realistic models of nature from more abstract theoretical ideas. String phenomenology is the part of string theory that attempts to construct realistic or semi-realistic models based on string theory.
Partly because of theoretical and mathematical difficulties and partly because of the extremely high energies needed to test these theories experimentally, there is so far no experimental evidence that would unambiguously point to any of these models being a correct fundamental description of nature. This has led some in the community to criticize these approaches to unification and question the value of continued research on these problems.
Particle physics
The currently accepted theory describing elementary particles and their interactions is known as the standard model of particle physics. This theory provides a unified description of three of the fundamental forces of nature: electromagnetism and the strong and weak nuclear forces. Despite its remarkable success in explaining a wide range of physical phenomena, the standard model cannot be a complete description of reality. This is because the standard model fails to incorporate the force of gravity and because of problems such as the hierarchy problem and the inability to explain the structure of fermion masses or dark matter.
String theory has been used to construct a variety of models of particle physics going beyond the standard model. Typically, such models are based on the idea of compactification. Starting with the ten- or eleven-dimensional spacetime of string or M-theory, physicists postulate a shape for the extra dimensions. By choosing this shape appropriately, they can construct models roughly similar to the standard model of particle physics, together with additional undiscovered particles. One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. Such compactifications offer many ways of extracting realistic physics from string theory. Other similar methods can be used to construct realistic or semi-realistic models of our four-dimensional world based on M-theory.
Cosmology
The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. Despite its success in explaining many observed features of the universe including galactic redshifts, the relative abundance of light elements such as hydrogen and helium, and the existence of a cosmic microwave background, there are several questions that remain unanswered. For example, the standard Big Bang model does not explain why the universe appears to be the same in all directions, why it appears flat on very large distance scales, or why certain hypothesized particles such as magnetic monopoles are not observed in experiments.
Currently, the leading candidate for a theory going beyond the Big Bang is the theory of cosmic inflation. Developed by Alan Guth and others in the 1980s, inflation postulates a period of extremely rapid accelerated expansion of the universe prior to the expansion described by the standard Big Bang theory. The theory of cosmic inflation preserves the successes of the Big Bang while providing a natural explanation for some of the mysterious features of the universe. The theory has also received striking support from observations of the cosmic microwave background, the radiation that has filled the sky since around 380,000 years after the Big Bang.
In the theory of inflation, the rapid initial expansion of the universe is caused by a hypothetical particle called the inflaton. The exact properties of this particle are not fixed by the theory but should ultimately be derived from a more fundamental theory such as string theory. Indeed, there have been a number of attempts to identify an inflaton within the spectrum of particles described by string theory and to study inflation using string theory. While these approaches might eventually find support in observational data such as measurements of the cosmic microwave background, the application of string theory to cosmology is still in its early stages.
Connections to mathematics
In addition to influencing research in theoretical physics, string theory has stimulated a number of major developments in pure mathematics. Like many developing ideas in theoretical physics, string theory does not at present have a mathematically rigorous formulation in which all of its concepts can be defined precisely. As a result, physicists who study string theory are often guided by physical intuition to conjecture relationships between the seemingly different mathematical structures that are used to formalize different parts of the theory. These conjectures are later proved by mathematicians, and in this way, string theory serves as a source of new ideas in pure mathematics.
Mirror symmetry
After Calabi–Yau manifolds had entered physics as a way to compactify extra dimensions in string theory, many physicists began studying these manifolds. In the late 1980s, several physicists noticed that given such a compactification of string theory, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, two different versions of string theory, type IIA and type IIB, can be compactified on completely different Calabi–Yau manifolds giving rise to the same physics. In this situation, the manifolds are called mirror manifolds, and the relationship between the two physical theories is called mirror symmetry.
Regardless of whether Calabi–Yau compactifications of string theory provide a correct description of nature, the existence of the mirror duality between different string theories has significant mathematical consequences. The Calabi–Yau manifolds used in string theory are of interest in pure mathematics, and mirror symmetry allows mathematicians to solve problems in enumerative geometry, a branch of mathematics concerned with counting the numbers of solutions to geometric questions.
Enumerative geometry studies a class of geometric objects called algebraic varieties which are defined by the vanishing of polynomials. For example, the Clebsch cubic is an algebraic variety defined using a certain polynomial of degree three in four variables. A celebrated result of nineteenth-century mathematicians Arthur Cayley and George Salmon states that there are exactly 27 straight lines that lie entirely on such a surface.
Generalizing this problem, one can ask how many lines can be drawn on a quintic Calabi–Yau manifold, which is defined by a polynomial of degree five. This problem was solved by the nineteenth-century German mathematician Hermann Schubert, who found that there are exactly 2,875 such lines. In 1986, geometer Sheldon Katz proved that the number of curves, such as circles, that are defined by polynomials of degree two and lie entirely in the quintic is 609,250.
By the year 1991, most of the classical problems of enumerative geometry had been solved and interest in enumerative geometry had begun to diminish. The field was reinvigorated in May 1991 when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that mirror symmetry could be used to translate difficult mathematical questions about one Calabi–Yau manifold into easier questions about its mirror. In particular, they used mirror symmetry to show that a six-dimensional Calabi–Yau manifold can contain exactly 317,206,375 curves of degree three. In addition to counting degree-three curves, Candelas and his collaborators obtained a number of more general results for counting rational curves which went far beyond the results obtained by mathematicians.
Originally, these results of Candelas were justified on physical grounds. However, mathematicians generally prefer rigorous proofs that do not require an appeal to physical intuition. Inspired by physicists' work on mirror symmetry, mathematicians have therefore constructed their own arguments proving the enumerative predictions of mirror symmetry. Today mirror symmetry is an active area of research in mathematics, and mathematicians are working to develop a more complete mathematical understanding of mirror symmetry based on physicists' intuition. Major approaches to mirror symmetry include the homological mirror symmetry program of Maxim Kontsevich and the SYZ conjecture of Andrew Strominger, Shing-Tung Yau, and Eric Zaslow.
Monstrous moonshine
Group theory is the branch of mathematics that studies the concept of symmetry. For example, one can consider a geometric shape such as an equilateral triangle. There are various operations that one can perform on this triangle without changing its shape. One can rotate it through 120°, 240°, or 360°, or one can reflect it in any of its three axes of symmetry. Each of these operations is called a symmetry, and the collection of these symmetries satisfies certain technical properties making it into what mathematicians call a group. In this particular example, the group is known as the dihedral group of order 6 because it has six elements. A general group may describe finitely many or infinitely many symmetries; if there are only finitely many symmetries, it is called a finite group.
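A short sketch (illustrative Python, with the triangle's vertices labeled 0, 1, and 2, a labeling chosen for this example) represents these six symmetries as permutations and checks the closure property that makes them a group:

from itertools import product

# The six symmetries of an equilateral triangle, as permutations of its vertices:
# the identity, two rotations, and three reflections.
symmetries = [
    (0, 1, 2),  # identity (rotation by 360 degrees)
    (1, 2, 0),  # rotation by 120 degrees
    (2, 0, 1),  # rotation by 240 degrees
    (0, 2, 1),  # reflection fixing vertex 0
    (2, 1, 0),  # reflection fixing vertex 1
    (1, 0, 2),  # reflection fixing vertex 2
]

def compose(p, q):
    # Apply q first and then p, as permutations of (0, 1, 2).
    return tuple(p[q[i]] for i in range(3))

# Closure: composing any two symmetries gives another symmetry in the set.
assert all(compose(p, q) in symmetries for p, q in product(symmetries, repeat=2))
print(len(symmetries))  # 6, the order of the dihedral group described above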
Mathematicians often strive for a classification (or list) of all mathematical objects of a given type. It is generally believed that finite groups are too diverse to admit a useful classification. A more modest but still challenging problem is to classify all finite simple groups. These are finite groups that may be used as building blocks for constructing arbitrary finite groups in the same way that prime numbers can be used to construct arbitrary whole numbers by taking products. One of the major achievements of contemporary group theory is the classification of finite simple groups, a mathematical theorem that provides a list of all possible finite simple groups.
This classification theorem identifies several infinite families of groups as well as 26 additional groups which do not fit into any family. The latter groups are called the "sporadic" groups, and each one owes its existence to a remarkable combination of circumstances. The largest sporadic group, the so-called monster group, has over 10⁵³ elements, more than a thousand times the number of atoms in the Earth.
A seemingly unrelated construction is the j-function of number theory. This object belongs to a special class of functions called modular functions, whose graphs form a certain kind of repeating pattern. Although this function appears in a branch of mathematics that seems very different from the theory of finite groups, the two subjects turn out to be intimately related. In the late 1970s, mathematicians John McKay and John Thompson noticed that certain numbers arising in the analysis of the monster group (namely, the dimensions of its irreducible representations) are related to numbers that appear in a formula for the j-function (namely, the coefficients of its Fourier series). This relationship was further developed by John Horton Conway and Simon Norton who called it monstrous moonshine because it seemed so far-fetched.
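The first coefficients of the j-function illustrate the relationship concretely: its Fourier expansion begins j(τ) = q⁻¹ + 744 + 196884q + 21493760q² + ..., with q = e^(2πiτ), and the coefficient 196884 decomposes as 1 + 196883, the sum of the dimensions of the two smallest irreducible representations of the monster group.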
In 1992, Richard Borcherds constructed a bridge between the theory of modular functions and finite groups and, in the process, explained the observations of McKay and Thompson. Borcherds' work used ideas from string theory in an essential way, extending earlier results of Igor Frenkel, James Lepowsky, and Arne Meurman, who had realized the monster group as the symmetries of a particular version of string theory. In 1998, Borcherds was awarded the Fields medal for his work.
Since the 1990s, the connection between string theory and moonshine has led to further results in mathematics and physics. In 2010, physicists Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa discovered connections between a different sporadic group, the Mathieu group M24, and a certain version of string theory. Miranda Cheng, John Duncan, and Jeffrey A. Harvey proposed a generalization of this moonshine phenomenon called umbral moonshine, and their conjecture was proved mathematically by Duncan, Michael Griffin, and Ken Ono. Witten has also speculated that the version of string theory appearing in monstrous moonshine might be related to a certain simplified model of gravity in three spacetime dimensions.
History
Early results
Some of the structures reintroduced by string theory arose for the first time much earlier as part of the program of classical unification started by Albert Einstein. The first person to add a fifth dimension to a theory of gravity was Gunnar Nordström in 1914, who noted that gravity in five dimensions describes both gravity and electromagnetism in four. Nordström attempted to unify electromagnetism with his theory of gravitation, which was however superseded by Einstein's general relativity in 1919. Thereafter, German mathematician Theodor Kaluza combined the fifth dimension with general relativity, and only Kaluza is usually credited with the idea. In 1926, the Swedish physicist Oskar Klein gave a physical interpretation of the unobservable extra dimension—it is wrapped into a small circle. Einstein introduced a non-symmetric metric tensor, while much later Brans and Dicke added a scalar component to gravity. These ideas would be revived within string theory, where they are demanded by consistency conditions.
String theory was originally developed during the late 1960s and early 1970s as a never completely successful theory of hadrons, the subatomic particles like the proton and neutron that feel the strong interaction. In the 1960s, Geoffrey Chew and Steven Frautschi discovered that the mesons make families called Regge trajectories with masses related to spins in a way that was later understood by Yoichiro Nambu, Holger Bech Nielsen and Leonard Susskind to be the relationship expected from rotating strings. Chew advocated making a theory for the interactions of these trajectories that did not presume that they were composed of any fundamental particles, but would construct their interactions from self-consistency conditions on the S-matrix. The S-matrix approach was started by Werner Heisenberg in the 1940s as a way of constructing a theory that did not rely on the local notions of space and time, which Heisenberg believed break down at the nuclear scale. While the scale was off by many orders of magnitude, the approach he advocated was ideally suited for a theory of quantum gravity.
Working with experimental data, R. Dolen, D. Horn and C. Schmid developed some sum rules for hadron exchange. When a particle and antiparticle scatter, virtual particles can be exchanged in two qualitatively different ways. In the s-channel, the two particles annihilate to make temporary intermediate states that fall apart into the final state particles. In the t-channel, the particles exchange intermediate states by emission and absorption. In field theory, the two contributions add together, one giving a continuous background contribution, the other giving peaks at certain energies. In the data, it was clear that the peaks were stealing from the background—the authors interpreted this as saying that the t-channel contribution was dual to the s-channel one, meaning both described the whole amplitude and included the other.
The result was widely advertised by Murray Gell-Mann, leading Gabriele Veneziano to construct a scattering amplitude that had the property of Dolen–Horn–Schmid duality, later renamed world-sheet duality. The amplitude needed poles where the particles appear, on straight-line trajectories, and there is a special mathematical function whose poles are evenly spaced on half the real line—the gamma function—which was widely used in Regge theory. By manipulating combinations of gamma functions, Veneziano was able to find a consistent scattering amplitude with poles on straight lines, with mostly positive residues, which obeyed duality and had the appropriate Regge scaling at high energy. The amplitude could fit near-beam scattering data as well as other Regge type fits and had a suggestive integral representation that could be used for generalization.
Over the next years, hundreds of physicists worked to complete the bootstrap program for this model, with many surprises. Veneziano himself discovered that for the scattering amplitude to describe the scattering of a particle that appears in the theory, an obvious self-consistency condition, the lightest particle must be a tachyon. Miguel Virasoro and Joel Shapiro found a different amplitude now understood to be that of closed strings, while Ziro Koba and Holger Nielsen generalized Veneziano's integral representation to multiparticle scattering. Veneziano and Sergio Fubini introduced an operator formalism for computing the scattering amplitudes that was a forerunner of world-sheet conformal theory, while Virasoro understood how to remove the poles with wrong-sign residues using a constraint on the states. Claud Lovelace calculated a loop amplitude, and noted that there is an inconsistency unless the dimension of the theory is 26. Charles Thorn, Peter Goddard and Richard Brower went on to prove that there are no wrong-sign propagating states in dimensions less than or equal to 26.
In 1969–1970, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind recognized that the theory could be given a description in space and time in terms of strings. The scattering amplitudes were derived systematically from the action principle by Peter Goddard, Jeffrey Goldstone, Claudio Rebbi, and Charles Thorn, giving a space-time picture to the vertex operators introduced by Veneziano and Fubini and a geometrical interpretation to the Virasoro conditions.
In 1971, Pierre Ramond added fermions to the model, which led him to formulate a two-dimensional supersymmetry to cancel the wrong-sign states. John Schwarz and André Neveu added another sector to the fermi theory a short time later. In the fermion theories, the critical dimension was 10. Stanley Mandelstam formulated a world sheet conformal theory for both the bose and fermi case, giving a two-dimensional field theoretic path-integral to generate the operator formalism. Michio Kaku and Keiji Kikkawa gave a different formulation of the bosonic string, as a string field theory, with infinitely many particle types and with fields taking values not on points, but on loops and curves.
In 1974, Tamiaki Yoneya discovered that all the known string theories included a massless spin-two particle that obeyed the correct Ward identities to be a graviton. John Schwarz and Joël Scherk came to the same conclusion and made the bold leap to suggest that string theory was a theory of gravity, not a theory of hadrons. They reintroduced Kaluza–Klein theory as a way of making sense of the extra dimensions. At the same time, quantum chromodynamics was recognized as the correct theory of hadrons, shifting the attention of physicists and apparently leaving the bootstrap program in the dustbin of history.
String theory eventually made it out of the dustbin, but for the following decade, all work on the theory was completely ignored. Still, the theory continued to develop at a steady pace thanks to the work of a handful of devotees. Ferdinando Gliozzi, Joël Scherk, and David Olive realized in 1977 that the original Ramond and Neveu–Schwarz strings were separately inconsistent and needed to be combined. The resulting theory did not have a tachyon and was proven to have space-time supersymmetry by John Schwarz and Michael Green in 1984. The same year, Alexander Polyakov gave the theory a modern path integral formulation, and went on to develop conformal field theory extensively. In 1979, Daniel Friedan showed that the equations of motion of string theory, which are generalizations of the Einstein equations of general relativity, emerge from the renormalization group equations for the two-dimensional field theory. Schwarz and Green discovered T-duality, and constructed two superstring theories—IIA and IIB related by T-duality, and type I theories with open strings. The consistency conditions had been so strong that the entire theory was nearly uniquely determined, with only a few discrete choices.
First superstring revolution
In the early 1980s, Edward Witten discovered that most theories of quantum gravity could not accommodate chiral fermions like the neutrino. This led him, in collaboration with Luis Álvarez-Gaumé, to study violations of the conservation laws in gravity theories with anomalies, concluding that type I string theories were inconsistent. Green and Schwarz discovered a contribution to the anomaly that Witten and Alvarez-Gaumé had missed, which restricted the gauge group of the type I string theory to be SO(32). In coming to understand this calculation, Edward Witten became convinced that string theory was truly a consistent theory of gravity, and he became a high-profile advocate. Following Witten's lead, between 1984 and 1986, hundreds of physicists started to work in this field, and this is sometimes called the first superstring revolution.
During this period, David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm discovered heterotic strings. The gauge group of these closed strings was two copies of E8, and either copy could easily and naturally include the standard model. Philip Candelas, Gary Horowitz, Andrew Strominger and Edward Witten found that the Calabi–Yau manifolds are the compactifications that preserve a realistic amount of supersymmetry, while Lance Dixon and others worked out the physical properties of orbifolds, distinctive geometrical singularities allowed in string theory. Cumrun Vafa generalized T-duality from circles to arbitrary manifolds, creating the mathematical field of mirror symmetry. Daniel Friedan, Emil Martinec and Stephen Shenker further developed the covariant quantization of the superstring using conformal field theory techniques. David Gross and Vipul Periwal discovered that string perturbation theory was divergent. Stephen Shenker showed it diverged much faster than in field theory suggesting that new non-perturbative objects were missing.
In the 1990s, Joseph Polchinski discovered that the theory requires higher-dimensional objects, called D-branes and identified these with the black-hole solutions of supergravity. These were understood to be the new objects suggested by the perturbative divergences, and they opened up a new field with rich mathematical structure. It quickly became clear that D-branes and other p-branes, not just strings, formed the matter content of the string theories, and the physical interpretation of the strings and branes was revealed—they are a type of black hole. Leonard Susskind had incorporated the holographic principle of Gerardus 't Hooft into string theory, identifying the long highly excited string states with ordinary thermal black hole states. As suggested by 't Hooft, the fluctuations of the black hole horizon, the world-sheet or world-volume theory, describes not only the degrees of freedom of the black hole, but all nearby objects too.
Second superstring revolution
In 1995, at the annual conference of string theorists at the University of Southern California (USC), Edward Witten gave a speech on string theory that in essence united the five string theories that existed at the time and gave birth to a new 11-dimensional theory called M-theory. M-theory was also foreshadowed in the work of Paul Townsend at approximately the same time. The flurry of activity that began at this time is sometimes called the second superstring revolution.
During this period, Tom Banks, Willy Fischler, Stephen Shenker and Leonard Susskind formulated matrix theory, a full holographic description of M-theory using IIA D0 branes. This was the first definition of string theory that was fully non-perturbative and a concrete mathematical realization of the holographic principle. It is an example of a gauge-gravity duality and is now understood to be a special case of the AdS/CFT correspondence. Andrew Strominger and Cumrun Vafa calculated the entropy of certain configurations of D-branes and found agreement with the semi-classical answer for extreme charged black holes. Petr Hořava and Witten found the eleven-dimensional formulation of the heterotic string theories, showing that orbifolds solve the chirality problem. Witten noted that the effective description of the physics of D-branes at low energies is by a supersymmetric gauge theory, and found geometrical interpretations of mathematical structures in gauge theory that he and Nathan Seiberg had earlier discovered in terms of the location of the branes.
In 1997, Juan Maldacena noted that the low energy excitations of a theory near a black hole consist of objects close to the horizon, which for extreme charged black holes looks like an anti-de Sitter space. He noted that in this limit the gauge theory describes the string excitations near the branes. So he hypothesized that string theory on a near-horizon extreme-charged black-hole geometry, an anti-de Sitter space times a sphere with flux, is equally well described by the low-energy limiting gauge theory, the N = 4 supersymmetric Yang–Mills theory. This hypothesis, which is called the AdS/CFT correspondence, was further developed by Steven Gubser, Igor Klebanov and Alexander Polyakov, and by Edward Witten, and it is now well-accepted. It is a concrete realization of the holographic principle, which has far-reaching implications for black holes, locality and information in physics, as well as the nature of the gravitational interaction. Through this relationship, string theory has been shown to be related to gauge theories like quantum chromodynamics and this has led to a more quantitative understanding of the behavior of hadrons, bringing string theory back to its roots.
Criticism
Number of solutions
To construct models of particle physics based on string theory, physicists typically begin by specifying a shape for the extra dimensions of spacetime. Each of these different shapes corresponds to a different possible universe, or "vacuum state", with a different collection of particles and forces. String theory as it is currently understood has an enormous number of vacuum states, typically estimated to be around 10^500, and these might be sufficiently diverse to accommodate almost any phenomenon that might be observed at low energies.
Many critics of string theory have expressed concerns about the large number of possible universes described by string theory. In his book Not Even Wrong, Peter Woit, a lecturer in the mathematics department at Columbia University, has argued that the large number of different physical scenarios renders string theory vacuous as a framework for constructing models of particle physics.
Some physicists believe this large number of solutions is actually a virtue because it may allow a natural anthropic explanation of the observed values of physical constants, in particular the small value of the cosmological constant. The anthropic principle is the idea that some of the numbers appearing in the laws of physics are not fixed by any fundamental principle but must be compatible with the evolution of intelligent life. In 1987, Steven Weinberg published an article in which he argued that the cosmological constant could not have been too large, or else galaxies and intelligent life would not have been able to develop. Weinberg suggested that there might be a huge number of possible consistent universes, each with a different value of the cosmological constant, and observations indicate a small value of the cosmological constant only because humans happen to live in a universe that has allowed intelligent life, and hence observers, to exist.
String theorist Leonard Susskind has argued that string theory provides a natural anthropic explanation of the small value of the cosmological constant. According to Susskind, the different vacuum states of string theory might be realized as different universes within a larger multiverse. The fact that the observed universe has a small cosmological constant is just a tautological consequence of the fact that a small value is required for life to exist. Many prominent theorists and critics have disagreed with Susskind's conclusions. According to Woit, "in this case [anthropic reasoning] is nothing more than an excuse for failure. Speculative scientific ideas fail not just when they make incorrect predictions, but also when they turn out to be vacuous and incapable of predicting anything."
Compatibility with dark energy
It remains unknown whether string theory is compatible with a metastable, positive cosmological constant.
Some putative examples of such solutions do exist, such as the model described by Kachru et al. in 2003. In 2018, a group of four physicists advanced a controversial conjecture which would imply that no such universe exists. This is contrary to some popular models of dark energy such as Λ-CDM, which requires a positive vacuum energy. However, string theory is likely compatible with certain types of quintessence, where dark energy is caused by a new field with exotic properties.
Background independence
One of the fundamental properties of Einstein's general theory of relativity is that it is background independent, meaning that the formulation of the theory does not in any way privilege a particular spacetime geometry.
One of the main criticisms of string theory from early on is that it is not manifestly background-independent. In string theory, one must typically specify a fixed reference geometry for spacetime, and all other possible geometries are described as perturbations of this fixed one. In his book The Trouble With Physics, physicist Lee Smolin of the Perimeter Institute for Theoretical Physics claims that this is the principal weakness of string theory as a theory of quantum gravity, saying that string theory has failed to incorporate this important insight from general relativity.
Others have disagreed with Smolin's characterization of string theory, among them string theorist Joseph Polchinski, who disputed these claims in a review of Smolin's book.
Polchinski notes that an important open problem in quantum gravity is to develop holographic descriptions of gravity which do not require the gravitational field to be asymptotically anti-de Sitter. Smolin has responded by saying that the AdS/CFT correspondence, as it is currently understood, may not be strong enough to resolve all concerns about background independence.
Sociology of science
Since the superstring revolutions of the 1980s and 1990s, string theory has been one of the dominant paradigms of high energy theoretical physics. Some string theorists have expressed the view that there does not exist an equally successful alternative theory addressing the deep questions of fundamental physics. In an interview from 1987, Nobel laureate David Gross made controversial comments about the reasons for the popularity of string theory.
Several other high-profile theorists and commentators have expressed similar views, suggesting that there are no viable alternatives to string theory.
Many critics of string theory have commented on this state of affairs. In his book criticizing string theory, Peter Woit views the status of string theory research as unhealthy and detrimental to the future of fundamental physics. He argues that the extreme popularity of string theory among theoretical physicists is partly a consequence of the financial structure of academia and the fierce competition for scarce resources. In his book The Road to Reality, mathematical physicist Roger Penrose expresses similar views, stating "The often frantic competitiveness that this ease of communication engenders leads to bandwagon effects, where researchers fear to be left behind if they do not join in." Penrose also claims that the technical difficulty of modern physics forces young scientists to rely on the preferences of established researchers, rather than forging new paths of their own. Lee Smolin expresses a slightly different position in his critique, claiming that string theory grew out of a tradition of particle physics which discourages speculation about the foundations of physics, while his preferred approach, loop quantum gravity, encourages more radical thinking.
Smolin goes on to offer a number of prescriptions for how scientists might encourage a greater diversity of approaches to quantum gravity research.
| Physical sciences | Particle physics: General | null |
28347 | https://en.wikipedia.org/wiki/Sling%20%28weapon%29 | Sling (weapon) | A sling is a projectile weapon typically used to hand-throw a blunt projectile such as a stone, clay, or lead "sling-bullet". It is also known as the shepherd's sling or slingshot (in British English; elsewhere "slingshot" usually refers to a hand-held elastic catapult). Someone who specializes in using slings is called a slinger.
A sling has a small cradle or pouch in the middle of two retention cords, where a projectile is placed. There is a loop on the end of one side of the retention cords. Depending on the design of the sling, either the middle finger or the wrist is placed through a loop on the end of one cord, and a tab at the end of the other cord is placed between the thumb and forefinger. The sling is swung in an arc, and the tab released at a precise moment. This action releases the projectile to fly inertially and ballistically towards the target. By its double-pendulum kinetics, the sling enables stones (or spears) to be thrown much further than they could be by hand alone.
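As a rough illustration of why a higher release speed translates into a much longer throw, the sketch below applies the standard drag-free projectile-range formula. The release speed and angle are assumed values chosen for the example, not figures from the article, and real ranges are shorter because of air drag.

```python
import math

# Rough, drag-free range estimate for a slung stone.
# The release speed and angle are illustrative assumptions: a sling
# roughly multiplies hand speed by the ratio of (arm + sling) length
# to arm length, so release speeds well above a bare-handed throw
# are plausible.
g = 9.81                           # gravitational acceleration, m/s^2
release_speed = 30.0               # assumed release speed, m/s
release_angle = math.radians(45)   # launch angle for maximum ideal range

# Ideal projectile range on flat ground: R = v^2 * sin(2*theta) / g
ideal_range = release_speed**2 * math.sin(2 * release_angle) / g
print(f"Idealized range at {release_speed} m/s: {ideal_range:.0f} m")
```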
The sling is inexpensive and easy to build. Historically it has been used for hunting game and in combat. Today the sling is of interest as a wilderness survival tool and an improvised weapon.
The sling in antiquity
Origins
The sling is an ancient weapon known to Neolithic peoples around the Mediterranean, but is likely to be much older. It is possible that the sling was invented during the Upper Palaeolithic at a time when new technologies such as the spear-thrower and the bow and arrow were beginning to emerge.
Archaeology
Whereas stones and clay objects thought by many archaeologists to be sling-bullets are common finds in the archaeological record, slings themselves are rare. This is both because a sling's materials are biodegradable and because slings were lower-status weapons, rarely preserved in a wealthy person's grave.
The oldest-known surviving slings—radiocarbon dated to —were recovered from South American archaeological sites on the coast of Peru. The oldest-known surviving North American sling—radiocarbon dated to —was recovered from Lovelock Cave, Nevada.
The oldest known extant slings from the Old World were found in the tomb of Tutankhamun, who died in the 14th century BC. A pair of finely plaited slings were found with other weapons. The sling was probably intended for the departed pharaoh to use for hunting game.
Another Egyptian sling was excavated in El-Lahun in Al Fayyum Egypt in 1914 by William Matthew Flinders Petrie, and is now in the Petrie Museum of Egyptian Archaeology—Petrie dated it to . It was found alongside an iron spearhead. The remains are broken into three sections. Although fragile, the construction is clear: it is made of bast fibre (almost certainly flax) twine; the cords are braided in a 10-strand elliptical sennit and the cradle seems to have been woven from the same lengths of twine used to form the cords.
Ancient representations
Representations of slingers can be found on artifacts from all over the ancient world, including Assyrian and Egyptian reliefs, the columns of Trajan and Marcus Aurelius, on coins, and on the Bayeux Tapestry.
The oldest representation of a slinger in art may be from Çatalhöyük, from , though it is the only such depiction at the site, despite numerous depictions of archers.
Written history
Many European, Middle Eastern, Asian, and African peoples were users of slings. Thucydides and other authors describe its use by Greeks and Romans, and Strabo also extends it to the Iberians, Lusitanians and even some Gauls (which Caesar describes further in his account of the siege of Bibrax). He also mentions Persians and Arabs among those who used them. For his part, Diodorus includes Libyans and Phoenicians. Britons were frequent users of slings too.
Livy mentions some of the most famous of ancient sling experts: the people of the Balearic Islands, who often worked as mercenaries. Of Balearic slingers Strabo writes: "And their training in the use of slings used to be such, from childhood up, that they would not so much as give bread to their children unless they first hit it with the sling."
Classical accounts
The sling is mentioned as early as the writings of Homer, where several characters kill enemies by hurling stones at them.
Balearic slingers were amongst the specialist mercenaries extensively employed by Carthage against the Romans and other enemies. These light troops used three sizes of sling, according to the distance of their opponents. The weapons were made of vegetable fibre and animal sinew, launching either stones or lead missiles with devastating impact.
Xenophon in his history of the retreat of the Ten Thousand, 401 BC, relates that the Greeks suffered severely from the slingers in the army of Artaxerxes II of Persia, while they themselves had neither cavalry nor slingers, and were unable to reach the enemy with their arrows and javelins. This deficiency was rectified when a company of 200 Rhodians, who understood the use of leaden sling-bullets, was formed. They were able, says Xenophon, to project their missiles twice as far as the Persian slingers, who used large stones.
Various Greeks enjoyed a reputation for skill with the sling. Thucydides mentions the Acarnanians and Livy refers to the inhabitants of three Greek cities on the northern coast of the Peloponnesus as expert slingers.
Greek armies would also use mounted slingers (ἀκροβολισταί).
Roman skirmishers armed with slings and javelins were established by Servius Tullius. The late Roman writer Vegetius also discussed the sling in his work De Re Militari.
Biblical accounts
The sling is mentioned in the Bible, which provides what is believed to be the oldest textual reference to a sling in the Book of Judges, 20:16. This text was thought to have been written , but refers to events several centuries earlier.
The Bible provides a famous slinger account, the battle between David and Goliath from the First Book of Samuel 17:34–36, probably written in the 7th or 6th century BC, describing events that might have occurred . The sling, easily produced, was the weapon of choice for shepherds fending off animals. Due to this, the sling was a commonly used weapon by the Israelite militia. Goliath was a tall, well equipped and experienced warrior. In this account, the shepherd David persuades Saul to let him fight Goliath on behalf of the Israelites. Unarmoured and equipped only with a sling, five smooth rocks, and his staff, David defeats the champion Goliath with a well-aimed shot to the head.
Use of the sling is also mentioned in Second Kings 3:25, First Chronicles 12:2, and Second Chronicles 26:14 to further illustrate Israelite use.
Combat
Ancient peoples used the sling in combat—armies included both specialist slingers and regular soldiers equipped with slings. As a weapon, the sling had several advantages; a sling bullet lobbed in a high trajectory can achieve ranges in excess of . Modern authorities vary widely in their estimates of the effective range of ancient weapons. A bow and arrow could also have been used to produce a long range arcing trajectory, but ancient writers repeatedly stress the sling's advantage of range. The sling was light to carry and cheap to produce; ammunition in the form of stones was readily available and often to be found near the site of battle. The ranges the sling could achieve with moulded lead sling-bullets were surpassed only by the strong composite bow.
Caches of sling ammunition have been found at the sites of Iron Age hill forts of Europe; some 22,000 sling stones were found at Maiden Castle, Dorset. It is proposed that Iron Age hill forts of Europe were designed to maximize the effective defence by slingers.
The hilltop location of the wooden forts would have given the defending slingers the advantage of range over the attackers, and multiple concentric ramparts, each higher than the other, would allow a large number of men to create a hailstorm of stone. Consistent with this, it has been noted that defences are generally narrow where the natural slope is steep, and wider where the slope is more gradual.
Construction
A classic sling is braided from non-elastic material. The traditional materials are flax, hemp or wool. Slings by Balearic islanders were said to be made from a rush. Flax and hemp resist rotting, but wool is softer and more comfortable. Polyester is often used for modern slings, because it does not rot or stretch and is soft and free of splinters.
Braided cords are used in preference to twisted rope, as a braid resists twisting when stretched. This improves accuracy.
The overall length of a sling can vary. A slinger may have slings of different lengths. A longer sling is used when greater range is required. A length of about is typical.
At the centre of the sling, a cradle or pouch is constructed. This may be formed by making a wide braid from the same material as the cords or by inserting a piece of a different material such as leather. The cradle is typically diamond shaped (although some take the form of a net), and will fold around the projectile in use. Some cradles have a hole or slit that allows the material to wrap around the projectile slightly, thereby holding it more securely.
At the end of one cord (called the retention cord) a finger-loop is formed. At the end of the other cord (the release cord), it is a common practice to form a knot or a tab. The release cord will be held between finger and thumb to be released at just the right moment, and may have a complex braid to add bulk to the end. This makes the knot easier to hold, and the extra weight allows the loose end of a discharged sling to be recovered with a flick of the wrist.
Braided construction resists stretching, and therefore produces an accurate sling. Modern slings are begun by plaiting the cord for the finger loop in the centre of a double-length set of cords. The cords are then folded to form the finger-loop. The retained cord is then plaited away from the loop as a single cord up to the pocket. The pocket is then plaited, most simply as another pair of cords, or with flat braids or a woven net. The remainder of the sling, the released cord, is plaited as a single cord, and then finished with a knot or plaited tab.
Impact
Ancient poets wrote that sling-bullets could penetrate armour, and that lead projectiles, heated by their passage through the air, would melt in flight. In the first instance, it seems likely that the authors were indicating that slings could cause injury through armour by a percussive effect (i.e., the energy of a sling-bullet delivered at high velocity causing blunt trauma injury upon impact) rather than by penetration. In the latter case, it has been proposed that they were impressed by the degree of deformation suffered by lead sling-bullets after hitting a hard target.
According to a description by Procopius, the sling had an effective range greater than that of a Hun bow and arrow. In his book Wars of Justinian, he recorded the felling of a Hun warrior by a slinger.
Ammunition
The simplest projectile was a stone, preferably well-rounded. Suitable ammunition was frequently gathered from a river or a beach. The size of the projectiles can vary dramatically, from pebbles massing no more than to fist-sized stones massing or more. The use of such stones as projectiles is well attested in the ethnographic record.
Possible projectiles were also purpose-made from clay; this allowed a very high consistency of size and shape to aid range and accuracy. Many examples have been found in the archaeological record.
The best ammunition was cast from lead. Leaden sling-bullets were widely used in the Greek and Roman world. For a given mass, lead, being very dense, offers the minimum size and therefore minimum air resistance. In addition, leaden sling-bullets are small and difficult to see in flight; their concentrated impact is also a better armour-piercer and better able to penetrate a body.
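A quick calculation shows how much smaller a lead projectile is than a stone or clay one of the same mass. The densities below are standard reference values, and the 30 g projectile mass is an assumption chosen for illustration, not a figure from the article.

```python
import math

# For a given mass, a denser material gives a smaller projectile and thus
# less air resistance and a harder-to-see missile in flight.
densities = {"lead": 11340.0, "stone": 2600.0, "clay": 1900.0}  # kg/m^3
mass = 0.030  # kg, illustrative assumption

for material, rho in densities.items():
    volume = mass / rho                                      # m^3
    diameter = 2 * (3 * volume / (4 * math.pi)) ** (1 / 3)   # equal-mass sphere
    print(f"{material:>5}: {diameter * 1000:.1f} mm diameter")
```

For the same mass, the lead sphere comes out at roughly 60% of the stone's diameter, with correspondingly less frontal area to see and to slow it down.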
In some cases, the lead would be cast in a simple open mould made by pushing a finger or thumb into sand and pouring molten metal into the hole. However, sling-bullets were more frequently cast in two-part moulds. Such sling-bullets come in a number of shapes including an ellipsoidal form closely resembling an acorn; this could be the origin of the Latin word for a leaden sling-bullet: glandes plumbeae (literally 'leaden acorns') or simply glandes (meaning 'acorns', singular glans).
Other shapes include spherical and (by far the most common) biconical, which resembles the shape of the shell of an almond nut or a flattened American football.
The ancients do not seem to have taken advantage of the manufacturing process to produce consistent results; leaden sling-bullets vary significantly. The reason why the almond shape was favoured is not clear: it is possible that there is some aerodynamic advantage, but it seems equally likely that there is some more prosaic reason, such as the shape being easy to extract from a mould, or the fact that it will rest in a sling cradle with little danger of rolling out. It is also possible that the non-circular, almond shape made the bullet spin in flight, somewhat like a disc, adding to the flight distance.
Almond-shaped leaden sling-bullets were typically long, wide, and weighed . Very often, symbols or writings were moulded into lead sling-bullets. Many examples have been found, including a collection of about 80 sling-bullets from the siege of Perusia in Etruria from 41 BC, to be found in the museum of modern Perugia. Examples of symbols include a stylized lightning bolt, a snake, and a scorpion – reminders of how a sling might strike without warning. Writing might include the name of the owning military unit or commander or might be more imaginative: "Take this", "Ouch", "get pregnant with this" and even "For Pompey's backside" added insult to injury, whereas dexai ('take this' or 'catch!') is merely sarcastic. In Yavne, a sling bullet with the Greek inscription "Victory of Heracles and Hauronas" was discovered; the two gods were the patrons of the city during the Hellenistic period.
Julius Caesar writes in De bello Gallico, book 5, about clay shot being heated before slinging, so that it might set fire to thatch.
"Whistling" bullets
Some bullets have been found with holes drilled in them. It was thought the holes were to contain poison. John Reid of the Trimontium Trust, finding holed Roman bullets excavated at the Burnswark hillfort, has proposed that the holes would cause the bullets to "whistle" in flight and the sound would intimidate opponents. The holed bullets were generally small and thus not particularly dangerous. Several could fit into a pouch and a single slinger could produce a terrorizing barrage. Experiments with modern copies demonstrate they produce a whooshing sound in flight.
The sling in medieval period
Europe
The Bayeux Tapestry of the 1070s portrays the use of slings in a hunting context. Frederick I, Holy Roman Emperor employed slingers during the Siege of Tortona in 1155 to suppress the garrison while his own men built siege engines. Indeed, slings seem to have been a fairly common weapon in Italy during the 11th and 12th centuries. Slings were also used by the Byzantines. On the Iberian Peninsula, the Spanish and Portuguese infantry favoured it against light and agile Moorish troops. The staff sling continued to be used in sieges and the sling was used as a part of large siege engines.
The Americas
The sling was known throughout the Americas.
In ancient Andean civilizations such as the Inca Empire, slings were made from llama wool. These slings typically have a cradle that is long and thin and features a relatively long slit. Andean slings were constructed from contrasting colours of wool; complex braids and fine workmanship can result in beautiful patterns. Ceremonial slings were also made; these were large, non-functional and generally lacked a slit. To this day, ceremonial slings are used in parts of the Andes as accessories in dances and in mock battles. They are also used by llama herders; the animals will move away from the sound of a stone landing. The stones are not slung to hit the animals, but to persuade them to move in the desired direction.
The sling was also used in the Americas for hunting and warfare. One notable use was in Incan resistance against the conquistadors. These slings were apparently very powerful; in 1491: New Revelations of the Americas Before Columbus, author Charles C. Mann quoted a conquistador as saying that an Incan sling "could break a sword in two pieces" and "kill a horse". Some slings spanned as much as long and weighed an impressive .
Guam
Unique amongst most Pacific Islanders, the Chamorro achieved remarkable proficiency with the weapon, as witnessed by the 17th-century Belgian missionary Pedro Coomans:
"Their offensive weapons include the sling, which they aim very skillfully at the head. Out of small ropes they weave a sort of net-bag, in which to carry stones with an oblong shape, some formed out of a marble stone, and others of clay, hardened in either the sun or fire. They whirl and shoot those so violently. Should it make an impact upon a more delicate part, like the heart, or the head, the man is flattened on the spot. Then, if envy would make them want to burn a house from a distance, they would stuff the perforated side of it with tow burning with a very ferocious fire, which, with a swift movement became a flame, and sail away to seek shelter in enemy houses."
The sling stone (in its "almond"/ovoid shape) is a vital artifact of Chamorro culture, so much so that it was adopted for the flag and seal of Guam.
Variants
Staff sling
The staff sling, also known as the stave sling, fustibalus (Latin), and fustibale (French), consists of a staff (a length of wood) with a short sling at one end. One cord of the sling is firmly attached to the stave and the other end has a loop that can slide off and release the projectile. Staff slings are extremely powerful because the stave can be made as long as two meters, creating a powerful lever. Ancient art shows slingers holding staff slings by one end, with the pocket behind them, and using both hands to throw the staves forward over their heads.
The staff sling has a similar or superior range to the shepherd's sling, and can be as accurate in practiced hands. It is generally suited for heavier missiles and siege situations as staff slings can achieve very steep trajectories for slinging over obstacles such as castle walls. The staff itself can become a close combat weapon in a melee. The staff sling is able to throw heavy projectiles a much greater distance and at a higher arc than a hand sling. Staff slings were in use well into the age of gunpowder as grenade launchers, and were used in ship-to-ship combat to throw incendiaries.
Piao Shi (whirlwind stone)
Piao Shi (飃石, lit. 'whirlwind stone'), also known as Shou Pao (手砲, lit. 'hand cannon') during the Song period, is the Chinese name for the staff sling. It consists of a short cord tied to one end of a five chi bamboo pole, and is usually employed in siege defense alongside larger stone throwers. It is depicted and described in the Ji Xiao Xin Shu (紀效新書).
Kestros
The kestros (also known as the kestrosphendone, cestrus, or cestrosphendone) is a sling weapon mentioned by Livy and Polybius. It seems to have been a heavy dart flung from a leather sling. It was invented in 168 BC and was employed by some of the Macedonian troops of King Perseus in the Third Macedonian war.
Siege engines
The trebuchet was a siege engine that used either the power of men pulling on ropes (the traction trebuchet) or the energy stored in a raised counterweight to rotate what was, again, a staff sling. It was designed so that, when the throwing arm of the trebuchet had swung forward sufficiently, one end of the sling would automatically become detached and release the projectile. Some trebuchets were small and operated by a very small crew; however, unlike the onager, it was possible to build the trebuchet on a gigantic scale: such giants could hurl enormous rocks at huge ranges. Trebuchets are, in essence, mechanized slings.
Hand-trebuchet
The hand-trebuchet was a staff sling mounted on a pole, using a lever mechanism to propel projectiles.
Today
Traditional slinging is still practiced as it always has been in the Balearic Islands, and competitions and leagues are common. In the rest of the world, the sling is primarily a hobby weapon, and a growing number of people make and practice with them. In recent years 'slingfests' have been held in Wyoming, USA, in September 2007 and in Staffordshire, England, in June 2008.
According to Guinness World Records, the current record for the greatest distance achieved in hurling an object from a sling is , using a long sling and a dart, set by David Engvall at Baldwin Lake, California, on September 13, 1992.
The principles of the sling may find use on a larger scale in the future; proposals exist for tether propulsion of spacecraft, which functionally is an oversized sling to propel a spaceship.
The sling is used today as a weapon primarily by protestors, to launch either stones or incendiary devices, such as Molotov cocktails. Classic woolen slings are still in use in the Middle East by Arab nomads and Bedouins to ward off jackals and hyenas. International Brigades used slings to throw grenades during the Spanish Civil War. Similarly, the Finns made use of sling-launched Molotov cocktails in the Winter War against Soviet tanks. Slings were used during the various Palestinian riots against modern army personnel and riot police. They were also used in the 2008 disturbances in Kenya.
| Technology | Projectile weapons | null |
28362 | https://en.wikipedia.org/wiki/Freight%20transport | Freight transport | Freight transport, also referred to as freight forwarding, is the physical process of transporting commodities and merchandise goods and cargo. The term shipping originally referred to transport by sea but in American English, it has been extended to refer to transport by land or air (International English: "carriage") as well. "Logistics", a term borrowed from the military environment, is also used in the same sense.
Modes of shipment
In 2015, 108 trillion tonne-kilometers of freight were transported worldwide, a volume anticipated to grow by 3.4% per year until 2050 (implying roughly 128 trillion tonne-kilometers by 2020): 70% by sea, 18% by road, 9% by rail, 2% by inland waterways and less than 0.25% by air.
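The 2020 figure in parentheses follows from simple compound growth on the 2015 base. A minimal check, assuming the 3.4% rate is applied uniformly each year:

```python
# Quick check of the projection in the text: 108 trillion tonne-kilometers
# in 2015, compounding at 3.4% per year, reaches roughly 128 trillion by 2020.
base_year, base_volume = 2015, 108.0   # trillion tonne-kilometers
growth_rate = 0.034                    # 3.4% per year

for year in (2020, 2030, 2050):
    projected = base_volume * (1 + growth_rate) ** (year - base_year)
    print(f"{year}: {projected:.0f} trillion tonne-km")
```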
Ground
Land or "ground" shipping can be made by train or by truck (British English: lorry). Ground transport is typically more affordable than air, but more expensive than sea, especially in developing countries, where inland infrastructure may not be efficient. In air and sea shipments, ground transport is required to take the cargo from its place of origin to the airport or seaport and then to its destination because it is not always possible to establish a production facility near ports due to the limited coastlines of countries.
Ship
Much freight transport is done by cargo ships. An individual nation's fleet and the people that crew it are referred to as its merchant navy or merchant marine. According to a 2018 report from the United Nations Conference on Trade and Development (UNCTAD), merchant shipping (or seaborne trade) carries 80-90% of international trade by volume and 60-70% by value. On rivers and canals, barges are often used to carry bulk cargo.
Air
Cargo is transported by air in specialized cargo aircraft and in the luggage compartments of passenger aircraft. Air freight is typically the fastest mode for long-distance freight transport, but it is also the most expensive.
Space
Multimodal
Cargo is exchanged between different modes of transportation via transport hubs, also known as transport interchanges or nodes (e.g. train stations, airports, etc.). Cargo is shipped under a single contract but carried using at least two different modes of transport (e.g. ground and air). The cargo need not be containerized.
Intermodal
Multimodal transport featuring containerized cargo (or intermodal container) that is easily transferred between ship, rail, plane and truck.
For example, a shipper works together with both ground and air transportation to ship an item overseas. Intermodal freight transport is used to plan the route and carry out the shipping service from the manufacturer to the door of the recipient.
Terms of shipment
The Incoterms (or International Commercial Terms) published by the International Chamber of Commerce (ICC) are accepted by governments, legal authorities, and practitioners worldwide for the interpretation of the most commonly used terms in international trade. Common terms include:
Free on Board (FOB)
Cost and Freight (CFR, C&F, CNF)
Cost, Insurance and Freight (CIF)
The term "best way" generally implies that the shipper will choose the carrier that offers the lowest rate (to the shipper) for the shipment. In some cases, however, other factors, such as better insurance or faster transit time, will cause the shipper to choose an option other than the lowest bidder.
Door-to-door shipping
Door-to-door (DTD or D2D) shipping refers to the domestic or international shipment of cargo from the point of origin (POI) to the destination while generally remaining on the same piece of equipment and avoiding multiple transactions, trans-loading, and cross-docking without interim storage.
International DTD is a service provided by many international shipping companies and may feature intermodal freight transport using containerized cargo. The quoted price of this service includes all shipping, handling, import and customs duties, making it a hassle-free option for customers to import goods from one jurisdiction to another. This is compared to standard shipping, the price of which typically includes only the expenses incurred by the shipping company in transferring the object from one place to another. Customs fees, import taxes and other tariffs may contribute substantially to this base price before the item ever arrives.
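A small worked example may make the pricing difference concrete. All of the rates and amounts below are assumptions invented for illustration, not real tariffs or carrier charges; the point is only that a door-to-door quote bundles charges that a standard shipping quote leaves for the recipient to pay on arrival.

```python
# Illustrative landed-cost comparison; every figure is an assumed example value.
base_freight      = 120.00   # carrier charge, origin to destination
handling          = 15.00
customs_duty_rate = 0.05     # 5% of declared goods value (assumed)
import_tax_rate   = 0.20     # VAT-style tax on value plus duty (assumed)
goods_value       = 800.00   # declared value of the shipment

duties = goods_value * customs_duty_rate
taxes  = (goods_value + duties) * import_tax_rate

standard_quote     = base_freight                      # buyer pays duties/taxes on arrival
door_to_door_quote = base_freight + handling + duties + taxes

print(f"standard shipping quote: {standard_quote:.2f}")
print(f"door-to-door quote:      {door_to_door_quote:.2f}")
```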
| Technology | Basics_11 | null |
28368 | https://en.wikipedia.org/wiki/Spamming | Spamming | Spamming is the use of messaging systems to send multiple unsolicited messages (spam) to large numbers of recipients for the purpose of commercial advertising, non-commercial proselytizing, or any prohibited purpose (especially phishing), or simply repeatedly sending the same message to the same user. While the most widely recognized form of spam is email spam, the term is applied to similar abuses in other media: instant messaging spam, Usenet newsgroup spam, Web search engine spam, spam in blogs, wiki spam, online classified ads spam, mobile phone messaging spam, Internet forum spam, junk fax transmissions, social spam, spam mobile apps, television advertising and file sharing spam. It is named after Spam, a luncheon meat, by way of a Monty Python sketch about a restaurant that has Spam in almost every dish and in which Vikings annoyingly sing "Spam" repeatedly.
Spamming remains economically viable because advertisers have no operating costs beyond the management of their mailing lists, servers, infrastructures, IP ranges, and domain names, and it is difficult to hold senders accountable for their mass mailings. The costs, such as lost productivity and fraud, are borne by the public and by Internet service providers, which have added extra capacity to cope with the volume. Spamming has been the subject of legislation in many jurisdictions.
A person who creates spam is called a spammer.
Etymology
The term spam is derived from the 1970 "Spam" sketch of the BBC sketch comedy television series Monty Python's Flying Circus. The sketch, set in a cafe, has a waitress reading out a menu where every item but one includes the Spam canned luncheon meat. As the waitress recites the Spam-filled menu, a chorus of Viking patrons drown out all conversations with a song, repeating "Spam, Spam, Spam, Spam… Lovely Spam! Wonderful Spam!".
In the 1980s the term was adopted to describe certain abusive users who frequented BBSs and MUDs, who would repeat "Spam" a huge number of times to scroll other users' text off the screen. In early chat-room services like PeopleLink and the early days of Online America (later known as America Online or AOL), they actually flooded the screen with quotes from the Monty Python sketch. This was used as a tactic by insiders of a group that wanted to drive newcomers out of the room so the usual conversation could continue. It was also used to prevent members of rival groups from chatting—for instance, Star Wars fans often invaded Star Trek chat rooms, filling the space with blocks of text until the Star Trek fans left.
It later came to be used on Usenet to mean excessive multiple posting—the repeated posting of the same message. The unwanted message would appear in many, if not all newsgroups, just as Spam appeared in all the menu items in the Monty Python sketch. One of the earliest people to use "spam" in this sense was Joel Furr. This use had also become established—to "spam" Usenet was to flood newsgroups with junk messages. The word was also attributed to the flood of "Make Money Fast" messages that clogged many newsgroups during the 1990s. In 1998, the New Oxford Dictionary of English, which had previously only defined "spam" in relation to the trademarked food product, added a second definition to its entry for "spam": "Irrelevant or inappropriate messages sent on the Internet to a large number of newsgroups or users."
There was also an effort to differentiate between types of newsgroup spam. Messages that were crossposted to too many newsgroups at once, as opposed to those that were posted too frequently, were called "velveeta" (after a cheese product), but this term did not persist.
History
Pre-Internet
In the late 19th century, Western Union allowed telegraphic messages on its network to be sent to multiple destinations. The first recorded instance of a mass unsolicited commercial telegram is from May 1864, when some British politicians received an unsolicited telegram advertising a dentist.
Internet era
The earliest documented spam (although the term had not yet been coined) was a message advertising the availability of a new model of Digital Equipment Corporation computers sent by Gary Thuerk to 393 recipients on ARPANET on May 3, 1978. Rather than send a separate message to each person, which was the standard practice at the time, he had an assistant, Carl Gartley, write a single mass email. Reaction from the net community was fiercely negative, but the spam did generate some sales.
Spamming had been practiced as a prank by participants in multi-user dungeon games, to fill their rivals' accounts with unwanted electronic junk.
The first major commercial spam incident started on March 5, 1994, when a husband and wife team of lawyers, Laurence Canter and Martha Siegel, began using bulk Usenet posting to advertise immigration law services. The incident was commonly termed the "Green Card spam", after the subject line of the postings. Defiant in the face of widespread condemnation, the attorneys claimed their detractors were hypocrites or "zealots", claimed they had a free speech right to send unwanted commercial messages, and labeled their opponents "anti-commerce radicals". The couple wrote a controversial book entitled How to Make a Fortune on the Information Superhighway.
An early example of nonprofit fundraising bulk posting via Usenet also occurred in 1994 on behalf of CitiHope, an NGO attempting to raise funds to rescue children at risk during the Bosnian War. However, as it was a violation of their terms of service, the ISP Panix deleted all of the bulk posts from Usenet, only missing three copies.
Within a few years, the focus of spamming (and anti-spam efforts) moved chiefly to email, where it remains today. By 1999, Khan C. Smith, a well-known hacker at the time, had begun to commercialize the bulk email industry and rallied thousands into the business by building more friendly bulk email software and providing internet access illegally hacked from major ISPs such as Earthlink, and from botnets.
By 2009, the majority of spam sent around the world was in the English language; spammers began using automatic translation services to send spam in other languages.
In different media
Email
Email spam, also known as unsolicited bulk email (UBE), or junk mail, is the practice of sending unwanted email messages, frequently with commercial content, in large quantities. Spam in email started to become a problem when the Internet was opened for commercial use in the mid-1990s. It grew exponentially over the following years, and by 2007 it constituted about 80% to 85% of all e-mail, by a conservative estimate. Pressure to make email spam illegal has resulted in legislation in some jurisdictions, but less so in others. The efforts taken by governing bodies, security systems and email service providers seem to be helping to reduce the volume of email spam. According to "2014 Internet Security Threat Report, Volume 19" published by Symantec Corporation, spam volume dropped to 66% of all email traffic.
An industry of email address harvesting is dedicated to collecting email addresses and selling compiled databases. Some of these address-harvesting approaches rely on users not reading the fine print of agreements, resulting in their agreeing to send messages indiscriminately to their contacts. This is a common approach in social networking spam such as that generated by the social networking site Quechup.
Instant messaging
Instant messaging spam makes use of instant messaging systems. Although less prevalent than its e-mail counterpart, according to a report from Ferris Research, 500 million spam IMs were sent in 2003, twice the level of 2002.
Newsgroup and forum
Newsgroup spam is a type of spam where the targets are Usenet newsgroups. Spamming of Usenet newsgroups actually pre-dates e-mail spam. Usenet convention defines spamming as excessive multiple posting, that is, the repeated posting of a message (or substantially similar messages). The prevalence of Usenet spam led to the development of the Breidbart Index as an objective measure of a message's "spamminess".
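The Breidbart Index can be sketched in a few lines: for a set of copies of substantially the same message, it sums the square root of the number of newsgroups each copy was crossposted to, so it penalizes both excessive multi-posting and excessive crossposting. The function and example figures below are illustrative; the cancellation thresholds used in practice are a matter of Usenet convention and are not shown here.

```python
import math

def breidbart_index(crosspost_counts):
    """Breidbart Index: the sum, over all copies of a message, of the
    square root of the number of newsgroups each copy was posted to."""
    return sum(math.sqrt(n) for n in crosspost_counts)

# Example: ten separate copies, each crossposted to nine newsgroups.
copies = [9] * 10
print(breidbart_index(copies))  # 10 * sqrt(9) = 30.0
```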
Forum spam is the creation of advertising messages on Internet forums. It is generally done by automated spambots. Most forum spam consists of links to external sites, with the dual goals of increasing search engine visibility in highly competitive areas such as weight loss, pharmaceuticals, gambling, pornography, real estate or loans, and generating more traffic for these commercial websites. Some of these links contain code to track the spambot's identity; if a sale goes through, the spammer behind the spambot earns a commission.
Mobile phone
Mobile phone spam is directed at the text messaging service of a mobile phone. This can be especially irritating to customers not only for the inconvenience, but also because of the fee they may be charged per text message received in some markets.
To comply with CAN-SPAM regulations in the US, SMS messages now must provide options of HELP and STOP, the latter to end communication with the advertiser via SMS altogether.
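A minimal sketch of what honoring those keywords might look like on the sender's side is given below. The reply texts, phone number, and in-memory storage are assumptions for illustration only, not wording mandated by any regulation or taken from the article.

```python
# Illustrative keyword handling for an SMS sender list; replies and storage
# are example assumptions, not regulated text.
subscribers = {"+15551234567"}

def handle_inbound_sms(sender: str, body: str) -> str:
    keyword = body.strip().upper()
    if keyword == "STOP":
        subscribers.discard(sender)   # end all further messages to this number
        return "You have been unsubscribed and will receive no further messages."
    if keyword == "HELP":
        return "Example help text: reply STOP to unsubscribe."
    return ""  # not a compliance keyword; handle as a normal reply

print(handle_inbound_sms("+15551234567", "STOP"))
print("+15551234567" in subscribers)  # False after opting out
```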
Despite the high number of phone users, there has been relatively little phone spam, because there is a charge for sending SMS. Recently, mobile phone spam has also been observed being delivered via browser push notifications. These can result from allowing malicious websites, or websites delivering malicious ads, to send a user notifications.
Social networking spam
Facebook and Twitter are not immune to messages containing spam links. Spammers hack into accounts and send false links under the guise of a user's trusted contacts such as friends and family. As for Twitter, spammers gain credibility by following verified accounts such as that of Lady Gaga; when that account owner follows the spammer back, it legitimizes the spammer.
Twitter has studied what interest structures allow their users to receive interesting tweets and avoid spam, despite the site using the broadcast model, in which all tweets from a user are broadcast to all followers of the user. Spammers, out of malicious intent, post either unwanted (or irrelevant) information or spread misinformation on social media platforms.
Social spam
Spreading beyond the centrally managed social networking platforms, user-generated content increasingly appears on business, government, and nonprofit websites worldwide. Fake accounts and comments planted by computers programmed to issue social spam can infiltrate these websites.
Blog, wiki, and guestbook
Blog spam is spamming on weblogs. In 2003, this type of spam took advantage of the open nature of comments in the blogging software Movable Type by repeatedly placing comments to various blog posts that provided nothing more than a link to the spammer's commercial web site.
Similar attacks are often performed against wikis and guestbooks, both of which accept user contributions.
Another possible form of spam in blogs is the spamming of a certain tag on websites such as Tumblr.
Spam targeting video sharing sites
In actual video spam, the uploaded video is given a name and description featuring a popular figure or event that is likely to draw attention, or a certain image is timed to come up as the video's thumbnail to mislead the viewer, such as a still image from a feature film purporting to be a part-by-part upload of a pirated movie (e.g. Big Buck Bunny Full Movie Online - Part 1/10 HD), or a link to a supposed keygen, trainer, or ISO file for a video game, or something similar. The actual content of the video ends up being totally unrelated, a Rickroll, offensive, or simply on-screen text of a link to the site being promoted. In some cases, the link in question may lead to an online survey site or to a password-protected archive file with instructions leading to the aforementioned survey (though the survey, and the archive file itself, is worthless and does not contain the file in question at all), or, in extreme cases, to malware. Others may upload videos presented in an infomercial-like format selling their product, featuring actors and paid testimonials, though the promoted product or service is of dubious quality and would likely not pass the scrutiny of a standards and practices department at a television station or cable network.
VoIP Spam
VoIP spam consists of unsolicited calls made over Voice over Internet Protocol (VoIP), usually using the Session Initiation Protocol (SIP). It is nearly identical to telemarketing calls over traditional phone lines. When the recipient answers the call, a pre-recorded spam message or advertisement is usually played back. This is generally easier for the spammer, as VoIP services are cheap and easy to anonymize over the Internet, and there are many options for placing a large number of calls from a single location. Accounts or IP addresses being used for VoIP spam can usually be identified by a large number of outgoing calls, a low call-completion rate, and short call lengths.
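The traffic characteristics just mentioned can be expressed as a simple scoring heuristic. The sketch below is purely illustrative; the thresholds and field names are assumptions rather than part of any standard SIP anti-spam mechanism.
    # Illustrative heuristic for flagging a possible VoIP spam source based on the
    # indicators described above: many outgoing calls, a low call-completion rate,
    # and short average call length. All thresholds are arbitrary assumptions.
    def looks_like_voip_spam(calls_placed, calls_answered, avg_call_seconds,
                             min_calls=1000, max_completion=0.20, max_duration=10.0):
        if calls_placed < min_calls:
            return False
        completion_rate = calls_answered / calls_placed
        return completion_rate < max_completion and avg_call_seconds < max_duration

    print(looks_like_voip_spam(calls_placed=50000, calls_answered=4000, avg_call_seconds=6.5))   # True
    print(looks_like_voip_spam(calls_placed=200, calls_answered=150, avg_call_seconds=180.0))    # False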
Academic search
Academic search engines enable researchers to find academic literature and are used to obtain citation data for calculating author-level metrics. Researchers from the University of California, Berkeley and OvGU demonstrated that most (web-based) academic search engines, especially Google Scholar, are not capable of identifying spam attacks. The researchers manipulated the citation counts of articles, and managed to make Google Scholar index completely fabricated articles, some containing advertising.
Mobile apps
Spamming in mobile app stores includes (i) apps that were automatically generated and as a result do not have any specific functionality or a meaningful description; (ii) multiple instances of the same app being published to obtain increased visibility in the app market; and (iii) apps that make excessive use of unrelated keywords to attract users through unintended searches.
Bluetooth
Bluespam, or the action of sending spam to Bluetooth-enabled devices, is another form of spam that has developed in recent years.
Noncommercial forms
E-mail and other forms of spamming have been used for purposes other than advertisements. Many early Usenet spams were religious or political. Serdar Argic, for instance, spammed Usenet with historical revisionist screeds. A number of evangelists have spammed Usenet and e-mail media with preaching messages. A growing number of criminals are also using spam to perpetrate various sorts of fraud.
Geographical origins
In 2011 the origins of spam were analyzed by Cisco Systems. They provided a report that shows spam volume originating from countries worldwide.
Trademark issues
Hormel Foods Corporation, the maker of SPAM luncheon meat, does not object to the Internet use of the term "spamming". However, they did ask that the capitalized word "Spam" be reserved to refer to their product and trademark.
Cost–benefit analyses
The European Union's Internal Market Commission estimated in 2001 that "junk email" cost Internet users €10 billion per year worldwide. The California legislature found that spam cost United States organizations alone more than $13 billion in 2007, including lost productivity and the additional equipment, software, and manpower needed to combat the problem. Spam's direct effects include the consumption of computer and network resources, and the cost in human time and attention of dismissing unwanted messages. Large companies who are frequent spam targets utilize numerous techniques to detect and prevent spam.
The cost to providers of search engines is significant: "The secondary consequence of spamming is that search engine indexes are inundated with useless pages, increasing the cost of each processed query". The costs of spam also include the collateral costs of the struggle between spammers and the administrators and users of the media threatened by spamming.
Email spam exemplifies a tragedy of the commons: spammers use resources (both physical and human), without bearing the entire cost of those resources. In fact, spammers commonly do not bear the cost at all. This raises the costs for everyone. In some ways spam is even a potential threat to the entire email system, as operated in the past. Since email is so cheap to send, a tiny number of spammers can saturate the Internet with junk mail. Although only a tiny percentage of their targets are motivated to purchase their products (or fall victim to their scams), the low cost may provide a sufficient conversion rate to keep the spamming alive. Furthermore, even though spam appears not to be economically viable as a way for a reputable company to do business, it suffices for professional spammers to convince a tiny proportion of gullible advertisers that it is viable for those spammers to stay in business. Finally, new spammers go into business every day, and the low costs allow a single spammer to do a lot of harm before finally realizing that the business is not profitable.
Some companies and groups "rank" spammers; spammers who make the news are sometimes referred to by these rankings.
General costs
In all the cases listed above, both commercial and non-commercial, "spam happens" because the spammer's cost–benefit analysis comes out positive; the cost to recipients is effectively an externality that the spammer avoids paying.
Cost is the combination of:
Overhead: The costs and overhead of electronic spamming include bandwidth, developing or acquiring an email/wiki/blog spam tool, taking over or acquiring a host/zombie, etc.
Transaction cost: The incremental cost of contacting each additional recipient once a method of spamming is constructed, multiplied by the number of recipients (see CAPTCHA as a method of increasing transaction costs).
Risks: Chance and severity of legal and/or public reactions, including damages and punitive damages.
Damage: Impact on the community and/or communication channels being spammed (see Newsgroup spam).
Benefit is the total expected profit from spam, which may include any combination of the commercial and non-commercial reasons listed above. It is normally linear, based on the incremental benefit of reaching each additional spam recipient, combined with the conversion rate. The conversion rate for botnet-generated spam has recently been measured to be around one in 12,000,000 for pharmaceutical spam and one in 200,000 for infection sites as used by the Storm botnet. The authors of the study calculating those conversion rates noted, "After 26 days, and almost 350 million e-mail messages, only 28 sales resulted."
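A back-of-the-envelope calculation with the figures reported in that study shows how the economics hinge almost entirely on the near-zero marginal cost per message; the revenue-per-sale and cost-per-message numbers below are invented placeholders, not figures from the study.
    # Rough sketch of the spam cost-benefit arithmetic, using the Storm botnet study's
    # reported figures of roughly 350 million messages and 28 resulting sales.
    # revenue_per_sale and cost_per_message are hypothetical placeholders.
    messages_sent = 350_000_000
    sales = 28
    conversion_rate = sales / messages_sent            # about 1 in 12.5 million

    revenue_per_sale = 100.0                           # assumed, USD
    cost_per_message = 0.000001                        # assumed near-zero marginal cost, USD

    revenue = sales * revenue_per_sale
    cost = messages_sent * cost_per_message
    print(f"conversion rate: 1 in {1 / conversion_rate:,.0f}")   # 1 in 12,500,000
    print(f"revenue: ${revenue:,.2f}, cost: ${cost:,.2f}")       # $2,800.00 vs $350.00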
In crime
Spam can be used to spread computer viruses, trojan horses or other malicious software. The objective may be identity theft, or worse (e.g., advance fee fraud). Some spam attempts to capitalize on human greed, while some attempts to take advantage of the victims' inexperience with computer technology to trick them (e.g., phishing).
One of the world's most prolific spammers, Robert Alan Soloway, was arrested by US authorities on May 31, 2007. Described as one of the top ten spammers in the world, Soloway was charged with 35 criminal counts, including mail fraud, wire fraud, e-mail fraud, aggravated identity theft, and money laundering. Prosecutors allege that Soloway used millions of "zombie" computers to distribute spam during 2003. This is the first case in which US prosecutors used identity theft laws to prosecute a spammer for taking over someone else's Internet domain name.
In an attempt to assess potential legal and technical strategies for stopping illegal spam, a study cataloged three months of online spam data and researched website naming and hosting infrastructures. The study concluded that: 1) half of all spam programs have their domains and servers distributed over just eight percent or fewer of the total available hosting registrars and autonomous systems, with 80 percent of spam programs overall being distributed over just 20 percent of all registrars and autonomous systems; 2) of the 76 purchases for which the researchers received transaction information, there were only 13 distinct banks acting as credit card acquirers and only three banks provided the payment servicing for 95 percent of the spam-advertised goods in the study; and, 3) a "financial blacklist" of banking entities that do business with spammers would dramatically reduce monetization of unwanted e-mails. Moreover, this blacklist could be updated far more rapidly than spammers could acquire new banking resources, an asymmetry favoring anti-spam efforts.
Political issues
An ongoing concern expressed by parties such as the Electronic Frontier Foundation and the American Civil Liberties Union has to do with so-called "stealth blocking", a term for ISPs employing aggressive spam blocking without their users' knowledge. These groups' concern is that ISPs or technicians seeking to reduce spam-related costs may select tools that (either through error or design) also block non-spam e-mail from sites seen as "spam-friendly". Few object to the existence of these tools; it is their use in filtering the mail of users who are not informed of their use that draws fire.
Even though it is possible in some jurisdictions to treat some spam as unlawful merely by applying existing laws against trespass and conversion, some laws specifically targeting spam have been proposed. In 2003, the United States passed the CAN-SPAM Act of 2003, which took effect in 2004 and provided ISPs with tools to combat spam. This act allowed Yahoo! to successfully sue Eric Head, who settled the lawsuit for several thousand U.S. dollars in June 2004. But the law is criticized by many for not being effective enough. Indeed, the law was supported by some spammers and organizations that support spamming, and opposed by many in the anti-spam community.
Court cases
United States
Earthlink won a $25 million judgment against one of the most notorious and active "spammers", Khan C. Smith, in 2001 for his role in founding the modern spam industry, which caused billions of dollars in economic damage and drew thousands of spammers into the industry. His email efforts were said to make up more than a third of all Internet email sent from 1999 until 2002.
Sanford Wallace and Cyber Promotions were the target of a string of lawsuits, many of which were settled out of court, up through a 1998 Earthlink settlement that put Cyber Promotions out of business. Attorney Laurence Canter was disbarred by the Tennessee Supreme Court in 1997 for sending prodigious amounts of spam advertising his immigration law practice. In 2005, Jason Smathers, a former America Online employee, pleaded guilty to charges of violating the CAN-SPAM Act. In 2003, he sold a list of approximately 93 million AOL subscriber e-mail addresses to Sean Dunaway who sold the list to spammers.
In 2007, Robert Soloway lost a case in a federal court against the operator of a small Oklahoma-based Internet service provider who accused him of spamming. U.S. Judge Ralph G. Thompson granted a motion by plaintiff Robert Braver for a default judgment and permanent injunction against him. The judgment includes a statutory damages award of about $10 million under Oklahoma law.
In June 2007, two men were convicted of eight counts stemming from sending millions of e-mail spam messages that included hardcore pornographic images. Jeffrey A. Kilbride, 41, of Venice, California was sentenced to six years in prison, and James R. Schaffer, 41, of Paradise Valley, Arizona, was sentenced to 63 months. In addition, the two were fined $100,000, ordered to pay $77,500 in restitution to AOL, and ordered to forfeit more than $1.1 million, the amount of illegal proceeds from their spamming operation. The charges included conspiracy, fraud, money laundering, and transportation of obscene materials. The trial, which began on June 5, was the first to include charges under the CAN-SPAM Act of 2003, according to a release from the Department of Justice. The specific law that prosecutors used under the CAN-Spam Act was designed to crack down on the transmission of pornography in spam.
In 2005, Scott J. Filary and Donald E. Townsend of Tampa, Florida were sued by Florida Attorney General Charlie Crist for violating the Florida Electronic Mail Communications Act. The two spammers were required to pay $50,000 USD to cover the costs of investigation by the state of Florida, and a $1.1 million penalty if spamming were to continue, the $50,000 was not paid, or the financial statements provided were found to be inaccurate. The spamming operation was successfully shut down.
Edna Fiedler of Olympia, Washington, on June 25, 2008, pleaded guilty in a Tacoma court and was sentenced to two years imprisonment and five years of supervised release or probation for an Internet $1 million "Nigerian check scam." She had conspired to commit bank, wire and mail fraud against US citizens, specifically using the Internet, with an accomplice who had shipped counterfeit checks and money orders to her from Lagos, Nigeria, the previous November. At the time of her arrest, Fiedler had shipped out $609,000 in fake checks and money orders and was preparing to send an additional $1.1 million in counterfeit materials. In addition, the U.S. Postal Service recently intercepted counterfeit checks, lottery tickets and eBay overpayment schemes with a value of $2.1 billion.
In a 2009 opinion, Gordon v. Virtumundo, Inc., 575 F.3d 1040, the Ninth Circuit assessed the standing requirements necessary for a private plaintiff to bring a civil cause of action against spam senders under the CAN-SPAM Act of 2003, as well as the scope of the CAN-SPAM Act's federal preemption clause.
United Kingdom
In the first successful case of its kind, Nigel Roberts from the Channel Islands won £270 against Media Logistics UK who sent junk e-mails to his personal account.
In January 2007, a Sheriff Court in Scotland awarded Mr. Gordon Dick £750 (the then maximum sum that could be awarded in a Small Claim action) plus expenses of £618.66, a total of £1368.66 against Transcom Internet Services Ltd. for breaching anti-spam laws. Transcom had been legally represented at earlier hearings, but were not represented at the proof, so Gordon Dick got his decree by default. It is the largest amount awarded in compensation in the United Kingdom since Roberts v Media Logistics case in 2005.
Despite the statutory tort that is created by the Regulations implementing the EC Directive, few other people have followed their example. As the Courts engage in active case management, such cases would probably now be expected to be settled by mediation and payment of nominal damages.
New Zealand
In October 2008, an international internet spam operation run from New Zealand was cited by American authorities as one of the world's largest, and for a time responsible for up to a third of all unwanted e-mails. In a statement the US Federal Trade Commission (FTC) named Christchurch's Lance Atkinson as one of the principals of the operation. New Zealand's Internal Affairs announced it had lodged a $200,000 claim in the High Court against Atkinson and his brother Shane Atkinson and courier Roland Smits, after raids in Christchurch. This marked the first prosecution since the Unsolicited Electronic Messages Act (UEMA) was passed in September 2007.
The FTC said it had received more than three million complaints about spam messages connected to this operation, and estimated that it may be responsible for sending billions of illegal spam messages. The US District Court froze the defendants' assets to preserve them for consumer redress pending trial.
U.S. co-defendant Jody Smith forfeited more than $800,000 and faces up to five years in prison for charges to which he pleaded guilty.
Bulgaria
Bulgaria allows spam messages as long as the message is clearly marked as spam according to the Bulgarian E-Commerce Act (ECA). Spam messages may not be sent to users who have opted out of them; when a user opts out, their e-mail address is stored in a public registry. Companies sending spam messages to users listed in the registry have to pay a fine. Under the ECA, spam messages also may not be sent if the address the message is sent from is invalid or the sender's identity is unknown.
This has made lawsuits possible against Bulgarian ISPs and public e-mail providers with anti-spam policies, on the grounds that they are obstructing legal commercial activity and thus violating Bulgarian antitrust acts. While no such lawsuits have been filed so far, several cases of spam obstruction are currently awaiting decision in the Bulgarian Antitrust Commission (Комисия за защита на конкуренцията) and could end with serious fines for the ISPs in question.
The law contains other dubious provisions, for example the creation of a nationwide public electronic register of e-mail addresses that do not want to receive spam. Because publishing invalid or incorrect information in such a register is a criminal offense in Bulgaria, the register is routinely abused as a near-perfect source for e-mail address harvesting.
Newsgroups
news.admin.net-abuse.email
Stuttering
Stuttering, also known as stammering, is a speech disorder characterized externally by involuntary repetitions and prolongations of sounds, syllables, words, or phrases as well as involuntary silent pauses called blocks in which the person who stutters is unable to produce sounds. Almost 80 million people worldwide stutter, about 1% of the world's population.
Stuttering is not connected to the physical production of speech sounds or putting thoughts into words. Acute nervousness and stress do not cause stuttering, but they may trigger increased stuttering in people who have the speech disorder, and living with a stigmatized disability can result in anxiety and high allostatic stress load. Neither acute nor chronic stress, however, itself creates any predisposition to stuttering.
Characteristics
Audible disfluencies
Common stuttering behaviors are observable signs of speech disfluencies, for example: repeating sounds, syllables, words or phrases, silent blocks and prolongation of sounds.
Repeated movements
Syllable repetition—a single syllable word is repeated (for example: "on-on-on a chair") or a part of a word which is still a full syllable such as "un-un-under the ..." and "o-o-open".
Incomplete syllable repetition—an incomplete syllable is repeated, such as a consonant without a vowel, for example, "c-c-c-cold".
Multi-syllable repetition—more than one syllable such as a whole word, or more than one word is repeated, such as "I know-I know-I know a lot of information."
Prolongations
With audible airflow—prolongation of a sound occurs such as "mmmmmmmmmom".
Without audible airflow—such as a block of speech or a tense pause where no airflow occurs and no phonation occurs.
The disorder is variable, which means that in certain situations the stuttering might be more or less noticeable, such as speaking on the phone or in large groups. People who stutter often find that their stuttering fluctuates, sometimes at random.
The moment of stuttering often begins before the disfluency is produced, described as a moment of "anticipation"—where the person who stutters knows which word they are going to stutter on. The sensation of losing control and anticipation of a stutter can lead people who stutter to react in different ways including behavioral and cognitive reactions. Some behavioral reactions can manifest outwardly and be observed as physical tension or struggle anywhere in the body.
Outward physical behaviors
People who stutter may have reactions, avoidance behaviors, or secondary behaviors related to their stuttering that may look like struggle and tension in the body. These can range from tension in the head and neck, to behaviors such as snapping or tapping, to facial grimacing.
Behavioral reactions
These behavioral reactions are those that might not be apparent to listeners and may only be perceptible to people who stutter. Some people who stutter exhibit covert behaviors such as avoiding speaking situations, substituting words or phrases when they know they are going to stutter, or using other methods to hide their stutter.
Feelings and attitudes
Stuttering could have a significant negative cognitive and affective impact on the person who stutters. Joseph Sheehan described this in terms of an analogy to an iceberg, with the immediately visible and audible symptoms of stuttering above the waterline and a broader set of symptoms such as negative emotions hidden below the surface. Feelings of embarrassment, shame, frustration, fear, anger, and guilt are frequent in people who stutter, and may increase tension and effort. With time, continued negative experiences may crystallize into a negative self-concept and self-image. People who stutter may project their own attitudes onto others, believing that the others think them nervous or stupid. Such negative feelings and attitudes may need to be a major focus of a treatment program.
The impact of discrimination against stuttering can be severe. This may result in fears of stuttering in social situations, self-imposed isolation, anxiety, stress, shame, low self-esteem, being a possible target of bullying or discrimination, or feeling pressured to hide stuttering. In popular media, stuttering is sometimes seen as a symptom of anxiety, but there is no direct correlation in that direction.
Alternatively, there are those who embrace stuttering pride and encourage other stutterers to take pride in their stutter and to find how it has been beneficial for them.
According to adults who stutter, however, stuttering is defined as a "constellation of experiences" expanding beyond the external disfluencies that are apparent to the listener. Much of the experience of stuttering is internal and encompasses experiences beyond the external speech disfluencies, which are not observable by the listener.
Associated conditions
Stuttering can co-occur with other disabilities. These associated disabilities include:
attention deficit hyperactivity disorder (ADHD); the prevalence of ADHD in school-aged children who stutter is .
dyslexia; the prevalence of childhood stuttering in dyslexia is around 30–40%, while the prevalence of dyslexia in adults who stutter is around 30–50%.
autism
intellectual disability
language or learning disability
seizure disorders
social anxiety disorder
speech sound disorders
other developmental disorders
Causes
The cause of developmental stuttering is complex. It is thought to be neurological with a genetic factor.
Various hypotheses suggest multiple factors contributing to stuttering. There is strong evidence that stuttering has a genetic basis. Children who have first-degree relatives who stutter are three times as likely to develop a stutter. In a 2010 article, three genes were found by Dennis Drayna and team to correlate with stuttering: GNPTAB, GNPTG, and NAGPA. Researchers estimated that alterations in these three genes were present in 9% of those who have a family history of stuttering.
There is evidence that stuttering is more common in children who also have concurrent speech, language, learning or motor difficulties. For some people who stutter, congenital factors may play a role. In others, there could be added impact due to stressful situations. However, there is no evidence to suggest this as a cause.
Less common causes of stuttering include neurogenic stuttering (stuttering that occurs secondary to brain damage, such as after a stroke) and psychogenic stuttering (stuttering related to a psychological condition).
History of causes
Auditory processing deficits were proposed as a cause of stuttering due to differences in stuttering for deaf or Hard of Hearing individuals, as well as the impact of auditory feedback machines on some stuttering cases.
Differences in linguistic processing between people who stutter and people who do not have been proposed. Brain scans of adult stutterers have found greater activation of the right hemisphere than of the left hemisphere, which is associated with speech. In addition, reduced activation in the left auditory cortex has been observed.
The 'capacities and demands model' has been proposed to account for the heterogeneity of the disorder. Speech performance varies depending on the 'capacity' that the individual has for producing fluent speech, and the 'demands' placed upon the person by the speaking situation. Demands may be increased by internal factors, such as inadequate language skills, or by external factors. In stuttering, severity often increases when demands placed on the person's speech and language system increase. However, the precise nature of the capacity or incapacity has not been delineated. Stress, or demands, can impact many disorders without being a cause.
Another theory has been that adults who stutter have elevated levels of the neurotransmitter dopamine.
It was once thought that forcing a left-handed student to write with their right hand caused stuttering, a belief rooted in bias against left-handed people, but this myth has died out.
Diagnosis
Some characteristics of stuttered speech are not as easy for listeners to detect. As a result, diagnosing stuttering requires the skills of a licensed speech–language pathologist (SLP). Diagnosis of stuttering employs information both from direct observation of the individual and from the individual's background, through a case history. The SLP may collect a case history on the individual through a detailed interview or conversation with the parents (if the client is a child). They may also observe parent-child interactions and observe the speech patterns of the child's parents. The overall goal of assessment for the SLP will be (1) to determine whether a speech disfluency exists, and (2) to assess whether its severity warrants concern for further treatment.
During direct observation of the client, the SLP will observe various aspects of the individual's speech behaviors. In particular, the therapist might test for factors including the types of disfluencies present (using a test such as the Disfluency Type Index (DTI)), their frequency and duration (number of iterations, percentage of syllables stuttered (%SS)), and speaking rate (syllables per minute (SPM), words per minute (WPM)). They may also test for naturalness and fluency in speaking (naturalness rating scale (NAT), test of childhood stuttering (TOCS)) and physical concomitants during speech (Riley's Stuttering Severity Instrument Fourth Edition (SSI-4)). They might also employ a test to evaluate the severity of the stuttering and predictions for its course. One such test includes the stuttering prediction instrument for young children (SPI), which analyzes the child's case history, and stuttering frequency in order to determine the severity of the disfluency and its prognosis for chronicity for the future.
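As an illustration of two of the quantitative measures mentioned above, the sketch below computes the percentage of syllables stuttered (%SS) and the speaking rate in syllables per minute (SPM) from hypothetical counts; it is not part of any standardized instrument such as the SSI-4.
    # Illustrative computation of two fluency measures described above:
    #   %SS = stuttered syllables / total syllables spoken * 100
    #   SPM = total syllables spoken / speaking time in minutes
    # The sample counts are hypothetical.
    def percent_syllables_stuttered(stuttered_syllables, total_syllables):
        return 100.0 * stuttered_syllables / total_syllables

    def syllables_per_minute(total_syllables, speaking_time_seconds):
        return total_syllables / (speaking_time_seconds / 60.0)

    print(percent_syllables_stuttered(18, 300))   # 6.0 %SS
    print(syllables_per_minute(300, 95))          # about 189.5 SPM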
Stuttering is a multifaceted, complex disorder that can impact an individual's life in a variety of ways. Children and adults are monitored and evaluated for evidence of possible social, psychological or emotional signs of stress related to their disorder. Some common assessments of this type measure factors including: anxiety (Endler multidimensional anxiety scales (EMAS)), attitudes (personal report of communication apprehension (PRCA)), perceptions of self (self-rating of reactions to speech situations (SSRSS)), quality of life (overall assessment of the speaker's experience of stuttering (OASES)), behaviors (older adult self-report (OASR)), and mental health (composite international diagnostic interview (CIDI)).
Clinical psychologists with adequate expertise can also diagnose stuttering per the DSM-5 diagnostic codes. The DSM-5 describes "Childhood-Onset Fluency Disorder (Stuttering)" for developmental stuttering, and "Adult-onset Fluency Disorder". However, the specific rationale for this change from the DSM-IV is ill-documented in the APA's published literature, and is felt by some to promote confusion between the very different terms fluency and disfluency.
Other disfluencies
Preschool aged children often have difficulties with speech concerning motor planning and execution; this often manifests as disfluencies related to speech development (referred to as normal dysfluency or "other disfluencies"). This type of disfluency is a normal part of speech development and temporarily present in preschool-aged children who are learning to speak.
Classification
"Developmental stuttering" is stuttering that has on onset in early childhood, i.e. when a child is learning to speak. About 5-7% of children are said to stutter during this period. Despite its name, the onset itself is often sudden. This type of stutter may persists after the age of seven, which is classified as "persistent stuttering".
"Neurogenic stuttering" (stuttering that occurs secondary to brain damage, such as after a stroke) and "psychogenic stuttering" (stuttering related to a psychological condition) are less common and classified separately from developmental.
"Neurogenic stuttering" typically appears following some sort of injury or disease to the central nervous system. Injuries to the brain and spinal cord, including cortex, subcortex, cerebellum, and even the neural pathway regions.
It may also be called "acquired stuttering" and it may be acquired in adulthood as the result of a neurological event such as a head injury, tumour, stroke, or drug use. This stuttering has different characteristics from its developmental equivalent: it tends to be limited to part-word or sound repetitions, and is associated with a relative lack of anxiety and secondary stuttering behaviors. Techniques such as altered auditory feedback are not effective with the acquired type.
Finally, "psychogenic stuttering", which is less than 1% of all stuttering conditions, may also arise after a traumatic experience such as a death, the breakup of a relationship or as the psychological reaction to physical trauma. Its symptoms tend to be homogeneous: the stuttering is of sudden onset and associated with a significant event, it is constant and uninfluenced by different speaking situations, and there is little awareness or concern shown by the speaker.
Differential diagnosis
Other disorders with symptoms resembling stuttering, or associated disorders include autism, cluttering, Parkinson's disease, essential tremor, palilalia, spasmodic dysphonia, selective mutism, and apraxia of speech.
Treatment
While there is no cure for stuttering, several treatment options exist and the best option is dependent on the individual. Therapy should be individualized and tailored to the specific and unique needs of the client. The speech–language pathologist and the client typically work together to create achievable and realistic goals that target communication confidence, autonomy, managing emotions and stress related to their stutter, and working on disclosure.
Fluency shaping therapy
Fluency shaping therapy trains people who stutter to speak less disfluently by controlling their breathing, phonation, and articulation (lips, jaw, and tongue). It is based on operant conditioning techniques. This type of therapy is not considered best practice in the field of speech and language pathology and is potentially harmful and traumatic for clients.
Stuttering modification therapy
The goal of stuttering modification therapy is not to eliminate stuttering but to modify it so that stuttering is easier and less effortful. The most widely known approach was published by Charles Van Riper in 1973 and is also known as block modification therapy. Stuttering modification therapy should not be used to promote fluent speech or presented as a cure for stuttering.
Avoidance Reduction Therapy for Stuttering (ARTS) is an effective form of modification therapy. It is a framework based on theories developed by professor Joseph Sheehan and his wife Vivian Sheehan. This framework focuses on self-acceptance as someone who stutters, and efficient, spontaneous and joyful communication, essentially, minimizing quality-of-life impact due to stuttering.
Electronic fluency device
Altered auditory feedback effect can be produced by speaking in chorus with another person, by blocking out the voice of the person who stutters while they are talking (masking), by delaying slightly the voice of the person who stutters (delayed auditory feedback) or by altering the frequency of the feedback (frequency altered feedback). Studies of these techniques have had mixed results.
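A minimal conceptual sketch of the delay element at the heart of delayed auditory feedback: the speaker's voice is played back after a fixed lag, typically tens of milliseconds. The buffer-based loop below is an illustration only, not the design of any actual commercial device; the sample rate and delay are assumed values.
    from collections import deque

    # Conceptual sketch of delayed auditory feedback (DAF): microphone samples are
    # written into a ring buffer and read back delay_ms milliseconds later.
    def make_daf(delay_ms=75, sample_rate=16000):
        delay_samples = int(sample_rate * delay_ms / 1000)
        buffer = deque([0.0] * delay_samples, maxlen=delay_samples)

        def process(sample):
            delayed = buffer[0]      # oldest sample, heard ~delay_ms after it was spoken
            buffer.append(sample)    # newest microphone sample
            return delayed

        return process

    daf = make_daf()
    output = [daf(s) for s in [0.1, 0.2, 0.3]]  # the first outputs are the initial silence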
Medications
No medication is FDA-approved for stuttering. Some research suggests dopamine antagonists ecopipam and deutetrabenazine have the potential to treat stuttering.
Support
Self-help groups provide people who stutter a shared forum within which they can access resources and support from others facing the same challenges of stuttering.
Prognosis
Among children ages 3–5, the prognosis for spontaneous recovery is about 65% to 87.5% by 7 years of age, or within the first two years of stuttering, and about 74% recover by their early teens. In particular, girls are shown to recover more often.
Prognosis is more guarded with later age of onset: for children who start stuttering at age 3½ years or later, and/or whose stuttering has lasted more than 6–12 months since onset, that is, once stuttering has become established, only about 18% of children who still stutter after five years recover spontaneously. Stuttering that persists after the age of seven is classified as persistent stuttering, and is associated with a much lower chance of recovery.
Epidemiology
The lifetime prevalence, or the proportion of individuals expected to stutter at some time in their lives, is about 5–6%, and overall males are affected two to five times more often than females. Among children who have just begun stuttering, there is an equivalent number of boys and girls who stutter. Still, the sex ratio appears to widen as children grow: among preschoolers, boys who stutter outnumber girls who stutter by about a two-to-one ratio or less. This ratio widens to three to one during first grade, and five to one during fifth grade, as girls have higher recovery rates. The overall prevalence of stuttering is generally considered to be approximately 1%.
Cross cultural
Cross-cultural studies of stuttering prevalence were very active in the early and mid-20th century, particularly under the influence of the work of Wendell Johnson, who claimed that the onset of stuttering was connected to cultural expectations and the pressure put on young children by anxious parents. Later studies found that this claim was not supported by the facts, and the influence of cultural factors in stuttering research declined. It is generally accepted by contemporary scholars that stuttering is present in every culture and in every race, although attitudes about the actual prevalence differ. Some believe stuttering occurs in all cultures and races at similar rates, about 1% of the general population (and about 5% among young children) all around the world. A US-based study indicated that there were no racial or ethnic differences in the incidence of stuttering in preschool children.
Different regions of the world are researched unevenly. The largest number of studies has been conducted in European countries and in North America, where the experts agree on the mean estimate to be about 1% of the general population. African populations, particularly from West Africa, might have the highest stuttering prevalence in the world—reaching in some populations 5%, 6% and even over 9%. Many regions of the world are not researched sufficiently, and for some major regions there are no prevalence studies at all.
Bilingual stuttering
Identification
Bilingualism is the ability to speak two languages. Many bilingual people have been exposed to more than one language since birth and throughout childhood. Since language and culture are relatively fluid factors in a person's understanding and production of language, bilingualism may be a feature that impacts speech fluency. There are several ways in which stuttering may be noticed in bilingual children, including the following.
The child is mixing vocabulary (code-mixing) from both languages in one sentence. This is a normal process that helps the child increase their skills in the weaker language, but may trigger a temporary increase in disfluency.
The child is having difficulty finding the correct word to express ideas resulting in an increase in normal speech disfluency.
The child is having difficulty using grammatically complex sentences in one or both languages as compared to other children of the same age. Also, the child may make grammatical mistakes. Developing proficiency in both languages may be gradual, so development may be uneven between the two languages.
It was once believed that being bilingual would 'confuse' a child and cause stuttering, but research has debunked this myth.
Stuttering may present differently depending on the languages the individual uses. For example, morphological and other linguistic differences between languages may make presentation of disfluency appear to be more or less depending on the individual case.
History
Because of the unusual-sounding speech that is produced and the behaviors and attitudes that accompany a stutter, it has long been a subject of scientific interest and speculation as well as discrimination and ridicule. Accounts of people who stutter can be traced back centuries, for example to Demosthenes, who tried to control his disfluency by speaking with pebbles in his mouth. The Talmud interprets Bible passages to indicate that Moses also stuttered, and that placing a burning coal in his mouth had caused him to be "slow and hesitant of speech" (Exodus 4, v.10).
Galen's humoral theories were influential in Europe in the Middle Ages for centuries afterward. In this theory, stuttering was attributed to an imbalance of the four bodily humors—yellow bile, blood, black bile, and phlegm. Hieronymus Mercurialis, writing in the sixteenth century, proposed to redress the imbalance by changes in diet, reduced libido (in men only), and purging. Believing that fear aggravated stuttering, he suggested techniques to overcome this. Humoral manipulation continued to be a dominant treatment for stuttering until the eighteenth century. Partly due to a perceived lack of intelligence because of his stutter, the man who became the Roman emperor Claudius was initially shunned from the public eye and excluded from public office.
In and around eighteenth and nineteenth century Europe, surgical interventions for stuttering were recommended, including cutting the tongue with scissors, removing a triangular wedge from the posterior tongue, and cutting nerves, or neck and lip muscles. Others recommended shortening the uvula or removing the tonsils. All were abandoned due to the danger of bleeding to death and their failure to stop stuttering. Less drastically, Jean Marc Gaspard Itard placed a small forked golden plate under the tongue in order to support "weak" muscles.
Italian pathologist Giovanni Morgagni attributed stuttering to deviations in the hyoid bone, a conclusion he came to via autopsy. Blessed Notker of St. Gall ( – 912), called Balbulus ("The Stutterer") and described by his biographer as being "delicate of body but not of mind, stuttering of tongue but not of intellect, pushing boldly forward in things Divine," was invoked against stammering.
A royal Briton who stammered was King George VI. He went through years of speech therapy, most successfully under Australian speech therapist Lionel Logue, for his stammer. The Academy Award-winning film The King's Speech (2010) in which Colin Firth plays George VI, tells his story. The film is based on an original screenplay by David Seidler, who also stuttered until age 16.
Another British case was that of Prime Minister Winston Churchill. Churchill claimed, perhaps not directly discussing himself, that "[s]ometimes a slight and not unpleasing stammer or impediment has been of some assistance in securing the attention of the audience ..." However, those who knew Churchill and commented on his stutter believed that it was or had been a significant problem for him. His secretary Phyllis Moir commented that "Winston Churchill was born and grew up with a stutter" in her 1941 book I was Winston Churchill's Private Secretary. She related one example, "'It's s-s-simply s-s-splendid,' he stuttered—as he always did when excited." Louis J. Alber, who helped to arrange a lecture tour of the United States, wrote in Volume 55 of The American Mercury (1942) that "Churchill struggled to express his feelings but his stutter caught him in the throat and his face turned purple" and that "born with a stutter and a lisp, both caused in large measure by a defect in his palate, Churchill was at first seriously hampered in his public speaking. It is characteristic of the man's perseverance that, despite his staggering handicap, he made himself one of the greatest orators of our time."
For centuries "cures" such as consistently drinking water from a snail shell for the rest of one's life, "hitting a stutterer in the face when the weather is cloudy", strengthening the tongue as a muscle, and various herbal remedies were tried. Similarly, in the past people subscribed to odd theories about the causes of stuttering, such as tickling an infant too much, eating improperly during breastfeeding, allowing an infant to look in the mirror, cutting a child's hair before the child spoke his or her first words, having too small a tongue, or the "work of the devil".
Society and Culture
In popular culture
Stuttering community
Many countries have regular events and activities to bring people who stutter together for mutual support. These events take place at regional, national, and international levels. At a regional level, there may be stuttering support or chapter groups that look to provide a place for people who stutter in the local area to meet, discuss and learn from each other.
At a national level, stuttering organizations host conferences. Conferences vary in their focus and scope; some focus on the latest research developments, some focus on stuttering and the arts, and others simply look to provide a space for stutterers to come together.
There are two international meetings of stutterers. The International Stuttering Association World Congress primarily focuses on individuals who stutter. Meanwhile, the Joint World Congress on Stuttering and Cluttering brings together academics, researchers, speech-language pathologists, as well as people who stutter or clutter, with a focus on research and treatments for stuttering.
Historic advocacy and self-help
Self-help and advocacy organisations for people who stammer have reportedly been in existence since the 1920s. In 1921, a Philadelphia-based attorney who stammered, J. Stanley Smith, established the Kingsley Club. Designed to support people with a stammer in the Philadelphia area, the club took inspiration for its name from Charles Kingsley. Kingsley, a nineteenth-century English social reformer and author of Westward Ho! and The Water Babies, had a stammer himself.
Whilst Kingsley himself did not appear to recommend self-help or advocacy groups for people who stammer, the Kingsley Club promoted a positive mental attitude to support its members in becoming confident speakers, in a similar way discussed by Charles Kingsley in Irrationale of Speech.
Other support groups for people who stammer began to emerge in the first half of the twentieth century. In 1935 a Stammerer's Club was established in Melbourne, Australia, by a Mr H. Collin of Thornbury. At the time of its formation it had 68 members. The club was formed in response to the tragic case of a man from Sydney who "sought relief from the effects of stammering in suicide". As well as providing self-help, this club adopted an advocacy role with the intention of appealing to the Government to provide special education and to fund research into the causes of stammering.
Disability rights movement
Some people who stutter, and are part of the disability rights movement, have begun to embrace their stuttering voices as an important part of their identity. In July 2015 the UK Ministry of Defence (MOD) announced the launch of the Defence Stammering Network to support and champion the interests of British military personnel and MOD civil servants who stammer and to raise awareness of the condition.
Although the Americans with Disabilities Act of 1990 was intended to cover speech disabilities, stuttering was not explicitly named, and lawsuits increasingly did not treat stuttering as a covered disability. In 2009, additional amendments were made to the ADA, and it now specifically covers speech disorders.
Stuttering pride
Stuttering pride (or stuttering advocacy) is a social movement repositioning stuttering as a valuable and respectable way of speaking. The movement seeks to counter the societal narratives in which temporal and societal expectations dictate how communication takes place. In this sense, the stuttering pride movement challenges the pervasive societal narrative of stuttering as a defect and instead positions stuttering as a valuable and respectable way of speaking in its own right. The movement encourages stutterers to take pride in their unique speech patterns and in what stuttering can tell us about the world. It also advocates for societal adjustments to allow stutterers equal access to education and employment opportunities, and addresses how this may impact stuttering therapy.
Associations
All India Institute of Speech and Hearing
American Institute for Stuttering
British Stammering Association
European League of Stuttering Associations
International Stuttering Association
Israel Stuttering Association
Michael Palin Centre for Stammering Children
National Stuttering Association, United States
Philippine Stuttering Association
Taiwan Stuttering Association
Stuttering Foundation of America
The Indian Stammering Association
Specific heat capacity
In thermodynamics, the specific heat capacity (symbol c) of a substance is the amount of heat that must be added to one unit of mass of the substance in order to cause an increase of one unit in temperature. It is also referred to as the massic heat capacity or as the specific heat. More formally, it is the heat capacity of a sample of the substance divided by the mass of the sample. The SI unit of specific heat capacity is joule per kelvin per kilogram, J⋅kg−1⋅K−1. For example, the heat required to raise the temperature of 1 kg of water by 1 K is 4184 joules, so the specific heat capacity of water is 4184 J⋅kg−1⋅K−1.
Specific heat capacity often varies with temperature, and is different for each state of matter. Liquid water has one of the highest specific heat capacities among common substances, about 4184 J⋅kg−1⋅K−1 at 20 °C; but that of ice, just below 0 °C, is only about 2093 J⋅kg−1⋅K−1. The specific heat capacities of iron, granite, and hydrogen gas are about 449 J⋅kg−1⋅K−1, 790 J⋅kg−1⋅K−1, and 14300 J⋅kg−1⋅K−1, respectively. While the substance is undergoing a phase transition, such as melting or boiling, its specific heat capacity is technically undefined, because the heat goes into changing its state rather than raising its temperature.
The specific heat capacity of a substance, especially a gas, may be significantly higher when it is allowed to expand as it is heated (specific heat capacity at constant pressure) than when it is heated in a closed vessel that prevents expansion (specific heat capacity at constant volume). These two values are usually denoted by cp and cV, respectively; their quotient cp/cV is the heat capacity ratio.
The term specific heat may also refer to the ratio between the specific heat capacities of a substance at a given temperature and of a reference substance at a reference temperature, such as water at 15 °C; much in the fashion of specific gravity. Specific heat capacity is also related to other intensive measures of heat capacity with other denominators. If the amount of substance is measured as a number of moles, one gets the molar heat capacity instead, whose SI unit is joule per kelvin per mole, J⋅mol−1⋅K−1. If the amount is taken to be the volume of the sample (as is sometimes done in engineering), one gets the volumetric heat capacity, whose SI unit is joule per kelvin per cubic meter, J⋅m−3⋅K−1.
History
Discovery of specific heat
One of the first scientists to use the concept was Joseph Black, an 18th-century medical doctor and professor of medicine at Glasgow University. He measured the specific heat capacities of many substances, using the term capacity for heat. In 1756 or soon thereafter, Black began an extensive study of heat. In 1760 he realized that when two different substances of equal mass but different temperatures are mixed, the changes in number of degrees in the two substances differ, though the heat gained by the cooler substance and lost by the hotter is the same. Black related an experiment conducted by Daniel Gabriel Fahrenheit on behalf of Dutch physician Herman Boerhaave. For clarity, he then described a hypothetical, but realistic variant of the experiment: If equal masses of 100 °F water and 150 °F mercury are mixed, the water temperature increases by 20 ° and the mercury temperature decreases by 30 ° (both arriving at 120 °F), even though the heat gained by the water and lost by the mercury is the same. This clarified the distinction between heat and temperature. It also introduced the concept of specific heat capacity, being different for different substances. Black wrote: “Quicksilver [mercury] ... has less capacity for the matter of heat than water.”
Definition
The specific heat capacity of a substance, usually denoted by c or s, is the heat capacity C of a sample of the substance divided by the mass m of the sample:
c = C / m = (1 / m) · (dQ / dT),
where dQ represents the amount of heat needed to uniformly raise the temperature of the sample by a small increment dT.
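A short numerical sketch of this definition: dividing the supplied heat by the product of mass and temperature rise recovers the specific heat capacity. The experiment values below are chosen purely for illustration and roughly reproduce liquid water's value.
    # Specific heat capacity from the definition c = Q / (m * dT):
    # heat supplied divided by (mass of the sample * temperature increase).
    # The sample numbers are illustrative.
    def specific_heat_capacity(heat_joules, mass_kg, delta_T_kelvin):
        return heat_joules / (mass_kg * delta_T_kelvin)

    # Supplying 20 935 J to 0.5 kg of water and observing a 10 K rise:
    c = specific_heat_capacity(20_935, 0.5, 10.0)
    print(c)   # 4187.0 J/(kg*K), matching the value quoted for liquid water at 15 degC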
Like the heat capacity of an object, the specific heat capacity of a substance may vary, sometimes substantially, depending on the starting temperature of the sample and the pressure applied to it. Therefore, it should be considered a function of those two variables.
These parameters are usually specified when giving the specific heat capacity of a substance. For example, "Water (liquid): c = 4187 J⋅kg−1⋅K−1 (15 °C)." When not specified, published values of the specific heat capacity generally are valid for some standard conditions for temperature and pressure.
However, the dependency of c on starting temperature and pressure can often be ignored in practical contexts, e.g. when working in narrow ranges of those variables. In those contexts one usually omits the qualifier and approximates the specific heat capacity by a constant suitable for those ranges.
Specific heat capacity is an intensive property of a substance, an intrinsic characteristic that does not depend on the size or shape of the amount in consideration. (The qualifier "specific" in front of an extensive property often indicates an intensive property derived from it.)
Variations
The injection of heat energy into a substance, besides raising its temperature, usually causes an increase in its volume and/or its pressure, depending on how the sample is confined. The choice made about the latter affects the measured specific heat capacity, even for the same starting pressure and starting temperature . Two particular choices are widely used:
If the pressure is kept constant (for instance, at the ambient atmospheric pressure), and the sample is allowed to expand, the expansion generates work, as the force from the pressure displaces the enclosure or the surrounding fluid. That work must come from the heat energy provided. The specific heat capacity thus obtained is said to be measured at constant pressure (or isobaric) and is often denoted cp.
On the other hand, if the expansion is prevented (for example, by a sufficiently rigid enclosure, or by increasing the external pressure to counteract the internal one), no work is generated, and the heat energy that would have gone into it must instead contribute to the internal energy of the sample, including raising its temperature by an extra amount. The specific heat capacity obtained this way is said to be measured at constant volume (or isochoric) and denoted cV.
The value of cV is always less than the value of cp for all fluids. This difference is particularly notable in gases, where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. Hence the heat capacity ratio of gases is typically between 1.3 and 1.67.
Applicability
The specific heat capacity can be defined and measured for gases, liquids, and solids of fairly general composition and molecular structure. These include gas mixtures, solutions and alloys, or heterogeneous materials such as milk, sand, granite, and concrete, if considered at a sufficiently large scale.
The specific heat capacity can be defined also for materials that change state or composition as the temperature and pressure change, as long as the changes are reversible and gradual. Thus, for example, the concepts are definable for a gas or liquid that dissociates as the temperature increases, as long as the products of the dissociation promptly and completely recombine when it drops.
The specific heat capacity is not meaningful if the substance undergoes irreversible chemical changes, or if there is a phase change, such as melting or boiling, at a sharp temperature within the range of temperatures spanned by the measurement.
Measurement
The specific heat capacity of a substance is typically determined according to the definition; namely, by measuring the heat capacity of a sample of the substance, usually with a calorimeter, and dividing by the sample's mass. Several techniques can be applied for estimating the heat capacity of a substance, such as differential scanning calorimetry.
The specific heat capacities of gases can be measured at constant volume, by enclosing the sample in a rigid container. On the other hand, measuring the specific heat capacity at constant volume can be prohibitively difficult for liquids and solids, since one often would need impractical pressures in order to prevent the expansion that would be caused by even small increases in temperature. Instead, the common practice is to measure the specific heat capacity at constant pressure (allowing the material to expand or contract as it wishes), determine separately the coefficient of thermal expansion and the compressibility of the material, and compute the specific heat capacity at constant volume from these data according to the laws of thermodynamics.
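For liquids and solids, the conversion just described rests on the standard thermodynamic relation cp − cv = α²T / (ρ·κT), where α is the volumetric thermal-expansion coefficient, κT the isothermal compressibility, ρ the density, and T the absolute temperature. The sketch below applies it with approximate literature values for water near 20 °C; the inputs are indicative only.
    # Sketch of obtaining c_v from c_p via c_p - c_v = alpha**2 * T / (rho * kappa_T).
    # Input values are approximate figures for liquid water near 20 degC.
    def cv_from_cp(cp, alpha, kappa_T, rho, T):
        return cp - (alpha ** 2) * T / (rho * kappa_T)

    cp      = 4182.0     # J/(kg*K), specific heat at constant pressure
    alpha   = 2.07e-4    # 1/K, volumetric thermal expansion coefficient
    kappa_T = 4.6e-10    # 1/Pa, isothermal compressibility
    rho     = 998.0      # kg/m^3, density
    T       = 293.15     # K

    print(cv_from_cp(cp, alpha, kappa_T, rho, T))   # roughly 4155 J/(kg*K)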
Units
International system
The SI unit for specific heat capacity is joule per kelvin per kilogram, J⋅K−1⋅kg−1. Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same as joule per degree Celsius per kilogram: J/(kg⋅°C). Sometimes the gram is used instead of kilogram for the unit of mass: 1 J⋅g−1⋅K−1 = 1000 J⋅kg−1⋅K−1.
The specific heat capacity of a substance (per unit of mass) has dimension L2⋅Θ−1⋅T−2, or (L/T)2/Θ. Therefore, the SI unit J⋅kg−1⋅K−1 is equivalent to metre squared per second squared per kelvin (m2⋅K−1⋅s−2).
Imperial engineering units
Professionals in construction, civil engineering, chemical engineering, and other technical disciplines, especially in the United States, may use English Engineering units including the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or Rankine (°R = 5⁄9 K, about 0.555556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.056 J) as the unit of heat.
In those contexts, the unit of specific heat capacity is BTU/(lb⋅°R), and 1 BTU/(lb⋅°R) = 4186.68 J/(kg⋅K). The BTU was originally defined so that the average specific heat capacity of water would be 1 BTU/(lb⋅°F). Note the value's similarity to that of the calorie: 4187 J/(kg⋅°C) ≈ 4184 J/(kg⋅°C) (a difference of about 0.07%), as they are essentially measuring the same energy, using water as a basis reference, scaled to their systems' respective pounds and °F, or kilograms and °C.
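The quoted conversion factor follows directly from the unit definitions above; a minimal sketch of the arithmetic in Python:
```python
# Convert 1 BTU/(lb*degR) to J/(kg*K) from the unit definitions quoted above.
BTU = 1055.056     # J per British thermal unit (International Table value)
LB = 0.45359237    # kg per pound
DEG_R = 5.0 / 9.0  # kelvin per degree Rankine (or Fahrenheit) increment

factor = BTU / (LB * DEG_R)
# Prints about 4186.8, matching the value quoted above to within rounding.
print(f"1 BTU/(lb*degR) ≈ {factor:.1f} J/(kg*K)")
```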
Calories
In chemistry, heat amounts were often measured in calories. Confusingly, there are two common units with that name, respectively denoted cal and Cal:
the small calorie (gram-calorie, cal) is 4.184 J exactly. It was originally defined so that the specific heat capacity of liquid water would be 1 cal/(°C⋅g).
The grand calorie (kilocalorie, kilogram-calorie, food calorie, kcal, Cal) is 1000 small calories, 4184 J exactly. It was defined so that the specific heat capacity of water would be 1 Cal/(°C⋅kg).
While these units are still used in some contexts (such as kilogram calorie in nutrition), their use is now deprecated in technical and scientific fields. When heat is measured in these units, the unit of specific heat capacity is usually
1 cal/(°C⋅g) = 1 Cal/(°C⋅kg) = 1 kcal/(°C⋅kg) = 4184 J⋅kg−1⋅K−1.
Note that while cal is 1⁄1000 of a Cal or kcal, it is also per gram instead of kilogram: ergo, in either unit, the specific heat capacity of water is approximately 1.
Physical basis
The temperature of a sample of a substance reflects the average kinetic energy of its constituent particles (atoms or molecules) relative to its center of mass. However, not all of the energy provided to a sample of a substance goes into raising its temperature; how the input energy is shared among the available modes of motion is described by the equipartition theorem.
Monatomic gases
Statistical mechanics predicts that at room temperature and ordinary pressures, an isolated atom in a gas cannot store any significant amount of energy except in the form of kinetic energy, unless multiple electronic states are accessible at room temperature (as is the case for atomic fluorine). Thus, the heat capacity per mole at room temperature is the same for all of the noble gases as well as for many other atomic vapors. More precisely, c_V,m = 3R/2 ≈ 12.5 J⋅K−1⋅mol−1 and c_P,m = 5R/2 ≈ 21 J⋅K−1⋅mol−1, where R is the ideal gas constant (the product of the Boltzmann constant, which converts from the microscopic energy unit kelvin to the macroscopic energy unit joule, and the Avogadro number).
Therefore, the specific heat capacity (per gram, not per mole) of a monatomic gas will be inversely proportional to its (dimensionless) atomic weight A. That is, approximately,
c_V ≈ (12470 J⋅K−1⋅kg−1)/A and c_P ≈ (20785 J⋅K−1⋅kg−1)/A.
For the noble gases, from helium to xenon, these computed values range from a few thousand J⋅K−1⋅kg−1 for helium down to roughly a hundred for xenon.
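A minimal sketch of that computation, using rounded standard atomic weights (assumed values), is:
```python
# Approximate specific heat capacities (per kilogram) of monatomic ideal gases,
# c_V = (3/2) R / M and c_P = (5/2) R / M, with M the molar mass in kg/mol.
R = 8.314462618  # J/(mol*K), molar gas constant

# Rounded standard atomic weights (g/mol); treat these as approximate.
noble_gases = {"He": 4.003, "Ne": 20.180, "Ar": 39.948, "Kr": 83.798, "Xe": 131.293}

for symbol, A in noble_gases.items():
    M = A / 1000.0       # kg/mol
    c_v = 1.5 * R / M    # J/(kg*K)
    c_p = 2.5 * R / M    # J/(kg*K)
    print(f"{symbol}: c_V ≈ {c_v:6.0f} J/(kg*K), c_P ≈ {c_p:6.0f} J/(kg*K)")
```
Because the noble gases behave very nearly ideally at room temperature, these estimates agree closely with measured values.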
Polyatomic gases
On the other hand, a polyatomic gas molecule (consisting of two or more atoms bound together) can store heat energy in additional degrees of freedom. Its kinetic energy contributes to the heat capacity in the same way as monatomic gases, but there are also contributions from the rotations of the molecule and vibration of the atoms relative to each other (including internal potential energy).
There may also be contributions to the heat capacity from excited electronic states for molecules where the energy gap between the ground state and the excited state is sufficiently small. For a few systems, quantum spin statistics can also make important contributions to the heat capacity, even at room temperature. The analysis of the heat capacity of hydrogen gas due to ortho/para separation, which arises from nuclear spin statistics, has been referred to as "one of the great triumphs of post-quantum mechanical statistical mechanics."
These extra degrees of freedom or "modes" contribute to the specific heat capacity of the substance. Namely, when heat energy is injected into a gas with polyatomic molecules, only part of it will go into increasing their kinetic energy, and hence the temperature; the rest will go into the other degrees of freedom. To achieve the same increase in temperature, more heat energy is needed for a gram of that substance than for a gram of a monatomic gas of the same molecular mass. Thus, the specific heat capacity per gram of a polyatomic gas depends both on the molecular mass and on the number of degrees of freedom of the molecules.
Quantum statistical mechanics predicts that each rotational or vibrational mode can only take or lose energy in certain discrete amounts (quanta), and that this affects the system's thermodynamic properties. Depending on the temperature, the average heat energy per molecule may be too small compared to the quanta needed to activate some of those degrees of freedom. Those modes are said to be "frozen out". In that case, the specific heat capacity of the substance increases with temperature, sometimes in a step-like fashion, as each mode becomes unfrozen and starts absorbing part of the input heat energy.
For example, the molar heat capacity of nitrogen at constant volume is c_V,m = 20.6 J⋅K−1⋅mol−1 (at 15 °C, 1 atm), which is 2.49 R. That is the value expected from the Equipartition Theorem if each molecule had 5 kinetic degrees of freedom. These turn out to be three degrees of the molecule's velocity vector, plus two degrees from its rotation about an axis through the center of mass and perpendicular to the line of the two atoms. Because of those two extra degrees of freedom, the specific heat capacity of nitrogen (736 J⋅K−1⋅kg−1) is greater than that of a hypothetical monatomic gas with the same molecular mass 28 (445 J⋅K−1⋅kg−1), by a factor of 5/3. The vibrational and electronic degrees of freedom do not contribute significantly to the heat capacity in this case, due to the relatively large energy level gaps for both vibrational and electronic excitation in this molecule.
This value for the specific heat capacity of nitrogen is practically constant from below −150 °C to about 300 °C. In that temperature range, the two additional degrees of freedom that correspond to vibrations of the atoms, stretching and compressing the bond, are still "frozen out". At about that temperature, those modes begin to "un-freeze" as vibrationally excited states become accessible. As a result, the heat capacity starts to increase rapidly at first, then more slowly as it tends to another constant value. It is 35.5 J⋅K−1⋅mol−1 at 1500 °C, 36.9 at 2500 °C, and 37.5 at 3500 °C. The last value corresponds almost exactly to the value predicted by the Equipartition Theorem, since in the high-temperature limit the theorem predicts that the vibrational degree of freedom contributes twice as much to the heat capacity as any one of the translational or rotational degrees of freedom.
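The gradual un-freezing of the vibrational mode can be sketched with the standard harmonic-oscillator expression for its contribution to the molar heat capacity. The characteristic vibrational temperature of about 3390 K used below for N2 is an assumed textbook value; the sketch computes the constant-volume molar value, whose high-temperature limit is 7R/2 ≈ 29.1 J⋅K−1⋅mol−1 (for an ideal gas the constant-pressure value is larger by R).
```python
import math

R = 8.314462618        # J/(mol*K)
THETA_VIB_N2 = 3390.0  # K, approximate characteristic vibrational temperature of N2

def c_v_molar_n2(T):
    """Molar heat capacity at constant volume of N2 (J/(mol*K)):
    translation (3/2 R) + rotation (R) + one harmonic vibrational mode."""
    x = THETA_VIB_N2 / T
    c_vib = R * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2
    return 1.5 * R + R + c_vib

for T in (300.0, 600.0, 1500.0, 3000.0):
    # Near 300 K the vibrational term is negligible; it grows toward R at high T.
    print(f"T = {T:6.0f} K: c_V,m ≈ {c_v_molar_n2(T):5.1f} J/(mol*K)")
```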
Derivations of heat capacity
Relation between specific heat capacities
Starting from the fundamental thermodynamic relation one can show
c_p − c_V = α² T / (ρ β_T),
where
α is the coefficient of thermal expansion,
β_T is the isothermal compressibility, and
ρ is density.
A derivation is discussed in the article Relations between specific heats.
For an ideal gas, if ρ is expressed as molar density in the above equation, this equation reduces simply to Mayer's relation,
c_p,m − c_V,m = R,
where c_p,m and c_V,m are intensive property heat capacities expressed on a per mole basis at constant pressure and constant volume, respectively.
Specific heat capacity
The specific heat capacity of a material on a per mass basis is
c = ∂C/∂m,
which in the absence of phase transitions is equivalent to
c = C/m = C/(ρV),
where
C is the heat capacity of a body made of the material in question,
m is the mass of the body,
V is the volume of the body, and
ρ is the density of the material.
For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, dP = 0) or isochoric (constant volume, dV = 0) processes. The corresponding specific heat capacities are denoted c_p and c_V, respectively.
A related parameter to c is C/V, the volumetric heat capacity. In engineering practice, c_V for solids or liquids often signifies a volumetric heat capacity, rather than a constant-volume one. In such cases, the mass-specific heat capacity is often explicitly written with the subscript m, as c_m. Of course, from the above relationships, for solids one writes C/V = ρ c_m.
For pure homogeneous chemical compounds with an established molecular or molar mass, or when a molar quantity is established, heat capacity as an intensive property can be expressed on a per-mole basis instead of a per-mass basis by the following equations analogous to the per-mass equations:
c_p,m = C_p/n (molar heat capacity at constant pressure),
c_V,m = C_V/n (molar heat capacity at constant volume),
where n is the number of moles in the body or thermodynamic system. One may refer to such a per-mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per-mass basis.
Polytropic heat capacity
The polytropic heat capacity is calculated for processes in which all of the thermodynamic properties (pressure, volume, temperature) change.
The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index then lies between 1 and the adiabatic exponent (γ or κ).
Dimensionless heat capacity
The dimensionless heat capacity of a material is
C* = C/(nR) = C/(N k_B),
where
C is the heat capacity of a body made of the material in question (J/K),
n is the amount of substance in the body (mol),
R is the gas constant (J⋅K−1⋅mol−1),
N is the number of molecules in the body (dimensionless), and
kB is the Boltzmann constant (J⋅K−1).
Again, SI units are shown for example; see the BIPM for more on quantities of dimension one.
In the ideal gas article, the dimensionless heat capacity is related directly to half the number of degrees of freedom per particle; this holds for quadratic degrees of freedom, a consequence of the equipartition theorem. More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle, measured in nats; equivalently, using base-2 logarithms, it relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits.
Heat capacity at absolute zero
From the definition of entropy,
T dS = δQ,
the absolute entropy can be calculated by integrating from zero kelvin to the final temperature Tf:
S(Tf) = ∫0^Tf δQ/T = ∫0^Tf C(T) dT/T.
The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, thus violating the third law of thermodynamics. One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached.
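As an illustration of this low-temperature behaviour, the Debye prediction for the molar heat capacity can be evaluated numerically. The sketch below uses a Debye temperature of 343 K, an approximate literature value for copper chosen only as an example:
```python
import math

R = 8.314462618  # J/(mol*K)

def debye_c_v(T, theta_D, n=2000):
    """Debye molar heat capacity:
    C_V(T) = 9 R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx,
    evaluated here with a simple midpoint rule."""
    if T <= 0.0:
        return 0.0
    x_max = theta_D / T
    h = x_max / n
    integral = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        integral += x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 * h
    return 9.0 * R * (T / theta_D)**3 * integral

theta_D_Cu = 343.0  # K, approximate Debye temperature of copper (assumed value)
for T in (5.0, 20.0, 77.0, 300.0):
    # Falls off as T^3 at low temperature and approaches 3R ≈ 24.9 J/(mol*K) at high T.
    print(f"T = {T:5.1f} K: C_V ≈ {debye_c_v(T, theta_D_Cu):7.3f} J/(mol*K)")
```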
Solid phase
The theoretical maximum heat capacity for larger and larger multi-atomic gases at higher temperatures also approaches the Dulong–Petit limit of 3R, so long as this is calculated per mole of atoms, not molecules. The reason is that gases with very large molecules, in theory, have almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity contribution that comes from potential energy that cannot be stored between separate molecules in a gas.
The Dulong–Petit limit results from the equipartition theorem, and as such is only valid in the classical limit of a microstate continuum, which is a high temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature, quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3R per mole of atoms in the solid, although heat capacities calculated per mole of molecules in molecular solids may be more than 3R. For example, the heat capacity of water ice at the melting point is about 4.6R per mole of molecules, but only 1.5R per mole of atoms. The lower-than-3R values "per atom" (as is the case with diamond and beryllium) result from the "freezing out" of possible vibration modes for light atoms at suitably low temperatures, just as in many low-mass-atom gases at room temperatures. Because of high crystal binding energies, these effects are seen in solids more often than liquids: for example the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3R per mole of atoms of the Dulong–Petit theoretical maximum.
For a more modern and precise analysis of the heat capacities of solids, especially at low temperatures, it is useful to use the idea of phonons. See Debye model.
Theoretical estimation
The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, R is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.
Water (liquid): CP = 4185.5 J⋅K−1⋅kg−1 (15 °C, 101.325 kPa)
Water (liquid): CVH = 74.539 J⋅K−1⋅mol−1 (25 °C)
For liquids and gases, it is important to know the pressure to which given heat capacity data refer. Most published data are given for standard pressure. However, different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (≈750.062 Torr).
Thermodynamic derivation
In theory, the specific heat capacity of a substance can also be derived from its abstract thermodynamic modeling by an equation of state and an internal energy function.
State of matter in a homogeneous sample
To apply the theory, one considers the sample of the substance (solid, liquid, or gas) for which the specific heat capacity can be defined; in particular, that it has homogeneous composition and fixed mass M. Assume that the evolution of the system is always slow enough for the internal pressure P and temperature T to be considered uniform throughout. The pressure would be equal to the pressure applied to it by the enclosure or some surrounding fluid, such as air.
The state of the material can then be specified by three parameters: its temperature T, the pressure P, and its specific volume ν = V/M, where V is the volume of the sample. (This quantity is the reciprocal 1/ρ of the material's density ρ.) Like T and P, the specific volume ν is an intensive property of the material and its state, that does not depend on the amount of substance in the sample.
Those variables are not independent. The allowed states are defined by an equation of state relating those three variables: F(T, P, ν) = 0. The function F depends on the material under consideration. The specific internal energy stored internally in the sample, per unit of mass, will then be another function U(T, P, ν) of these state variables, that is also specific to the material. The total internal energy in the sample then will be M U(T, P, ν).
For some simple materials, like an ideal gas, one can derive from basic theory the equation of state and even the specific internal energy. In general, these functions must be determined experimentally for each substance.
Conservation of energy
The absolute value of this quantity is undefined, and (for the purposes of thermodynamics) the state of "zero internal energy" can be chosen arbitrarily. However, by the law of conservation of energy, any infinitesimal increase M dU in the total internal energy must be matched by the net flow of heat energy dQ into the sample, plus any net mechanical energy provided to it by the enclosure or surrounding medium. The latter is −P dV, where dV is the change in the sample's volume in that infinitesimal step. Therefore
M dU = dQ − P dV,
hence
dQ = M dU + P dV = M (dU + P dν).
If the volume of the sample (hence the specific volume of the material) is kept constant during the injection of the heat amount dQ, then the term P dν is zero (no mechanical work is done). Then, dividing by dT,
dQ/(M dT) = dU/dT,
where dT is the change in temperature that resulted from the heat input. The left-hand side is the specific heat capacity at constant volume c_V of the material.
For the heat capacity at constant pressure, it is useful to define the specific enthalpy of the system as the sum h = U + P ν. An infinitesimal change in the specific enthalpy will then be
dh = dU + P dν + ν dP,
therefore
dQ/M + ν dP = dh.
If the pressure is kept constant, the second term on the left-hand side is zero, and
dQ/(M dT) = dh/dT.
The left-hand side is the specific heat capacity at constant pressure c_p of the material.
Connection to equation of state
In general, the infinitesimal quantities dT, dP, dν, and dU are constrained by the equation of state and the specific internal energy function. Namely,
dT (∂F/∂T) + dP (∂F/∂P) + dν (∂F/∂ν) = 0,
dU = dT (∂U/∂T) + dP (∂U/∂P) + dν (∂U/∂ν).
Here ∂F/∂T denotes the (partial) derivative of the state equation F with respect to its argument T, keeping the other two arguments fixed, evaluated at the state in question. The other partial derivatives are defined in the same way. These two equations on the four infinitesimal increments normally constrain them to a two-dimensional linear subspace of possible infinitesimal state changes, that depends on the material and on the state. The constant-volume and constant-pressure changes are only two particular directions in this space.
This analysis also holds no matter how the energy increment dQ is injected into the sample, whether by heat conduction, irradiation, electromagnetic induction, radioactive decay, etc.
Relation between heat capacities
For any specific volume ν, denote by p_ν(T) the function that describes how the pressure varies with the temperature T, as allowed by the equation of state, when the specific volume of the material is forcefully kept constant at ν. Analogously, for any pressure P, let ν_P(T) be the function that describes how the specific volume varies with the temperature T, when the pressure is kept constant at P. Namely, those functions are such that
F(T, p_ν(T), ν) = 0
and
F(T, P, ν_P(T)) = 0
for any values of T, P, and ν. In other words, the graphs of p_ν(T) and ν_P(T) are slices of the surface defined by the state equation, cut by planes of constant ν and constant P, respectively.
Then, from the fundamental thermodynamic relation it follows that
c_p(T, P, ν) − c_V(T, P, ν) = T (∂p_ν/∂T) (∂ν_P/∂T).
This equation can be rewritten as
c_p(T, P, ν) − c_V(T, P, ν) = ν T α² / β_T,
where
α is the coefficient of thermal expansion,
β_T is the isothermal compressibility,
both depending on the state (T, P, ν).
The heat capacity ratio, or adiabatic index, is the ratio of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.
Calculation from first principles
The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, R is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below. However, attention should be paid to the consistency of such ab initio considerations when they are used together with an equation of state for the material under consideration.
Ideal gas
For an ideal gas, evaluating the partial derivatives above according to the equation of state, P V = n R T, where R is the gas constant, gives
α = (1/V)(∂V/∂T)_P = 1/T,
β_T = −(1/V)(∂V/∂P)_T = 1/P.
Substituting these into the relation
c_p − c_V = ν T α² / β_T,
this equation reduces simply to Mayer's relation:
c_p,m − c_V,m = R.
The difference in heat capacities as defined by the above Mayer relation is exact only for an ideal gas and would be different for any real gas.
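The ideal-gas algebra above can be checked symbolically; the following minimal sketch (using the sympy library) derives α, β_T and the molar heat-capacity difference from the ideal-gas equation of state:
```python
import sympy as sp

T, P, R, M = sp.symbols("T P R M", positive=True)

# Ideal-gas specific volume (volume per unit mass) from P v = (R/M) T,
# where M is the molar mass.
v = R * T / (M * P)

alpha = sp.simplify(sp.diff(v, T) / v)     # coefficient of thermal expansion -> 1/T
beta_T = sp.simplify(-sp.diff(v, P) / v)   # isothermal compressibility -> 1/P

# Specific heat difference c_p - c_V = v T alpha^2 / beta_T; multiply by M for the molar value.
diff_specific = sp.simplify(v * T * alpha**2 / beta_T)
diff_molar = sp.simplify(M * diff_specific)

print("alpha  =", alpha)                    # 1/T
print("beta_T =", beta_T)                   # 1/P
print("c_p,m - c_V,m =", diff_molar)        # R  (Mayer's relation)
```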
| Physical sciences | Thermodynamics | Physics |
28422 | https://en.wikipedia.org/wiki/Slingshot | Slingshot | A slingshot or catapult is a small hand-powered projectile weapon. The classic form consists of a Y-shaped frame, with two tubes or strips made from either a natural rubber or synthetic elastic material. These are attached to the upper two ends. The other ends of the strips lead back to a pouch that holds the projectile. One hand holds the frame, while the other hand grasps the pocket and draws it back to the desired extent to provide power for the projectile—up to a full span of the arms with sufficiently long bands.
Other names include catapult (United Kingdom), peashooter (United States), gulel (India), (South Africa), or ging, shanghai, pachoonga (Australia and New Zealand).
Use and history
Slingshots depend on strong elastic materials for their projectile firepower, typically vulcanized natural rubber or the equivalent such as silicone rubber tubing, and thus date no earlier than the invention of vulcanized rubber by Charles Goodyear in 1839 (patented in 1844). By 1860, this "new engine" had established a reputation for use by juveniles in vandalism. For much of their early history, slingshots were a "do-it-yourself" item, typically made from a forked branch to form the Y-shaped handle, with rubber strips sliced from items such as inner tubes or other sources of good vulcanized rubber, and using suitably sized stones.
While early slingshots were most associated with young vandals, they could be effective hunting arms in the hands of a skilled user. Firing projectiles, such as lead musket balls, buckshot, steel ball bearings, air gun pellets, or small nails, a slingshot was capable of taking game such as quail, pheasant, rabbit, dove, and squirrel. Placing multiple balls in the pouch produces a shotgun effect (even though not very accurate), such as firing a dozen BBs at a time for hunting small birds. With the addition of a suitable rest, the slingshot can also be used to shoot arrows, allowing the hunting of medium-sized game at short ranges.
While commercially made slingshots date from at latest 1918, with the introduction of the Zip-Zip, a cast iron model, it was not until the post–World War II years that slingshots saw a surge in popularity, and legitimacy. They were still primarily home-built; a 1946 Popular Science article details a slingshot builder and hunter using home-built slingshots made from forked dogwood sticks to take small game at ranges of up to with No. 0 lead buckshot ( diameter).
The Wham-O company, founded in 1948, produced the Wham-O slingshot. It was made of ash wood and used flat rubber bands. The Wham-O was suitable for hunting, with a draw weight of up to , and was available with an arrow rest.
The National Slingshot Association was founded in the 1940s, headquartered in San Marino, California. It organised slingshot clubs and competitions nationwide. Despite the slingshot's reputation as a tool of juvenile delinquents, the NSA reported that 80% of slingshot sales were to men over 30 years old, many of them professionals. John Milligan, a part-time manufacturer of the aluminium-framed John Milligan Special, a hunting slingshot, reported that about a third of his customers were physicians.
Slingshots are also occasionally used in angling to disperse bait over an area of water, so that fish may be attracted.
A home-made derivative of a slingshot also exists, consisting of a rubber balloon cut in half and tied to a tubular object such as the neck of a plastic bottle, or a small pipe. The projectile is inserted through the tube and into the cut balloon, and the user stretches the balloon to launch the projectile. These so-called "balloon guns" are sometimes made as a substitute to ordinary slingshot, and are often used to create the "shotgun" effect with several projectiles fired at once.
In modern times the slingshot has been used by civilians against governments. Examples of this are Hong Kong during the 2019–2020 protests, where they were used against the Hong Kong Police Force; by the Palestinians against Israeli forces; and by the Ukrainians during the Maidan Revolution in 2014.
Military use
Slingshots have been used as military weapons, but primarily by guerrilla forces due to the easily available resources and technology required to construct one. Such guerrilla groups included the Irish Republican Army; prior to the 2003 invasion of Iraq, Saddam Hussein released a propaganda video demonstrating slingshots as a possible insurgency weapon for use against invading forces.
Slingshots have also been used by the military to launch unmanned aerial vehicles (UAVs). Two crew members form the fork, with an elastic cord stretched between them to provide power to launch the small aircraft.
During the Battle of Marawi, the soldiers of the Philippine Army's elite Scout Rangers were observed using slingshots with grenades as an improvised mortar to attack Maute and Abu Sayyaf forces.
Sport
Slingshots, often recognized as tools or toys, are also utilized in various organized sports around the world. Competitive slingshot shooting, or catapult shooting, is gaining popularity, with events held in countries like Spain, Italy, and China. The Slingshot World Cup is one of the most prestigious competitions, attracting participants globally who demonstrate their accuracy and skill by aiming at various targets.
Competitions and Events
Slingshot World Cup: This premier event draws competitors from across the globe, focusing on target shooting to earn points based on accuracy.
National Championships: Many nations, including the USA, Spain, and Italy, host their own national championships, featuring various categories and age groups.
Types of Competitions
Target Shooting: Participants aim at stationary or moving targets, which can vary in size and distance.
Field Shooting: This involves shooting at targets set in natural environments, simulating real hunting scenarios.
Equipment
Slingshots: Modern slingshots are crafted from high-quality materials such as aluminum and steel. Shooters often customize their equipment with different bands and pouches to match their preferences.
Ammunition: Common types of ammunition include steel balls, marbles, and biodegradable pellets, which cater to eco-friendly practices.
Skills and Techniques
Accuracy: Precision is paramount in slingshot sports, and participants dedicate time to improve their aim and consistency.
Stance and Grip: Proper stance and grip are essential for stability and control, leading many shooters to develop unique styles.
Community and Culture
Clubs and Associations: Numerous slingshot clubs and associations exist worldwide, providing enthusiasts with opportunities to share tips, participate in events, and promote the sport.
Online Presence: The slingshot community is active online, with forums, social media groups, and YouTube channels dedicated to techniques, reviews, and competition highlights.
Overall, slingshot sports blend tradition with modernity, making them an engaging and accessible activity for people of all ages.
Dangers
One of the dangers inherent in slingshots is the high probability that the bands will fail. Most bands are made from latex, which degrades with time and use, causing the bands to eventually fail under load. Failures at the pouch end are safest, as they result in the band rebounding away from the user. Failures at the fork end, however, send the band back towards the shooter's face, which can cause eye and facial injuries. One method to minimize the chance of a fork end failure is to utilize a tapered band, thinner at the pouch end, and thicker and stronger at the fork end. Designs that use loose parts at the fork are the most dangerous, as they can result in those parts being propelled back towards the shooter's face, such as the ball attachment used in the recalled Daisy "Natural" line of slingshots. The band could slip out of the slot in which it rested, and the hard ball in the tube resulted in cases of blindness and broken teeth. Daisy models using plain tubular bands were not covered in the recall, because the elastic tubing does not cause severe injuries upon failure. Another significant danger is fork breakage; some commercial slingshots made from cheap zinc alloy may break and severely injure the shooter's eyes and face.
Legal issues
Many jurisdictions prohibit the use of arm-braced slingshots. For example, New York Penal Law 265.01 defines it as a Class-4 misdemeanor, and in some states of Australia they are also prohibited weapons.
| Technology | Ranged weapons | null |
28431 | https://en.wikipedia.org/wiki/Space%20exploration | Space exploration | Space exploration is the use of astronomy and space technology to explore outer space. While the exploration of space is currently carried out mainly by astronomers with telescopes, its physical exploration is conducted both by uncrewed robotic space probes and human spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science.
While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.
The early era of space exploration was driven by a "Space Race" between the Soviet Union and the United States. The start of space exploration was driven in large part by the Cold War: once nuclear weapons could be built, the narrative of defense and offense moved beyond land, and the power to control the air became the focus. Both the Soviet Union and the U.S. were racing to prove their superiority in technology through exploring space. In fact, NASA was created as a response to Sputnik 1.
The launch of the first human-made object to orbit Earth, the Soviet Union's Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Alexei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971. After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).
With the substantial completion of the ISS following STS-133 in March 2011, plans for space exploration by the U.S. remained in flux. The Constellation program aiming for a return to the Moon by 2020 was judged unrealistic by an expert review panel reporting in 2009. Constellation ultimately was replaced with the Artemis Program, of which the first mission occurred in 2022, with a planned crewed landing to occur with Artemis III. The rise of the private space industry also began in earnest in the 2010s with the development of private launch vehicles, space capsules and satellite manufacturing.
In the 2000s, China initiated a successful crewed spaceflight program while India launched the Chandrayaan programme, while the European Union and Japan have also planned future crewed space missions. The two primary global programs gaining traction in the 2020s are the Chinese-led International Lunar Research Station and the US-led Artemis Program, with its plan to build the Lunar Gateway and the Artemis Base Camp, each having its own set of international partners.
History of exploration
First telescopes
The first telescope is said to have been invented in 1608 in the Netherlands by an eyeglass maker named Hans Lippershey, but their first recorded use in astronomy was by Galileo Galilei in 1609. In 1668 Isaac Newton built his own reflecting telescope, the first fully functional telescope of this kind, and a landmark for future developments due to its superior features over the previous Galilean telescope.
A string of discoveries in the Solar System (and beyond) followed, then and in the next centuries: the mountains of the Moon, the phases of Venus, the main satellites of Jupiter and Saturn, the rings of Saturn, many comets, the asteroids, the new planets Uranus and Neptune, and many more satellites.
The Orbiting Astronomical Observatory 2, launched in 1968, was the first space telescope, but the launch of the Hubble Space Telescope in 1990 set a milestone. As of 1 December 2022, there were 5,284 confirmed exoplanets discovered. The Milky Way is estimated to contain 100–400 billion stars and more than 100 billion planets. There are at least 2 trillion galaxies in the observable universe. HD1 is the most distant known object from Earth, reported as 33.4 billion light-years away.
First outer space flights
MW 18014 was a German V-2 rocket test launch that took place on 20 June 1944, at the Peenemünde Army Research Center in Peenemünde. It was the first human-made object to reach outer space, attaining an apogee of 176 kilometers, which is well above the Kármán line. It was a vertical test launch. Although the rocket reached space, it did not reach orbital velocity, and therefore returned to Earth in an impact, becoming the first sub-orbital spaceflight. In 1949, the Bumper-WAC reached an altitude of , becoming the first human-made object to enter space, according to NASA.
First object in orbit
The first successful orbital launch was of the Soviet uncrewed Sputnik 1 ("Satellite 1") mission on 4 October 1957. The satellite weighed about , and is believed to have orbited Earth at a height of about . It had two radio transmitters (20 and 40 MHz), which emitted "beeps" that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.
First human outer space flight
The first successful human spaceflight was Vostok 1 ("East 1"), carrying the 27-year-old Russian cosmonaut, Yuri Gagarin, on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin's flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.
First astronomical body space explorations
The first artificial object to reach another celestial body was Luna 2 reaching the Moon in 1959. The first soft landing on another celestial body was performed by Luna 9 landing on the Moon on 3 February 1966. Luna 10 became the first artificial satellite of the Moon, entering in a lunar orbit on 3 April 1966.
The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969, landing on the Moon. There have been a total of six spacecraft with humans landing on the Moon starting from 1969 to the last human landing in 1972.
The first interplanetary flyby was the 1961 Venera 1 flyby of Venus, though the 1962 Mariner 2 was the first flyby of Venus to return data (closest approach 34,773 kilometers). Pioneer 6 was the first satellite to orbit the Sun, launched on 16 December 1965. The other planets were first flown by in 1965 for Mars by Mariner 4, 1973 for Jupiter by Pioneer 10, 1974 for Mercury by Mariner 10, 1979 for Saturn by Pioneer 11, 1986 for Uranus by Voyager 2, 1989 for Neptune by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively. This accounts for flybys of each of the eight planets in the Solar System, the Sun, the Moon, and Ceres and Pluto (two of the five recognized dwarf planets).
The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7, which returned data to Earth for 23 minutes from Venus. In 1975, Venera 9 was the first to return images from the surface of another planet, returning images from Venus. In 1971, the Mars 3 mission achieved the first soft landing on Mars returning data for almost 20 seconds. Later, much longer duration surface missions were achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission. Venus and Mars are the two planets outside of Earth on which humans have conducted surface missions with uncrewed robotic spacecraft.
First space station
Salyut 1 was the first space station of any kind, launched into low Earth orbit by the Soviet Union on 19 April 1971. The International Space Station (ISS) is the larger and older of the two currently fully functional space stations, inhabited continuously since the year 2000. The other, the Tiangong space station built by China, is now fully crewed and operational.
First interstellar space flight
Voyager 1 became the first human-made object to leave the Solar System into interstellar space on 25 August 2012. The probe passed the heliopause at 121 AU to enter interstellar space.
Farthest from Earth
The Apollo 13 flight passed the far side of the Moon at an altitude of above the lunar surface, and 400,171 km (248,655 mi) from Earth, setting in 1970 the record for the farthest humans have ever traveled from Earth.
Voyager 1 was at a distance of from Earth. It is the most distant human-made object from Earth.
Targets of exploration
Starting in the mid-20th century, probes and then human missions were sent into Earth orbit, and then on to the Moon. Also, probes were sent throughout the known Solar System, and into solar orbit. Uncrewed spacecraft have been sent into orbit around Saturn, Jupiter, Mars, Venus, and Mercury by the 21st century, and the most distant active spacecraft, Voyager 1 and 2, have traveled beyond 100 times the Earth–Sun distance. Their instruments remained capable enough that it is thought the probes have left the Sun's heliosphere, a sort of bubble of particles made in the Galaxy by the Sun's solar wind.
The Sun
The Sun is a major focus of space exploration. Being above the atmosphere in particular and Earth's magnetic field gives access to the solar wind and infrared and ultraviolet radiations that cannot reach Earth's surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched and still others have had solar observation as a secondary objective. Parker Solar Probe, launched in 2018, will approach the Sun to within 1/9th the orbit of Mercury.
Mercury
Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions have been the only missions that have made close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b). A third mission to Mercury, BepiColombo, a joint mission between Japan and the European Space Agency, is scheduled to arrive in 2025 and is to include two probes. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10's flybys.
Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
Venus
Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first flyby was the 1961 Venera 1, though the 1962 Mariner 2 was the first flyby to successfully return data. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967, Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975, with the Soviet orbiter Venera 9, some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.
Earth
Space exploration has been used as a tool to understand Earth as a celestial object. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.
For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States' first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth's magnetic fields, which currently renders construction of habitable space stations above 1000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth's atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.
Moon
The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.
In 1959, the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966, the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon's surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters. China's Chang'e 4 in 2019 and Chang'e 6 in 2024 achieved the world's first landing and sample return on the far side of the Moon. India's Chandrayaan-3 in 2023 achieved the world's first landing on the lunar south pole region.
Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long. The Apollo 17 mission in 1972 marked the sixth landing and the most recent human visit. Artemis II is scheduled to complete a crewed flyby of the Moon in 2025, and Artemis III, scheduled for launch no earlier than 2026, will perform the first lunar landing since Apollo 17. Robotic missions are still pursued vigorously.
Mars
The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the Red Planet but also yield further insight into the past, and possible future, of Earth.
The exploration of Mars has come at a considerable financial cost with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, with some failing before they even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of The Great Galactic Ghoul which subsists on a diet of Mars probes. This phenomenon is also informally known as the "Mars Curse". In contrast to overall high failure rates in the exploration of Mars, India has become the first country to achieve success on its maiden attempt. India's Mars Orbiter Mission (MOM) is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of 450 crore. The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it was launched on 19 July 2020 and went into orbit around Mars on 9 February 2021. The uncrewed exploratory probe was named "Hope Probe" and was sent to Mars to study its atmosphere in detail.
Phobos
The Russian space mission Fobos-Grunt, which launched on 9 November 2011, experienced a failure leaving it stranded in low Earth orbit. It was to begin exploration of Phobos and of Martian circumterrestrial orbit, and study whether the moons of Mars, or at least Phobos, could be a "trans-shipment point" for spaceships traveling to Mars.
Asteroids
Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo's planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object, 433 Eros. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA's Dawn spacecraft, launched in 2007.
Hayabusa was a robotic spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid's shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid twice to collect samples. The spacecraft returned to Earth on 13 June 2010.
Jupiter
The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been "flybys", in which detailed observations are taken without the probe landing or entering orbit; such as in Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only spacecraft to have entered the planet's orbit. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is precluded.
Reaching Jupiter from Earth requires a delta-v of 9.2 km/s, which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit. Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.
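As a rough illustration of where such delta-v figures come from, the sketch below estimates a simple Hohmann transfer between Earth's and Jupiter's (assumed circular, coplanar) heliocentric orbits. It ignores the planets' own gravity wells, launch from Earth's surface and the Oberth effect, so it yields a ballpark figure rather than the exact values quoted above.
```python
import math

MU_SUN = 1.32712440018e20  # m^3/s^2, gravitational parameter of the Sun
AU = 1.495978707e11        # m, astronomical unit

def hohmann_delta_v(r1, r2, mu=MU_SUN):
    """Return (dv_departure, dv_arrival) in m/s for a Hohmann transfer
    between two circular, coplanar heliocentric orbits of radii r1 and r2."""
    v1 = math.sqrt(mu / r1)                        # circular speed at r1
    v2 = math.sqrt(mu / r2)                        # circular speed at r2
    a = 0.5 * (r1 + r2)                            # semi-major axis of the transfer ellipse
    v_peri = math.sqrt(mu * (2.0 / r1 - 1.0 / a))  # speed at perihelion of the transfer
    v_apo = math.sqrt(mu * (2.0 / r2 - 1.0 / a))   # speed at aphelion of the transfer
    return v_peri - v1, v2 - v_apo

dv1, dv2 = hohmann_delta_v(1.0 * AU, 5.2 * AU)     # Earth's orbit -> Jupiter's orbit
print(f"departure burn ≈ {dv1/1000:.1f} km/s, arrival burn ≈ {dv2/1000:.1f} km/s")
```
The departure burn alone comes out near 9 km/s in this simplified picture, which is why gravity assists are so attractive for outer-planet missions.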
Jupiter has 95 known moons, many of which have relatively little known information about them.
Saturn
Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1982 by Voyager 2 and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.
Saturn has at least 62 known moons, although the exact number is debatable since Saturn's rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan is also the only object in the Outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.
Uranus
The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet's unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus including all five of the previously known moons, while discovering an additional ten previously unknown moons.
Images of Uranus proved to have a uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet's unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the Moons of Uranus, including evidence that Miranda had been unusually geologically active.
Neptune
The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of . The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.
Although the extremely uniform appearance of Uranus during Voyager 2's visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter's Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h. Voyager 2 also examined Neptune's ring and moon system. It discovered complete rings and additional partial ring "arcs" around Neptune. In addition to examining Neptune's three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune's largest moon, Triton, is a captured Kuiper belt object.
Pluto
The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn's moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.
After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003. New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.
Kuiper Belt Objects
The New Horizons mission also performed a flyby of the small Kuiper belt planetesimal Arrokoth in 2019, as part of its first extended mission.
Comets
Although many comets have been studied from Earth, sometimes with centuries' worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet flyby (of 21P/Giacobini–Zinner) before joining the Halley Armada studying Halley's Comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition, and the Stardust mission returned samples of another comet's tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.
Deep space exploration
Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space. Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.
Some of the best candidates for future deep space engine technologies include antimatter, nuclear power and beamed propulsion. Beamed propulsion appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.
Future of space exploration
Breakthrough Starshot
Breakthrough Starshot is a research and engineering project by the Breakthrough Initiatives to develop a proof-of-concept fleet of light sail spacecraft named StarChip, to be capable of making the journey to the Alpha Centauri star system 4.37 light-years away. It was founded in 2016 by Yuri Milner, Stephen Hawking, and Mark Zuckerberg.
Asteroids
An article in the science magazine Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars. In order to make such an approach viable, three requirements need to be fulfilled: first, "a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit"; second, "extending flight duration and distance capability to ever-increasing ranges out to Mars"; and finally, "developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin". Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them without great risk of radiation exposure.
Artemis program
The Artemis program is an ongoing crewed spaceflight program carried out by NASA, U.S. commercial spaceflight companies, and international partners such as ESA, with the goal of landing "the first woman and the next man" on the Moon, specifically at the lunar south pole region. Artemis would be the next step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for private companies to build a lunar economy, and eventually sending humans to Mars.
In 2017, the lunar campaign was authorized by Space Policy Directive 1, using various ongoing spacecraft programs such as Orion, the Lunar Gateway, Commercial Lunar Payload Services, and adding an undeveloped crewed lander. The Space Launch System will serve as the primary launch vehicle for Orion, while commercial launch vehicles are planned for use to launch other elements of the campaign. NASA requested $1.6 billion in additional funding for Artemis for fiscal year 2020, while the U.S. Senate Appropriations Committee requested from NASA a five-year budget profile, which is needed for evaluation and approval by the U.S. Congress. As of 2024, the first Artemis mission had been launched in 2022, with the second mission, a crewed lunar flyby, planned for 2025. Construction on the Lunar Gateway is underway, with initial capabilities set for the 2025–2027 timeframe. The first CLPS lander touched down in 2024, marking the first US spacecraft to land on the Moon since Apollo 17.
Rationales
The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs often showed ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program. It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars worth of minerals and metals. Such expeditions could generate substantial revenue. In addition, it has been argued that space exploration programs help inspire youth to study in science and engineering. Space exploration also gives scientists the ability to perform experiments in other settings and expand humanity's knowledge.
Another claim is that space exploration is a necessity to humankind and that staying on Earth will eventually lead to extinction. Some of the reasons are a lack of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, renowned British theoretical physicist, said, "I don't think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I'm an optimist. We will reach out to the stars." Author Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight. He argued that humanity's choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.
These motivations could be attributed to one of the first rocket scientists in NASA, Wernher von Braun, and his vision of humans moving beyond Earth. The basis of this plan was to:
Development of multi-stage rockets capable of placing satellites, animals, and humans in space.
Development of large, winged reusable spacecraft capable of carrying humans and equipment into Earth orbit in a way that made space access routine and cost-effective.
Construction of a large, permanently occupied space station to be used as a platform both to observe Earth and from which to launch deep space expeditions.
Launching the first human flights around the Moon, leading to the first landings of humans on the Moon, with the intent of exploring that body and establishing permanent lunar bases.
Assembly and fueling of spaceships in Earth orbit for the purpose of sending humans to Mars with the intent of eventually colonizing that planet.
Known as the Von Braun Paradigm, the plan was formulated to lead humans in the exploration of space. Von Braun's vision of human space exploration served as the model for efforts in space exploration well into the twenty-first century, with NASA incorporating this approach into the majority of its projects. The steps were followed out of order, as seen by the Apollo program reaching the Moon before the space shuttle program was started, which in turn was used to complete the International Space Station. Von Braun's paradigm formed NASA's drive for human exploration, in the hope that humans will one day explore the far reaches of the universe.
NASA has produced a series of public service announcement videos supporting the concept of space exploration.
Overall, the U.S. public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is "a good investment", compared to 21% who did not.
Human nature
Space advocacy and space policy regularly invoke exploration as part of human nature.
Topics
Spaceflight
Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.
Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.
A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
Satellites
Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.
Commercialization of space
The commercialization of space began with the launching of private satellites by NASA and other space agencies. Current examples of the commercial use of space include satellite navigation systems, satellite television, satellite communications (such as internet services) and satellite radio. Human spaceflight was seen as the next step in the commercialization of space. Flying humans safely to and from space had become routine for NASA and Russia, but reusable spacecraft were an entirely new engineering challenge, something previously seen only in novels and films like Star Trek and War of the Worlds. Astronaut Buzz Aldrin supported the development of a reusable vehicle like the space shuttle. Aldrin held that reusable spacecraft were the key to making space travel affordable, stating that "passenger space travel is a huge potential market big enough to justify the creation of reusable launch vehicles". Space tourism, undertaken for personal pleasure, is a further step in the use of reusable vehicles in the commercialization of space.
Private spaceflight companies such as SpaceX and Blue Origin, and commercial space station ventures such as Axiom Space and the Bigelow Commercial Space Station, have changed the cost and overall landscape of space exploration, and are expected to continue to do so in the near future.
Alien life
Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology. It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: έξω, exo, "outside"). The term "xenobiology" has been used as well, but it is technically incorrect because it literally means "biology of the foreigners". Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth. In the Solar System, some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.
Human spaceflight and habitation
To date, the longest human occupation of space is the International Space Station, which has been in continuous use for . Valeri Polyakov's record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. The health effects of space have been well documented through years of research conducted in the field of aerospace medicine. Analog environments similar to those experienced in space travel (like deep sea submarines) have been used in this research to further explore the relationship between isolation and extreme environments. It is imperative that the health of the crew be maintained, as any deviation from baseline may compromise the integrity of the mission as well as the safety of the crew; hence, astronauts must endure rigorous medical screenings and tests prior to embarking on any missions.

However, it does not take long for the environmental dynamics of spaceflight to begin taking a toll on the human body; for example, space motion sickness (SMS) – a condition which affects the neurovestibular system and culminates in mild to severe signs and symptoms such as vertigo, dizziness, fatigue, nausea, and disorientation – plagues almost all space travelers within their first few days in orbit. Space travel can also have an impact on the psyche of the crew members, as delineated in anecdotal writings composed after their retirement. Space travel can adversely affect the body's natural biological clock (circadian rhythm); sleep patterns, causing sleep deprivation and fatigue; and social interaction; consequently, residing in a low Earth orbit (LEO) environment for a prolonged amount of time can result in both mental and physical exhaustion.

Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, problems with eyesight, and radiation exposure. The lack of gravity causes fluid to rise upward, which can cause pressure to build up in the eye, resulting in vision problems; the loss of bone minerals and density; cardiovascular deconditioning; and decreased endurance and muscle mass.
Radiation is an insidious health hazard to space travelers as it is invisible and can cause cancer. When above the Earth's magnetic field, spacecraft are no longer protected from the sun's radiation; the danger of radiation is even more potent in deep space. The hazards of radiation can be ameliorated through protective shielding on the spacecraft, alerts, and dosimetry.
Fortunately, with new and rapidly evolving technological advancements, those in Mission Control are able to monitor the health of their astronauts more closely using telemedicine. One may not be able to completely evade the physiological effects of space flight, but those effects can be mitigated. For example, medical systems aboard space vessels such as the International Space Station (ISS) are well equipped and designed to counteract the effects of lack of gravity and weightlessness; on-board treadmills can help prevent muscle loss and reduce the risk of developing premature osteoporosis. Additionally, a crew medical officer is appointed for each ISS mission and a flight surgeon is available 24/7 via the ISS Mission Control Center located in Houston, Texas. Although the interactions are intended to take place in real time, communications between the space and terrestrial crew may become delayed – sometimes by as much as 20 minutes – as their distance from each other increases when the spacecraft moves further out of low Earth orbit; because of this the crew are trained and need to be prepared to respond to any medical emergencies that may arise on the vessel as the ground crew are hundreds of miles away.
Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a "steppingstone" to the other planets, especially Mars. At the end of 2006, NASA announced they were planning to build a permanent Moon base with continual presence by 2024.
Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been ratified by all spacefaring nations. Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.
Human representation and participation
Participation and representation of humanity in space has been an issue ever since the first phase of space exploration. Some rights of non-spacefaring countries have been secured through international space law, which declares space the "province of all mankind" and treats spaceflight as its resource, though the sharing of space for all humanity is still criticized as imperialist and lacking. In addition to international inclusion, the inclusion of women and people of colour has also been lacking. To make spaceflight more inclusive, some organizations, such as the Justspace Alliance and the IAU's Inclusive Astronomy initiative, have been formed in recent years.
Women
The first woman to go to space was Valentina Tereshkova. She flew in 1963, but it was not until the 1980s that another woman flew to space. At the time, all astronauts were required to be military test pilots, a career closed to women, which is one reason for the delay in allowing women to join space crews. After the rule changed, Svetlana Savitskaya became the second woman to go to space; she was also from the Soviet Union. Sally Ride became the next woman in space and the first woman to fly to space through the United States program.
Since then, eleven other countries have allowed women astronauts. The first all-female spacewalk occurred in 2019 and was conducted by Christina Koch and Jessica Meir, who had both previously participated in spacewalks with NASA. The first lunar landing to include a woman is planned for 2026.
Despite these developments, women are underrepresented among astronauts and especially cosmonauts. Issues that block potential applicants from the programs, and limit the space missions they are able to go on, include:
agencies limiting women to half as much time in space as men, arguing that there may be unresearched additional risks of cancer.
a lack of space suits sized appropriately for female astronauts.
Art
Artistry in and from space ranges from captured and arranged material, such as Yuri Gagarin's selfie in space or the image The Blue Marble, through drawings such as the first made in space by cosmonaut and artist Alexei Leonov and music videos such as Chris Hadfield's cover of Space Oddity on board the ISS, to permanent installations on celestial bodies, such as on the Moon.
| Technology | Space | null |
28437 | https://en.wikipedia.org/wiki/Simple%20harmonic%20motion | Simple harmonic motion | In mechanics and physics, simple harmonic motion (sometimes abbreviated as SHM) is a special type of periodic motion an object experiences by means of a restoring force whose magnitude is directly proportional to the distance of the object from an equilibrium position and acts towards the equilibrium position. It results in an oscillation that is described by a sinusoid which continues indefinitely (if uninhibited by friction or any other dissipation of energy).
Simple harmonic motion can serve as a mathematical model for a variety of motions, but is typified by the oscillation of a mass on a spring when it is subject to the linear elastic restoring force given by Hooke's law. The motion is sinusoidal in time and demonstrates a single resonant frequency. Other phenomena can be modeled by simple harmonic motion, including the motion of a simple pendulum, although for it to be an accurate model, the net force on the object at the end of the pendulum must be proportional to the displacement (and even so, it is only a good approximation when the angle of the swing is small; see small-angle approximation). Simple harmonic motion can also be used to model molecular vibration.
Simple harmonic motion provides a basis for the characterization of more complicated periodic motion through the techniques of Fourier analysis.
Introduction
The motion of a particle moving along a straight line with an acceleration whose direction is always toward a fixed point on the line and whose magnitude is proportional to the displacement from the fixed point is called simple harmonic motion.
In the diagram, a simple harmonic oscillator, consisting of a weight attached to one end of a spring, is shown. The other end of the spring is connected to a rigid support such as a wall. If the system is left at rest at the equilibrium position then there is no net force acting on the mass. However, if the mass is displaced from the equilibrium position, the spring exerts a restoring elastic force that obeys Hooke's law.
Mathematically, the restoring force is F = −kx,
where F is the restoring elastic force exerted by the spring (in SI units: N), k is the spring constant (N·m−1), and x is the displacement from the equilibrium position (in metres).
For any simple mechanical harmonic oscillator:
When the system is displaced from its equilibrium position, a restoring force that obeys Hooke's law tends to restore the system to equilibrium.
Once the mass is displaced from its equilibrium position, it experiences a net restoring force. As a result, it accelerates and starts going back to the equilibrium position. When the mass moves closer to the equilibrium position, the restoring force decreases. At the equilibrium position, the net restoring force vanishes. However, at x = 0, the mass has momentum because of the acceleration that the restoring force has imparted. Therefore, the mass continues past the equilibrium position, compressing the spring. A net restoring force then slows it down until its velocity reaches zero, whereupon it is accelerated back to the equilibrium position again.
As long as the system has no energy loss, the mass continues to oscillate. Thus simple harmonic motion is a type of periodic motion. If energy is lost in the system, then the mass exhibits damped oscillation.
Note that if the real-space and phase-space plots are not co-linear, the phase-space motion becomes elliptical. The area enclosed depends on the amplitude and the maximum momentum.
Dynamics
In Newtonian mechanics, for one-dimensional simple harmonic motion, the equation of motion, which is a second-order linear ordinary differential equation with constant coefficients, can be obtained by means of Newton's second law and Hooke's law for a mass on a spring: m d²x/dt² = −kx,
where m is the inertial mass of the oscillating body, x is its displacement from the equilibrium (or mean) position, and k is a constant (the spring constant for a mass on a spring).
Therefore, d²x/dt² = −(k/m)x.
Solving the differential equation above produces a solution that is a sinusoidal function: x(t) = c₁ cos(ωt) + c₂ sin(ωt),
where ω = √(k/m).
The meaning of the constants c₁ and c₂ can easily be found: setting t = 0 in the equation above shows that c₁ = x(0), the initial position of the particle; taking the derivative of that equation and evaluating at zero shows that ωc₂ = x′(0), so that c₂ is the initial speed of the particle divided by the angular frequency, c₂ = x′(0)/ω. Thus we can write:
This equation can also be written in the form x(t) = A cos(ωt − φ),
where A = √(c₁² + c₂²) and tan φ = c₂/c₁,
or equivalently c₁ = A cos φ and c₂ = A sin φ.
In the solution, c₁ and c₂ are two constants determined by the initial conditions (specifically, the initial position at time t = 0 is c₁, while the initial velocity is ωc₂), and the origin is set to be the equilibrium position. Each of these constants carries a physical meaning of the motion: A is the amplitude (maximum displacement from the equilibrium position), ω is the angular frequency, and φ is the initial phase.
Using the techniques of calculus, the velocity and acceleration as a function of time can be found: v(t) = dx/dt = −Aω sin(ωt − φ) and a(t) = d²x/dt² = −Aω² cos(ωt − φ) = −ω²x(t).
Speed: |v(t)| = Aω|sin(ωt − φ)|
Maximum speed: Aω (at the equilibrium point)
Maximum acceleration: Aω² (at the extreme points)
By definition, if a mass m is under SHM its acceleration is directly proportional to displacement: a(x) = −ω²x,
where ω² = k/m.
Since ω = 2πf, the frequency is f = (1/2π)√(k/m),
and, since T = 1/f where T is the time period, T = 2π√(m/k).
These equations demonstrate that the simple harmonic motion is isochronous (the period and frequency are independent of the amplitude and the initial phase of the motion).
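To make these relations concrete, here is a minimal Python sketch; the function name and the numeric values for m, k, x0 and v0 are illustrative assumptions, not taken from the article.

```python
import math

def shm_state(m, k, x0, v0, t):
    """Position, velocity and acceleration of a mass-spring oscillator at time t,
    using the standard solution x(t) = A*cos(w*t - phi)."""
    w = math.sqrt(k / m)                 # angular frequency
    A = math.hypot(x0, v0 / w)           # amplitude from the initial conditions
    phi = math.atan2(v0 / w, x0)         # initial phase
    x = A * math.cos(w * t - phi)        # displacement
    v = -A * w * math.sin(w * t - phi)   # velocity
    a = -w * w * x                       # acceleration is proportional to -x
    return x, v, a

# The period and frequency are independent of the amplitude (isochronism):
m, k = 0.5, 200.0                        # example values: kg, N/m
T = 2 * math.pi * math.sqrt(m / k)
print(f"T = {T:.4f} s, f = {1/T:.4f} Hz")
print(shm_state(m, k, x0=0.02, v0=0.0, t=T / 4))
```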
Energy
Substituting ω² = k/m, the kinetic energy K of the system at time t is K = ½mv² = ½kA² sin²(ωt − φ),
and the potential energy is U = ½kx² = ½kA² cos²(ωt − φ).
In the absence of friction and other energy loss, the total mechanical energy has a constant value E = K + U = ½kA².
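A quick numerical check of this conservation statement, under assumed example values (illustrative only), is sketched below; the total K + U stays at ½kA² at every instant.

```python
import math

m, k, A = 0.5, 200.0, 0.02               # assumed example values
w = math.sqrt(k / m)

for t in [0.0, 0.01, 0.05, 0.1]:
    x = A * math.cos(w * t)
    v = -A * w * math.sin(w * t)
    kinetic = 0.5 * m * v * v             # (1/2) m v^2
    potential = 0.5 * k * x * x           # (1/2) k x^2
    print(f"t={t:5.2f} s  K+U = {kinetic + potential:.6f} J")  # always (1/2) k A^2
```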
Examples
The following physical systems are some examples of simple harmonic oscillators.
Mass on a spring
A mass m attached to a spring of spring constant k exhibits simple harmonic motion in closed space. The equation for the period, T = 2π√(m/k),
shows that the period of oscillation is independent of the amplitude, though in practice the amplitude should be small. The above equation is also valid in the case when an additional constant force is being applied on the mass, i.e. the additional constant force cannot change the period of oscillation.
Uniform circular motion
Simple harmonic motion can be considered the one-dimensional projection of uniform circular motion. If an object moves with angular speed ω around a circle of radius r centered at the origin of the xy-plane, then its motion along each coordinate is simple harmonic motion with amplitude r and angular frequency ω.
Oscillatory motion
The motion of a body in which it moves to and fro about a definite point is also called oscillatory motion or vibratory motion. The time period can be calculated by T = 2π√(l/g),
where l is the distance from the axis of rotation to the center of mass of the object undergoing SHM and g is the gravitational acceleration. This is analogous to the mass–spring system.
Mass of a simple pendulum
In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length l with gravitational acceleration g is given by T = 2π√(l/g).
This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity, g; therefore, a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of g varies slightly over the surface of the Earth, the time period will vary slightly from place to place and will also vary with height above sea level.
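A short sketch of the period formula, comparing Earth and Moon surface gravity; the g values used here are standard approximations included only for illustration.

```python
import math

def pendulum_period(length_m, g):
    """Small-angle pendulum period T = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

L = 1.0                                   # metres
print(pendulum_period(L, 9.81))           # Earth: about 2.01 s
print(pendulum_period(L, 1.62))           # Moon: about 4.94 s (slower, as noted above)
```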
This approximation is accurate only for small angles because the angular acceleration is proportional to the sine of the displacement angle: −mgl sin θ = Iα,
where I is the moment of inertia. When θ is small, sin θ ≈ θ and therefore the expression becomes −mglθ = Iα,
which makes angular acceleration directly proportional and opposite to θ, satisfying the definition of simple harmonic motion (that net force is directly proportional to the displacement from the mean position and is directed towards the mean position).
Scotch yoke
A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion. The linear motion can take various forms depending on the shape of the slot, but the basic yoke with a constant rotation speed produces a linear motion that is simple harmonic in form.
| Physical sciences | Basics_10 | null |
28442 | https://en.wikipedia.org/wiki/Sorting%20algorithm | Sorting algorithm | In computer science, a sorting algorithm is an algorithm that puts elements of a list into an order. The most frequently used orders are numerical order and lexicographical order, and either ascending or descending. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output.
Formally, the output of any sorting algorithm must satisfy two conditions:
The output is in monotonic order (each element is no smaller/larger than the previous element, according to the required order).
The output is a permutation (a reordering, yet retaining all of the original elements) of the input.
Although some algorithms are designed for sequential access, the highest-performing algorithms assume data is stored in a data structure which allows random access.
History and concepts
From the beginning of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. Among the authors of early sorting algorithms around 1951 was Betty Holberton, who worked on ENIAC and UNIVAC. Bubble sort was analyzed as early as 1956. Asymptotically optimal algorithms have been known since the mid-20th century, yet new algorithms are still being invented, with the widely used Timsort dating to 2002 and the library sort being first published in 2006.
Comparison sorting algorithms have a fundamental requirement of Ω(n log n) comparisons (some input sequences will require a multiple of n log n comparisons, where n is the number of elements in the array to be sorted). Algorithms not based on comparisons, such as counting sort, can have better performance.
Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures such as heaps and binary trees, randomized algorithms, best, worst and average case analysis, time–space tradeoffs, and upper and lower bounds.
Sorting small arrays optimally (in the fewest comparisons and swaps) or fast (i.e. taking into account machine-specific details) is still an open research problem, with solutions only known for very small arrays (<20 elements). Similarly, optimal (by various definitions) sorting on a parallel machine is an open research topic.
Classification
Sorting algorithms can be classified by:
Computational complexity
Best, worst and average case behavior in terms of the size of the list. For typical serial sorting algorithms, good behavior is O(n log n), with parallel sort in O(log2 n), and bad behavior is O(n2). Ideal behavior for a serial sort is O(n), but this is not possible in the average case. Optimal parallel sorting is O(log n).
Swaps for "in-place" algorithms.
Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in-place". Strictly, an in-place sort needs only O(1) memory beyond the items being sorted; sometimes O(log n) additional memory is considered "in-place".
Recursion: Some algorithms are either recursive or non-recursive, while others may be both (e.g., merge sort).
Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values).
Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include cycle sort and heapsort.
Whether the algorithm is serial or parallel. The remainder of this discussion almost exclusively concentrates on serial algorithms and assumes serial operation.
Adaptability: Whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known as adaptive.
Online: An algorithm such as Insertion Sort that is online can sort a constant stream of input.
Stability
Stable sort algorithms sort equal elements in the same order that they appear in the input. For example, consider sorting a hand of cards by rank while ignoring suit. Because the suits are ignored, there are multiple different correctly sorted versions of the original list. Stable sorting algorithms choose one of these, according to the following rule: if two items compare as equal (like two cards of rank 5), then their relative order will be preserved, i.e. if one comes before the other in the input, it will come before the other in the output.
Stability is important to preserve order over multiple sorts on the same data set. For example, say that student records consisting of name and class section are sorted dynamically, first by name, then by class section. If a stable sorting algorithm is used in both cases, the sort-by-class-section operation will not change the name order; with an unstable sort, it could be that sorting by section shuffles the name order, resulting in a nonalphabetical list of students.
More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the key. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if whenever there are two records R and S with the same key, and R appears before S in the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. Stability is also not an issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original input list as a tie-breaker. Remembering this order, however, may require additional time and space.
One application for stable sorting algorithms is sorting a list using a primary and secondary key. For example, suppose we wish to sort a hand of cards such that the suits are in the order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are sorted by rank. This can be done by first sorting the cards by rank (using any sort), and then doing a stable sort by suit:
Within each suit, the stable sort preserves the ordering by rank that was already done. This idea can be extended to any number of keys and is utilised by radix sort. The same effect can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g., compares first by suit, and then compares by rank if the suits are the same.
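A minimal Python sketch of this two-pass idea, relying on the fact that Python's built-in sort is stable; the card values and the rank and suit orderings are illustrative.

```python
# Sort by the secondary key (rank) first, then stable-sort by the primary key
# (suit); the second pass preserves the rank order within each suit.
cards = [("5", "H"), ("2", "S"), ("5", "S"), ("K", "H"), ("2", "H")]

rank_order = {r: i for i, r in enumerate("23456789TJQKA")}
suit_order = {"C": 0, "D": 1, "H": 2, "S": 3}    # clubs, diamonds, hearts, spades

cards.sort(key=lambda c: rank_order[c[0]])        # first pass: by rank
cards.sort(key=lambda c: suit_order[c[1]])        # second pass: stable, by suit
print(cards)   # within each suit, ranks remain in ascending order
```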
Comparison of algorithms
This analysis assumes that the length of each key is constant and that all comparisons, swaps and other operations can proceed in constant time.
Legend:
is the number of records to be sorted.
Comparison column has the following ranking classifications: "Best", "Average" and "Worst" if the time complexity is given for each case.
"Memory" denotes the amount of additional storage required by the algorithm.
The run times and the memory requirements listed are inside big O notation, hence the base of the logarithms does not matter.
The notation log2 n means (log n)2.
Comparison sorts
Below is a table of comparison sorts. Mathematical analysis demonstrates a comparison sort cannot perform better than O(n log n) on average.
Non-comparison sorts
The following table describes integer sorting algorithms and other sorting algorithms that are not comparison sorts. These algorithms are not subject to the Ω(n log n) bound, provided they meet the unit-cost random-access machine model described below.
Complexities below assume n items to be sorted, with keys of size k, digit size d, and r the range of numbers to be sorted.
Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n ≪ 2^k, where ≪ means "much less than".
In the unit-cost random-access machine model, algorithms with running time of n·(k/d), such as radix sort, still take time proportional to Θ(n log n), because n is limited to be not more than 2^(k/d), and a larger number of elements to sort would require a bigger k in order to store them in the memory.
Samplesort can be used to parallelize any of the non-comparison sorts, by efficiently distributing data into several buckets and then passing down sorting to several processors, with no need to merge as buckets are already sorted between each other.
Others
Some algorithms are slow compared to those discussed above, such as the bogosort with unbounded run time and the stooge sort which has O(n2.7) run time. These sorts are usually described for educational purposes to demonstrate how the run time of algorithms is estimated. The following table describes some sorting algorithms that are impractical for real-life use in traditional software contexts due to extremely poor performance or specialized hardware requirements.
Theoretical computer scientists have detailed other sorting algorithms that provide better than O(n log n) time complexity assuming additional constraints, including:
Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite size, taking time and O(n) space.
A randomized integer sorting algorithm taking expected time and O(n) space.
One of the authors of the previously mentioned algorithm also claims to have discovered an algorithm taking time and O(n) space, sorting real numbers. Further claiming that, without any added assumptions on the input, it can be modified to achieve time and O(n) space.
Popular sorting algorithms
While there are a large number of sorting algorithms, in practical implementations a few algorithms predominate. Insertion sort is widely used for small data sets, while for large data sets an asymptotically efficient sort is used, primarily heapsort, merge sort, or quicksort. Efficient implementations generally use a hybrid algorithm, combining an asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom of a recursion. Highly tuned implementations use more sophisticated variants, such as Timsort (merge sort, insertion sort, and additional logic), used in Android, Java, and Python, and introsort (quicksort and heapsort), used (in variant forms) in some C++ sort implementations and in .NET.
For more restricted data, such as numbers in a fixed interval, distribution sorts such as counting sort or radix sort are widely used. Bubble sort and variants are rarely used in practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intuitively generally use insertion sorts for small sets. For larger sets, people often first bucket, such as by initial letter, and multiple bucketing allows practical sorting of very large sets. Often space is relatively cheap, such as by spreading objects out on the floor or over a large area, but operations are expensive, particularly moving an object a large distance – locality of reference is important. Merge sorts are also practical for physical objects, particularly as two hands can be used, one for each list to merge, while other algorithms, such as heapsort or quicksort, are poorly suited for human use. Other algorithms, such as library sort, a variant of insertion sort that leaves spaces, are also practical for physical use.
Simple sorts
Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.
Insertion sort
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list similar to how one puts money in their wallet. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shellsort is a variant of insertion sort that is more efficient for larger lists.
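A minimal, illustrative Python implementation of insertion sort (a sketch, not drawn from any particular library):

```python
def insertion_sort(a):
    """In-place insertion sort."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right to open a gap for key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]
```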
Selection sort
Selection sort is an in-place comparison sort. It has O(n2) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and also has performance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps and thus is useful where swapping is very expensive.
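A minimal selection sort sketch in Python, illustrating the "no more than n swaps" property mentioned above:

```python
def selection_sort(a):
    """In-place selection sort: at most one swap per outer pass."""
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):        # find the index of the minimum remainder
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]      # single swap for this position
    return a

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]
```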
Efficient sorts
Practical general sorting algorithms are almost always based on an algorithm with average time complexity (and generally worst-case complexity) O(n log n), of which the most common are heapsort, merge sort, and quicksort. Each has advantages and drawbacks, with the most significant being that simple implementation of merge sort uses O(n) additional space, and simple implementation of quicksort has O(n2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency on real-world data various modifications are used. First, the overhead of these algorithms becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted data or almost sorted data – these are common in real-world data and can be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heapsort).
Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential access, not random access. However, it has additional O(n) space complexity and involves a large number of copies in simple implementations.
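A simple top-down merge sort sketch in Python; it is the out-of-place variant with O(n) auxiliary space described above.

```python
def merge_sort(a):
    """Top-down merge sort (stable)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements in input order
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))
```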
Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python and Java (as of JDK7). Merge sort itself is the standard routine in Perl, among others, and has been used in Java at least since 2000 in JDK1.3.
Heapsort
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst-case complexity.
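An illustrative in-place heapsort in Python, using an explicit max-heap sift-down rather than any library heap:

```python
def heapsort(a):
    """In-place heapsort: build a max-heap, then repeatedly move the root to the end."""
    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                        # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):       # heapify
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):               # extract maxima one by one
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

print(heapsort([12, 11, 13, 5, 6, 7]))
```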
Quicksort
Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields an average time complexity of O(n log n), with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, quicksort is one of the most popular sorting algorithms and is available in many standard programming libraries.
The important caveat about quicksort is that its worst-case performance is O(n2); while this is rare, in naive implementations (choosing the first or last element as pivot) this occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O(n2) performance, but good choice of pivots yields O(n log n) performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot then the algorithm works in O(n log n). Finding the median, such as by the median of medians selection algorithm is however an O(n) operation on unsorted lists and therefore exacts significant overhead with sorting. In practice choosing a random pivot almost certainly yields O(n log n) performance.
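A short quicksort sketch in Python; for clarity it is written out-of-place (unlike the in-place partitioning described above) and it picks a random pivot, which makes consistently bad pivot choices very unlikely.

```python
import random

def quicksort(a):
    """Out-of-place quicksort with a random pivot (illustrative)."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 6, 2, 7, 1, 8, 5]))
```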
If a guarantee of O(n log n) performance is important, there is a simple modification to achieve that. The idea, due to Musser, is to set a limit on the maximum depth of recursion. If that limit is exceeded, then sorting is continued using the heapsort algorithm. Musser proposed that the limit should be , which is approximately twice the maximum recursion depth one would expect on average with a randomly ordered array.
Shellsort
Shellsort was invented by Donald Shell in 1959. It improves upon insertion sort by moving out-of-order elements more than one position at a time. The concept behind Shellsort is that insertion sort performs in O(kn) time, where k is the greatest distance between two out-of-place elements. This means that in general it performs in O(n2), but for data that is mostly sorted, with only a few elements out of place, it performs faster. So, by first sorting elements far away, and progressively shrinking the gap between the elements to sort, the final sort computes much faster. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
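A Shellsort sketch in Python using the simple n/2, n/4, ..., 1 gap sequence as an assumption; better gap sequences exist, as the complexity discussion below notes.

```python
def shellsort(a):
    """Shellsort: gapped insertion sorts with a shrinking gap."""
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):      # insertion sort over elements gap apart
            temp, j = a[i], i
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
        gap //= 2
    return a

print(shellsort([23, 12, 1, 8, 34, 54, 2, 3]))
```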
The worst-case time complexity of Shellsort is an open problem and depends on the gap sequence used, with known complexities ranging from O(n2) to O(n4/3) and Θ(n log2 n). This, combined with the fact that Shellsort is in-place, only needs a relatively small amount of code, and does not require use of the call stack, makes it useful in situations where memory is at a premium, such as in embedded systems and operating system kernels.
Bubble sort and variants
Bubble sort, and variants such as the Comb sort and cocktail sort, are simple, highly inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of analysis, but they are rarely used in practice.
Bubble sort
Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm's average time and worst-case performance is O(n2), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if any number of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble sort's exchange will get them in order on the first pass, the second pass will find all elements in order, so the sort will take only 2n time.
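A bubble sort sketch in Python with the early-exit check that makes it finish quickly on nearly sorted input, as described above:

```python
def bubble_sort(a):
    """Bubble sort that stops as soon as a full pass makes no swaps."""
    n = len(a)
    while True:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1                            # the largest remaining element has bubbled up
        if not swapped:
            return a

print(bubble_sort([1, 0, 3, 2, 5, 4]))    # nearly sorted: finishes in two passes
```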
Comb sort
Comb sort is a relatively simple sorting algorithm based on bubble sort and originally designed by Włodzimierz Dobosiewicz in 1980. It was later rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine article published in April 1991. The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort.) It accomplishes this by initially swapping elements that are a certain distance from one another in the array, rather than only swapping elements if they are adjacent to one another, and then shrinking the chosen distance until it is operating as a normal bubble sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that swaps elements spaced a certain distance away from one another, comb sort can be thought of as the same generalization applied to bubble sort.
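A comb sort sketch in Python; the shrink factor of 1.3 is the commonly quoted value and is used here as an assumption.

```python
def comb_sort(a):
    """Comb sort: bubble sort with a shrinking comparison gap."""
    gap, shrink, swapped = len(a), 1.3, True
    while gap > 1 or swapped:
        gap = max(1, int(gap / shrink))
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a

print(comb_sort([8, 4, 1, 56, 3, -44, 23, -6, 28, 0]))
```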
Exchange sort
Exchange sort is sometimes confused with bubble sort, although the algorithms are in fact distinct. Exchange sort works by comparing the first element with all elements above it, swapping where needed, thereby guaranteeing that the first element is correct for the final sort order; it then proceeds to do the same for the second element, and so on. It lacks the advantage that bubble sort has of detecting in one pass if the list is already sorted, but it can be faster than bubble sort by a constant factor (one less pass over the data to be sorted; half as many total comparisons) in worst-case situations. Like any simple O(n2) sort it can be reasonably fast over very small data sets, though in general insertion sort will be faster.
Distribution sorts
Distribution sort refers to any sorting algorithm where data is distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution-based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.
Counting sort
Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used because S needs to be reasonably small for the algorithm to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as n increases. It also can be modified to provide stable behavior.
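A counting sort sketch in Python, assuming for simplicity that the set S is the integer range 0..max_value; the parameter names are illustrative.

```python
def counting_sort(a, max_value):
    """Counting sort for non-negative integers in [0, max_value]."""
    counts = [0] * (max_value + 1)
    for x in a:
        counts[x] += 1                    # one bin per possible value
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)           # emit each value as many times as it was seen
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1], max_value=8))
```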
Bucket sort
Bucket sort is a divide-and-conquer sorting algorithm that generalizes counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across all buckets.
Radix sort
Radix sort is an algorithm that sorts numbers by processing individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving their relative order using a stable sort. Then it sorts them by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves performance of radix sort significantly.
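An LSD radix sort sketch in Python for non-negative integers, using a stable bucket distribution per digit rather than an internal counting sort; the base and input values are illustrative.

```python
def radix_sort_lsd(a, base=10):
    """LSD radix sort: stable distribution by each digit, least significant first."""
    if not a:
        return a
    digit = 1
    while max(a) // digit > 0:
        buckets = [[] for _ in range(base)]
        for x in a:                       # stable: preserves order within each bucket
            buckets[(x // digit) % base].append(x)
        a = [x for bucket in buckets for x in bucket]
        digit *= base
    return a

print(radix_sort_lsd([170, 45, 75, 90, 802, 24, 2, 66]))
```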
Memory usage patterns and index sorting
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".
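A small "tag sort" sketch in Python: only an index of positions is sorted by the key field, while the (potentially large) records stay in place. The record layout is an illustrative assumption.

```python
records = [{"key": 42, "payload": "..."},
           {"key": 7,  "payload": "..."},
           {"key": 19, "payload": "..."}]

# Sort an index into the records by the small key field.
index = sorted(range(len(records)), key=lambda i: records[i]["key"])
print(index)                              # [1, 2, 0]
for i in index:                           # one pass over the index reads records in key order
    print(records[i]["key"])
```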
Another technique for overcoming the memory-size problem is using external sorting, for example, one of the ways is to combine two algorithms in a way that takes advantage of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted using an efficient algorithm (such as quicksort), and the results merged using a k-way merge similar to that used in merge sort. This is faster than performing either merge sort or quicksort over the entire list.
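A toy external-sort sketch in Python along these lines: sort chunk-sized runs, write them to temporary files, then k-way merge the runs with heapq.merge. The chunk size and file handling are simplified assumptions for illustration only.

```python
import heapq, os, tempfile

def external_sort(values, chunk_size=4):
    """Sort data in chunk-sized runs on disk, then k-way merge the runs."""
    run_paths = []
    for start in range(0, len(values), chunk_size):
        run = sorted(values[start:start + chunk_size])        # this chunk fits "in RAM"
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(map(str, run)))
        run_paths.append(path)

    def read_run(path):
        with open(path) as f:
            for line in f:
                yield int(line)

    merged = list(heapq.merge(*(read_run(p) for p in run_paths)))  # k-way merge
    for p in run_paths:
        os.unlink(p)                                                # clean up the run files
    return merged

print(external_sort([9, 1, 7, 3, 8, 2, 6, 5, 4]))
```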
Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Related algorithms
Related problems include approximate sorting (sorting a sequence to within a certain amount of the correct order), partial sorting (sorting only the k smallest elements of a list, or finding the k smallest elements, but unordered) and selection (computing the kth smallest element). These can be solved inefficiently by a total sort, but more efficient algorithms exist, often derived by generalizing a sorting algorithm. The most notable example is quickselect, which is related to quicksort. Conversely, some sorting algorithms can be derived by repeated application of a selection algorithm; quicksort and quickselect can be seen as the same pivoting move, differing only in whether one recurses on both sides (quicksort, divide-and-conquer) or one side (quickselect, decrease-and-conquer).
A kind of opposite of a sorting algorithm is a shuffling algorithm. These are fundamentally different because they require a source of random numbers. Shuffling can also be implemented by a sorting algorithm, namely by a random sort: assigning a random number to each element of the list and then sorting based on the random numbers. This is generally not done in practice, however, and there is a well-known simple and efficient algorithm for shuffling: the Fisher–Yates shuffle.
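The Fisher–Yates shuffle is short enough to show in full; this Python sketch is illustrative.

```python
import random

def fisher_yates_shuffle(a):
    """In-place Fisher–Yates shuffle: every permutation is equally likely."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)          # pick from the not-yet-fixed prefix
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates_shuffle(list(range(10))))
```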
Sorting algorithms are ineffective for finding an order in many situations: when elements have no reliable comparison function (crowdsourced preferences such as voting systems), when comparisons are very costly (sports), or when it would be impossible to pairwise compare all elements for all criteria (search engines). In these cases, the problem is usually referred to as ranking and the goal is to find the "best" result for some criteria according to probabilities inferred from comparisons or rankings. A common example is in chess, where players are ranked with the Elo rating system, and rankings are determined by a tournament system instead of a sorting algorithm.
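As a concrete example of ranking from pairwise comparisons, the basic Elo update used in chess can be sketched as follows; the K-factor of 32 is a common but arbitrary choice, and real rating systems add further refinements:

    def elo_update(rating_a, rating_b, score_a, k=32):
        """Update two Elo ratings after one game; score_a is 1, 0.5, or 0 for player A."""
        expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))  # predicted score for A
        new_a = rating_a + k * (score_a - expected_a)
        new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
        return new_a, new_b

    # An upset: a 1500-rated player beats a 1700-rated player and gains most of the K-factor.
    print(elo_update(1500, 1700, 1.0))   # approximately (1524.3, 1675.7)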
| Mathematics | Algorithms | null |
28445 | https://en.wikipedia.org/wiki/Sleep%20apnea | Sleep apnea | Sleep apnea (sleep apnoea or sleep apnœa in British English) is a sleep-related breathing disorder in which repetitive pauses in breathing, periods of shallow breathing, or collapse of the upper airway during sleep results in poor ventilation and sleep disruption. Each pause in breathing can last for a few seconds to a few minutes and often occurs many times a night. A choking or snorting sound may occur as breathing resumes. Common symptoms include daytime sleepiness, snoring, and non-restorative sleep despite adequate sleep time. Because the disorder disrupts normal sleep, those affected may experience sleepiness or feel tired during the day. It is often a chronic condition.
Sleep apnea may be categorized as obstructive sleep apnea (OSA), in which breathing is interrupted by a blockage of air flow, central sleep apnea (CSA), in which the regular unconscious drive to breathe simply stops, or a combination of the two. OSA is the most common form. OSA has four key contributors: a narrow, crowded, or collapsible upper airway; ineffective pharyngeal dilator muscle function during sleep; premature awakening in response to mild airway narrowing during sleep; and unstable control of breathing (high loop gain). In CSA, the basic neurological controls for breathing rate malfunction and fail to give the signal to inhale, causing the individual to miss one or more cycles of breathing. If the pause in breathing is long enough, the percentage of oxygen in the circulation can drop to a lower than normal level (hypoxemia) and the concentration of carbon dioxide can build to a higher than normal level (hypercapnia). In turn, these conditions of hypoxia and hypercapnia will trigger additional effects on the body such as Cheyne-Stokes respiration.
Some people with sleep apnea are unaware they have the condition. In many cases it is first observed by a family member. An in-lab overnight sleep study is the preferred method for diagnosing sleep apnea. In the case of OSA, the measure that determines disease severity and guides the treatment plan is the apnea-hypopnea index (AHI). It is calculated by totaling all pauses in breathing and periods of shallow breathing lasting longer than 10 seconds and dividing the sum by the total hours of recorded sleep. For CSA, by contrast, the degree of respiratory effort, measured by esophageal pressure or displacement of the thoracic or abdominal cavity, is an important factor distinguishing it from OSA.
A systemic disorder, sleep apnea is associated with a wide array of effects, including increased risk of car accidents, hypertension, cardiovascular disease, myocardial infarction, stroke, atrial fibrillation, insulin resistance, higher incidence of cancer, and neurodegeneration. Further research is being conducted on the potential of using biomarkers to understand which chronic diseases are associated with sleep apnea on an individual basis.
Treatment may include lifestyle changes, mouthpieces, breathing devices, and surgery. Effective lifestyle changes may include avoiding alcohol, losing weight, smoking cessation, and sleeping on one's side. Breathing devices include the use of a CPAP machine. With proper use, CPAP improves outcomes. Evidence suggests that CPAP may improve sensitivity to insulin, blood pressure, and sleepiness. Long term compliance, however, is an issue with more than half of people not appropriately using the device. In 2017, only 15% of potential patients in developed countries used CPAP machines, while in developing countries well under 1% of potential patients used CPAP. Without treatment, sleep apnea may increase the risk of heart attack, stroke, diabetes, heart failure, irregular heartbeat, obesity, and motor vehicle collisions.
OSA is a common sleep disorder. A large analysis in 2019 of the estimated prevalence of OSA found that OSA affects 936 million to 1 billion people between the ages of 30 and 69 globally, or roughly 1 in 10 people, and up to 30% of the elderly. Sleep apnea is somewhat more common in men than women, with roughly a 2:1 ratio of men to women, and prevalence generally increases with older age and obesity. Other risk factors include being overweight, a family history of the condition, allergies, and enlarged tonsils.
Signs and symptoms
The typical screening process for sleep apnea involves asking patients about common symptoms such as snoring, witnessed pauses in breathing during sleep and excessive daytime sleepiness. There is a wide range in presenting symptoms in patients with sleep apnea, from being asymptomatic to falling asleep while driving. Due to this wide range in clinical presentation, some people are not aware that they have sleep apnea and are either misdiagnosed or ignore the symptoms altogether. A current area requiring further study involves identifying different subtypes of sleep apnea based on patients who tend to present with different clusters or groupings of particular symptoms.
OSA may increase risk for driving accidents and work-related accidents due to sleep fragmentation from repeated arousals during sleep. If OSA is not treated, the resulting excessive daytime sleepiness and oxidative stress from repeated drops in oxygen saturation place people at increased risk of other systemic health problems, such as diabetes, hypertension, or cardiovascular disease. Subtle manifestations of sleep apnea may include treatment-refractory hypertension and cardiac arrhythmias, and over time, as the disease progresses, more obvious symptoms may become apparent. Due to the disruption in daytime cognitive state, behavioral effects may be present. These can include moodiness, belligerence, as well as a decrease in attentiveness and energy. These effects may become intractable, leading to depression.
Risk factors
Obstructive sleep apnea can affect people regardless of sex, race, or age. However, risk factors include:
male gender
obesity
age over 40
large neck circumference
enlarged tonsils or tongue
narrow upper jaw
small lower jaw
tongue fat/tongue scalloping
a family history of sleep apnea
endocrine disorders such as hypothyroidism
lifestyle habits such as smoking or drinking alcohol
Central sleep apnea is more often associated with any of the following risk factors:
transition period from wakefulness to non-REM sleep
older age
heart failure
atrial fibrillation
stroke
spinal cord injury
Mechanism
Obstructive sleep apnea
The causes of obstructive sleep apnea are complex and individualized, but typical risk factors include narrow pharyngeal anatomy and craniofacial structure. When anatomical risk factors are combined with non-anatomical contributors such as an ineffective pharyngeal dilator muscle function during sleep, unstable control of breathing (high loop gain), and premature awakening to mild airway narrowing, the severity of the OSA rapidly increases as more factors are present. When breathing is paused due to upper airway obstruction, carbon dioxide builds up in the bloodstream. Chemoreceptors in the bloodstream note the high carbon dioxide levels. The brain is signaled to awaken the person, which clears the airway and allows breathing to resume. Breathing normally will restore oxygen levels and the person will fall asleep again. This carbon dioxide build-up may be due to the decrease of output of the brainstem regulating the chest wall or pharyngeal muscles, which causes the pharynx to collapse. As a result, people with sleep apnea experience reduced or no slow-wave sleep and spend less time in REM sleep.
Central sleep apnea
There are two main mechanisms that drive the disease process of CSA: sleep-related hypoventilation and post-hyperventilation hypocapnia. The most common cause of CSA is post-hyperventilation hypocapnia secondary to heart failure, which arises from brief failures of the ventilatory control system even though alveolar ventilation is otherwise normal. In contrast, sleep-related hypoventilation occurs when there is a malfunction of the brain's drive to breathe. The underlying causes of the loss of the wakefulness drive to breathe encompass a broad set of diseases, from strokes to severe kyphoscoliosis.
Complications
OSA is a serious medical condition with systemic effects; patients with untreated OSA have a greater mortality risk from cardiovascular disease than those undergoing appropriate treatment. Other complications include hypertension, congestive heart failure, atrial fibrillation, coronary artery disease, stroke, and type 2 diabetes. Daytime fatigue and sleepiness, common symptoms of sleep apnea, are also an important public health concern because of transportation crashes caused by drowsiness. OSA may also be a risk factor for COVID-19, as people with OSA have a higher risk of developing severe complications of COVID-19.
Alzheimer's disease and severe obstructive sleep apnea are connected because there is an increase in the protein beta-amyloid as well as white-matter damage. These are the main indicators of Alzheimer's, which in this case comes from the lack of proper rest or poorer sleep efficiency resulting in neurodegeneration. Having sleep apnea in mid-life brings a higher likelihood of developing Alzheimer's in older age, and people with Alzheimer's are in turn more likely to have sleep apnea; this is demonstrated by cases of sleep apnea even being misdiagnosed as dementia. With CPAP treatment, this amyloid-related risk factor is reversible; treatment usually restores brain structure and lessens cognitive impairment. Evidence continues to accumulate for an association between BMI and Alzheimer's, including an increased risk of developing Alzheimer's in women aged 70 and above with a higher BMI. While continuous positive airway pressure (CPAP) was not found to significantly improve cognitive performance, it was found to benefit other symptoms such as depression and anxiety.
Diagnosis
Classification
There are three types of sleep apnea: OSA accounts for 84% of cases, CSA for 0.9%, and mixed apnea for 15%.
Obstructive sleep apnea
In a systematic review of published evidence, the United States Preventive Services Task Force in 2017 concluded that there was uncertainty about the accuracy or clinical utility of all potential screening tools for OSA, and that the evidence was insufficient to assess the balance of benefits and harms of screening for OSA in asymptomatic adults.
The diagnosis of OSA syndrome is made when the patient shows recurrent episodes of partial or complete collapse of the upper airway during sleep resulting in apneas or hypopneas, respectively. Criteria defining an apnea or a hypopnea vary. The American Academy of Sleep Medicine (AASM) defines an apnea as a reduction in airflow of ≥ 90% lasting at least 10 seconds. A hypopnea is defined as a reduction in airflow of ≥ 30% lasting at least 10 seconds and associated with a ≥ 4% decrease in pulse oxygenation, or as a ≥ 30% reduction in airflow lasting at least 10 seconds and associated either with a ≥ 3% decrease in pulse oxygenation or with an arousal.
To define the severity of the condition, the Apnea-Hypopnea Index (AHI) or the Respiratory Disturbance Index (RDI) is used. While the AHI measures the mean number of apneas and hypopneas per hour of sleep, the RDI adds to this measure the respiratory effort-related arousals (RERAs). The OSA syndrome is thus diagnosed if the AHI is > 5 episodes per hour and results in daytime sleepiness and fatigue, or when the RDI is ≥ 15 independently of the symptoms. According to the American Academy of Sleep Medicine, daytime sleepiness is classified as mild, moderate, or severe depending on its impact on social life. Daytime sleepiness can be assessed with the Epworth Sleepiness Scale (ESS), a self-reported questionnaire on the propensity to fall asleep or doze off during daytime. Screening tools for OSA itself comprise the STOP questionnaire, the Berlin questionnaire, and the STOP-BANG questionnaire, which has been reported to be a very powerful tool for detecting OSA.
Criteria
According to the International Classification of Sleep Disorders, there are four types of criteria. The first concerns sleep: excessive sleepiness, nonrestorative sleep, fatigue, or insomnia symptoms. The second and third criteria concern respiration: waking with breath holding, gasping, or choking; and snoring, breathing interruptions, or both during sleep. The last criterion concerns associated medical issues such as hypertension, coronary artery disease, stroke, heart failure, atrial fibrillation, type 2 diabetes mellitus, mood disorder, or cognitive impairment. Two levels of severity are distinguished: the first is determined by a polysomnography or home sleep apnea test demonstrating 5 or more predominantly obstructive respiratory events per hour of sleep, and the higher level by 15 or more events. If the events are present fewer than 5 times per hour, no obstructive sleep apnea is diagnosed.
Considerable night-to-night variability further complicates the diagnosis of OSA. In unclear cases, multiple nights of testing might be required to achieve an accurate diagnosis. Since sequential nights of testing would be impractical and cost-prohibitive in the sleep lab, home sleep testing over multiple nights can be not only more useful but also more reflective of what typically happens each night.
Polysomnography
Nighttime in-laboratory Level 1 polysomnography (PSG) is the gold standard test for diagnosis. Patients are monitored with EEG leads, pulse oximetry, temperature and pressure sensors to detect nasal and oral airflow, respiratory impedance plethysmography or similar resistance belts around the chest and abdomen to detect motion, an ECG lead, and EMG sensors to detect muscle contraction in the chin, chest, and legs. A hypopnea can be scored based on one of two criteria: either a reduction in airflow of at least 30% for more than 10 seconds associated with at least 4% oxygen desaturation, or a reduction in airflow of at least 30% for more than 10 seconds associated with at least 3% oxygen desaturation or an arousal from sleep on EEG.
An "event" can be either an apnea, characterized by complete cessation of airflow for at least 10 seconds, or a hypopnea in which airflow decreases by 50 percent for 10 seconds or decreases by 30 percent if there is an associated decrease in the oxygen saturation or an arousal from sleep. To grade the severity of sleep apnea, the number of events per hour is reported as the apnea-hypopnea index (AHI). An AHI of less than 5 is considered normal. An AHI of 5–15 is mild; 15–30 is moderate, and more than 30 events per hour characterizes severe sleep apnea.
Central sleep apnea
The diagnosis of CSA syndrome is made when the presence of at least 5 central apnea events occur per hour. There are multiple mechanisms that drive the apnea events. In individuals with heart failure with Cheyne-Stokes respiration, the brain's respiratory control centers are imbalanced during sleep. This results in ventilatory instability, caused by chemoreceptors that are hyperresponsive to CO2 fluctuations in the blood, resulting in high respiratory drive that leads to apnea. Another common mechanism that causes CSA is the loss of the brain's wakefulness drive to breathe.
CSA is organized into 6 individual syndromes: Cheyne-Stokes respiration, Complex sleep apnea, Primary CSA, High altitude periodic breathing, CSA from medication, CSA from comorbidity. Like in OSA, nocturnal polysomnography is the mainstay of diagnosis for CSA. The degree of respiratory effort, measured by esophageal pressure or displacement of the thoracic or abdominal cavity, is an important distinguishing factor between OSA and CSA.
Mixed apnea
Some people with sleep apnea have a combination of both types; its prevalence ranges from 0.56% to 18%. The condition, also called treatment-emergent central apnea, is generally detected when obstructive sleep apnea is treated with CPAP and central sleep apnea emerges. The exact mechanism of the loss of central respiratory drive during sleep in OSA is unknown but is most likely related to incorrect settings of the CPAP treatment and other medical conditions the person has.
Management
The treatment of obstructive sleep apnea is different than that of central sleep apnea. Treatment often starts with behavioral therapy and some people may be suggested to try a continuous positive airway pressure device. Many people are told to avoid alcohol, sleeping pills, and other sedatives, which can relax throat muscles, contributing to the collapse of the airway at night. The evidence supporting one treatment option compared to another for a particular person is not clear.
Changing sleep position
More than half of people with obstructive sleep apnea have some degree of positional obstructive sleep apnea, meaning that it gets worse when they sleep on their backs. Sleeping on their sides is an effective and cost-effective treatment for positional obstructive sleep apnea.
Continuous positive airway pressure
For moderate to severe sleep apnea, the most common treatment is the use of a continuous positive airway pressure (CPAP) or automatic positive airway pressure (APAP) device. These splint the person's airway open during sleep by means of pressurized air. The person typically wears a plastic facial mask, which is connected by a flexible tube to a small bedside CPAP machine.
Although CPAP therapy is effective in reducing apneas and less expensive than other treatments, some people find it uncomfortable. Some complain of feeling trapped, having chest discomfort, and skin or nose irritation. Other side effects may include dry mouth, dry nose, nosebleeds, sore lips and gums.
Whether or not it decreases the risk of death or heart disease is controversial, with some reviews finding benefit and others not. This variation across studies might be driven by low rates of compliance: analyses of those who use CPAP for at least four hours a night suggest a decrease in cardiovascular events.
Weight loss
Excess body weight is thought to be an important cause of sleep apnea. People who are overweight have more tissue in the back of the throat, which can restrict the airway, especially when sleeping. In weight loss studies of overweight individuals, those who lose weight show reduced apnea frequencies and an improved apnea–hypopnea index (AHI). Weight loss effective enough to relieve obesity hypoventilation syndrome (OHS) must be 25–30% of body weight. For some obese people, it can be difficult to achieve and maintain this result without bariatric surgery.
Rapid palatal expansion
In children, orthodontic treatment to expand the volume of the nasal airway, such as nonsurgical rapid palatal expansion, is common. The procedure has been found to significantly decrease the AHI and lead to long-term resolution of clinical symptoms.
Since the palatal suture is fused in adults, regular RPE using tooth-borne expanders cannot be performed. Mini-implant assisted rapid palatal expansion (MARPE) has been recently developed as a non-surgical option for the transverse expansion of the maxilla in adults. This method increases the volume of the nasal cavity and nasopharynx, leading to increased airflow and reduced respiratory arousals during sleep. Changes are permanent with minimal complications.
Surgery
Several surgical procedures (sleep surgery) are used to treat sleep apnea, although they are normally a third line of treatment for those who reject or are not helped by CPAP treatment or dental appliances. Surgical treatment for obstructive sleep apnea needs to be individualized to address all anatomical areas of obstruction.
Nasal obstruction
Often, correction of the nasal passages needs to be performed in addition to correction of the oropharynx passage. Septoplasty and turbinate surgery may improve the nasal airway, but have been found to be ineffective at reducing respiratory arousals during sleep.
Pharyngeal obstruction
Tonsillectomy and uvulopalatopharyngoplasty (UPPP or UP3) are available to address pharyngeal obstruction.
The "Pillar" device is a treatment for snoring and obstructive sleep apnea; it is thin, narrow strips of polyester. Three strips are inserted into the roof of the mouth (the soft palate) using a modified syringe and local anesthetic, in order to stiffen the soft palate. This procedure addresses one of the most common causes of snoring and sleep apnea — vibration or collapse of the soft palate. It was approved by the FDA for snoring in 2002 and for obstructive sleep apnea in 2004. A 2013 meta-analysis found that "the Pillar implant has a moderate effect on snoring and mild-to-moderate obstructive sleep apnea" and that more studies with high level of evidence were needed to arrive at a definite conclusion; it also found that the polyester strips work their way out of the soft palate in about 10% of the people in whom they are implanted.
Hypopharyngeal or base of tongue obstruction
Base-of-tongue advancement by means of advancing the genial tubercle of the mandible, tongue suspension, or hyoid suspension (aka hyoid myotomy and suspension or hyoid advancement) may help with the lower pharynx.
Other surgical options attempt to shrink or stiffen excess tissue in the mouth or throat; these procedures are performed at either a doctor's office or a hospital. Injections or other treatments, sometimes in a series, are used for shrinkage, while the insertion of a small piece of stiff plastic is used when the goal of surgery is to stiffen tissues.
Multi-level surgery
Maxillomandibular advancement (MMA) is considered the most effective surgery for people with sleep apnea, because it increases the posterior airway space (PAS). However, health professionals are often unsure who should be referred for surgery and when to do so: some factors in referral may include failed use of CPAP or other devices; anatomy which favors rather than impedes surgery; or significant craniofacial abnormalities which hinder device use.
Potential complications
Several inpatient and outpatient procedures use sedation. Many drugs and agents used during surgery to relieve pain and to depress consciousness remain in the body at low amounts for hours or even days afterwards. In an individual with either central, obstructive or mixed sleep apnea, these low doses may be enough to cause life-threatening irregularities in breathing or collapses in a patient's airways. Use of analgesics and sedatives in these patients postoperatively should therefore be minimized or avoided.
Surgery on the mouth and throat, as well as dental surgery and procedures, can result in postoperative swelling of the lining of the mouth and other areas that affect the airway. Even when the surgical procedure is designed to improve the airway, such as tonsillectomy and adenoidectomy or tongue reduction, swelling may negate some of the effects in the immediate postoperative period. Once the swelling resolves and the palate becomes tightened by postoperative scarring, however, the full benefit of the surgery may be noticed.
A person with sleep apnea undergoing any medical treatment must make sure their doctor and anesthetist are informed about the sleep apnea. Alternative and emergency procedures may be necessary to maintain the airway of sleep apnea patients.
Other
Neurostimulation
Diaphragm pacing, which involves the rhythmic application of electrical impulses to the diaphragm, has been used to treat central sleep apnea.
In April 2014, the U.S. Food and Drug Administration granted pre-market approval for use of an upper airway stimulation system in people who cannot use a continuous positive airway pressure device. The Inspire Upper Airway Stimulation system is a hypoglossal nerve stimulator that senses respiration and applies mild electrical stimulation during inspiration, which pushes the tongue slightly forward to open the airway.
Medications
There is currently insufficient evidence to recommend any medication for OSA. This may result in part because people with sleep apnea have tended to be treated as a single group in clinical trials. Identifying specific physiological factors underlying sleep apnea makes it possible to test drugs specific to those causal factors: airway narrowing, impaired muscle activity, low arousal threshold for waking, and unstable breathing control.
Those who experience low waking thresholds may benefit from eszopiclone, a sedative typically used to treat insomnia. The antidepressant desipramine may stimulate upper airway muscles and lessen pharyngeal collapsibility in people who have limited muscle function in their airways.
There is limited evidence for medication, but 2012 AASM guidelines suggested that acetazolamide "may be considered" for the treatment of central sleep apnea; zolpidem and triazolam may also be considered for the treatment of central sleep apnea, but "only if the patient does not have underlying risk factors for respiratory depression".
Low doses of oxygen are also used as a treatment for hypoxia but are discouraged due to side effects.
In December 2024, the FDA approved tirzepatide, an anti-diabetic and weight loss medication, as a component in the combination treatment of adults with obesity suffering from moderate to severe obstructive sleep apnea. Other components of the therapy are a reduced-calorie diet and increased physical activity.
Oral appliances
An oral appliance, often referred to as a mandibular advancement splint, is a custom-made mouthpiece that shifts the lower jaw forward and opens the bite slightly, opening up the airway. These devices can be fabricated by a general dentist. Oral appliance therapy (OAT) is usually successful in patients with mild to moderate obstructive sleep apnea. While CPAP is more effective for sleep apnea than oral appliances, oral appliances do improve sleepiness and quality of life and are often better tolerated than CPAP.
Nasal EPAP
Nasal EPAP is a bandage-like device placed over the nostrils that uses a person's own breathing to create positive airway pressure to prevent obstructed breathing.
Oral pressure therapy
Oral pressure therapy uses a device that creates a vacuum in the mouth, pulling the soft palate tissue forward. It has been found useful in about 25 to 37% of people.
Prognosis
Death could occur from untreated OSA due to lack of oxygen to the body.
There is increasing evidence that sleep apnea may lead to liver function impairment, particularly fatty liver diseases (see steatosis).
People with OSA have been shown to exhibit tissue loss in brain regions that help store memory, thus linking OSA with memory loss. Using magnetic resonance imaging (MRI), researchers discovered that people with sleep apnea have mammillary bodies that are about 20% smaller, particularly on the left side. One of the key investigators hypothesized that repeated drops in oxygen lead to this brain injury.
The immediate effects of central sleep apnea on the body depend on how long the failure to breathe endures. At worst, central sleep apnea may cause sudden death. Short of death, drops in blood oxygen may trigger seizures, even in the absence of epilepsy. In people with epilepsy, the hypoxia caused by apnea may trigger seizures that had previously been well controlled by medications. In other words, a seizure disorder may become unstable in the presence of sleep apnea. In adults with coronary artery disease, a severe drop in blood oxygen level can cause angina, arrhythmias, or heart attacks (myocardial infarction). Longstanding recurrent episodes of apnea, over months and years, may cause an increase in carbon dioxide levels that can change the pH of the blood enough to cause a respiratory acidosis.
Epidemiology
The Wisconsin Sleep Cohort Study estimated in 1993 that roughly one in every 15 Americans was affected by at least moderate sleep apnea. It also estimated that in middle-age as many as 9% of women and 24% of men were affected, undiagnosed and untreated.
The costs of untreated sleep apnea reach further than just health issues. It is estimated that in the U.S., the average untreated sleep apnea patient's annual health care costs are $1,336 more than those of an individual without sleep apnea. This may cause $3.4 billion/year in additional medical costs. Whether medical cost savings occur with treatment of sleep apnea remains to be determined.
Frequency and population
Sleep disorders including sleep apnea have become an important health issue in the United States. Twenty-two million Americans have been estimated to have sleep apnea, with 80% of moderate and severe OSA cases undiagnosed.
OSA can occur at any age, but it happens more frequently in men who are over 40 and overweight.
History
A type of CSA was described in the German myth of Ondine's curse where the person when asleep would forget to breathe. The clinical picture of this condition has long been recognized as a character trait, without an understanding of the disease process. The term "Pickwickian syndrome" that is sometimes used for the syndrome was coined by the famous early 20th-century physician William Osler, who must have been a reader of Charles Dickens. The description of Joe, "the fat boy" in Dickens's novel The Pickwick Papers, is an accurate clinical picture of an adult with obstructive sleep apnea syndrome.
The early reports of obstructive sleep apnea in the medical literature described individuals who were very severely affected, often presenting with severe hypoxemia, hypercapnia and congestive heart failure.
Treatment
The management of obstructive sleep apnea was improved with the introduction of continuous positive airway pressure (CPAP), first described in 1981 by Colin Sullivan and associates in Sydney, Australia. The first models were bulky and noisy, but the design was rapidly improved and by the late 1980s, CPAP was widely adopted. The availability of an effective treatment stimulated an aggressive search for affected individuals and led to the establishment of hundreds of specialized clinics dedicated to the diagnosis and treatment of sleep disorders. Though many types of sleep problems are recognized, the vast majority of patients attending these centers have sleep-disordered breathing. Sleep apnea awareness day is 18 April in recognition of Colin Sullivan.
| Biology and health sciences | Mental disorders | Health |
28469 | https://en.wikipedia.org/wiki/Stimulated%20emission | Stimulated emission | Stimulated emission is the process by which an incoming photon of a specific frequency can interact with an excited atomic electron (or other excited molecular state), causing it to drop to a lower energy level. The liberated energy transfers to the electromagnetic field, creating a new photon with a frequency, polarization, and direction of travel that are all identical to the photons of the incident wave. This is in contrast to spontaneous emission, which occurs at a characteristic rate for each of the atoms/oscillators in the upper energy state regardless of the external electromagnetic field.
According to the American Physical Society, the first person to correctly predict the phenomenon of stimulated emission was Albert Einstein in a series of papers starting in 1916, culminating in what is now called the Einstein B Coefficient. Einstein's work became the theoretical foundation of the maser and the laser. The process is identical in form to atomic absorption in which the energy of an absorbed photon causes an identical but opposite atomic transition: from the lower level to a higher energy level. In normal media at thermal equilibrium, absorption exceeds stimulated emission because there are more electrons in the lower energy states than in the higher energy states. However, when a population inversion is present, the rate of stimulated emission exceeds that of absorption, and a net optical amplification can be achieved. Such a gain medium, along with an optical resonator, is at the heart of a laser or maser.
Lacking a feedback mechanism, laser amplifiers and superluminescent sources also function on the basis of stimulated emission.
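The competition between stimulated emission and absorption described above can be made concrete with the Einstein coefficients. The sketch below assumes a simple two-level system with non-degenerate levels (so B12 = B21) and uses arbitrary numerical values purely for illustration:

    import math

    h = 6.62607015e-34      # Planck constant, J*s
    c = 2.99792458e8        # speed of light, m/s

    def net_stimulated_rate(n_upper, n_lower, b_coefficient, spectral_energy_density):
        """Stimulated emission minus absorption per unit volume for non-degenerate levels.
        Positive (net amplification) only under a population inversion, n_upper > n_lower."""
        return b_coefficient * spectral_energy_density * (n_upper - n_lower)

    def a_from_b(b_coefficient, frequency):
        """Einstein relation A21 = (8 pi h nu^3 / c^3) * B21 linking the spontaneous and
        stimulated emission coefficients."""
        return 8 * math.pi * h * frequency**3 / c**3 * b_coefficient

    # Thermal populations (more atoms in the lower level) give net absorption;
    # an inverted population gives net gain.  All numbers here are arbitrary.
    print(net_stimulated_rate(1e15, 5e15, 1e20, 1e-14) < 0)   # True: net absorption
    print(net_stimulated_rate(5e15, 1e15, 1e20, 1e-14) > 0)   # True: net amplification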
Overview
Electrons and their interactions with electromagnetic fields are important in our understanding of chemistry and physics.
In the classical view, the energy of an electron orbiting an atomic nucleus is larger for orbits further from the nucleus. However, quantum mechanical effects force electrons to take on discrete positions in orbitals. Thus, electrons are found in specific energy levels of an atom; consider two such levels.
When an electron absorbs energy either from light (photons) or heat (phonons), it receives that incident quantum of energy. But transitions are only allowed between discrete energy levels such as these two.
This leads to emission lines and absorption lines.
When an electron is excited from a lower to a higher energy level, it will not stay that way forever.
An electron in an excited state may decay to a lower energy state which is not occupied, according to a particular time constant characterizing that transition. When such an electron decays without external influence, emitting a photon, that is called "spontaneous emission". The phase and direction associated with the photon that is emitted is random. A material with many atoms in such an excited state may thus result in radiation which has a narrow spectrum (centered around one wavelength of light), but the individual photons would have no common phase relationship and would also emanate in random directions. This is the mechanism of fluorescence and thermal emission.
An external electromagnetic field at a frequency associated with a transition can affect the quantum mechanical state of the atom without being absorbed. As the electron in the atom makes a transition between two stationary states (neither of which shows a dipole field), it enters a transition state which does have a dipole field, and which acts like a small electric dipole, and this dipole oscillates at a characteristic frequency. In response to the external electric field at this frequency, the probability of the electron entering this transition state is greatly increased. Thus, the rate of transitions between two stationary states is increased beyond that of spontaneous emission. A transition from the higher to a lower energy state produces an additional photon with the same phase and direction as the incident photon; this is the process of stimulated emission.
History
Stimulated emission was a theoretical discovery by Albert Einstein within the framework of the old quantum theory, wherein the emission is described in terms of photons that are the quanta of the EM field. Stimulated emission can also occur in classical models, without reference to photons or quantum-mechanics. ( | Physical sciences | Atomic physics | Physics |
28481 | https://en.wikipedia.org/wiki/Statistical%20mechanics | Statistical mechanics | In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in the fields of physics, biology, chemistry, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.
Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.
While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.
History
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:
Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework classical mechanics, however they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.
Principles: mechanics and ensembles
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:
The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics)
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics bridges this disconnect between the laws of mechanics and the practical experience of incomplete knowledge by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix.
As is usual for probabilities, the ensemble can be interpreted in different ways:
an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensembles is those that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
Statistical thermodynamics
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.
Fundamental postulate
A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that
For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).
Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with a small set of additional postulates, the third of which can be replaced by an alternative formulation.
Three thermodynamic ensembles
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
Microcanonical ensemble
describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble
describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble
describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
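As a minimal concrete illustration of the canonical ensemble described above, the sketch below computes the Boltzmann probabilities and mean energy of a two-level system held at a fixed bath temperature; the 25 meV gap and the 300 K temperature are arbitrary illustrative values:

    import math

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    eV = 1.602176634e-19    # electron volt, J

    def canonical_probabilities(energies, temperature):
        """Boltzmann weights p_i = exp(-E_i / kT) / Z for a system at bath temperature T."""
        beta = 1.0 / (k_B * temperature)
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)                      # the partition function
        return [w / z for w in weights]

    energies = [0.0, 0.025 * eV]              # a two-level system with a 25 meV gap
    probs = canonical_probabilities(energies, 300.0)
    mean_energy = sum(p * e for p, e in zip(probs, energies))
    print(probs, mean_energy / eV)            # roughly [0.72, 0.28] and 0.007 eV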
For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. The Gibbs theorem about equivalence of ensembles was developed into the theory of concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
Important cases where the thermodynamic ensembles do not give identical results include:
Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.
Calculation methods
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
Exact
There are some cases which allow exact solutions.
For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integral over all phase space in classical mechanics).
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models. Some examples include the Bethe ansatz, the square-lattice Ising model in zero field, and the hard hexagon model.
Monte Carlo
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
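A minimal sketch of the Metropolis rule (one of the methods listed below) applied to a one-dimensional Ising ring, sampling the canonical ensemble, is given here; the system size, coupling, temperature, and sweep counts are arbitrary illustrative choices:

    import math
    import random

    def metropolis_ising_1d(n=100, beta=0.5, j=1.0, sweeps=2000):
        """Sample spin configurations of a 1D Ising ring with the Metropolis rule
        and return the average total energy over the second half of the run."""
        spins = [random.choice((-1, 1)) for _ in range(n)]
        energy_samples = []
        for sweep in range(sweeps):
            for _ in range(n):
                i = random.randrange(n)
                # Energy change if spin i is flipped (periodic boundary conditions).
                delta_e = 2 * j * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
                # Metropolis acceptance: always accept downhill moves, accept uphill
                # moves with probability exp(-beta * delta_e).
                if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
                    spins[i] = -spins[i]
            if sweep >= sweeps // 2:          # discard the first half as burn-in
                e = -j * sum(spins[k] * spins[(k + 1) % n] for k in range(n))
                energy_samples.append(e)
        return sum(energy_samples) / len(energy_samples)

    # Mean energy per spin; for a long 1D Ising ring this approaches -J * tanh(beta * J).
    print(metropolis_ising_1d() / 100, -math.tanh(0.5))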
The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.
Path integral Monte Carlo, also used to sample the canonical ensemble.
Other
For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
Non-equilibrium statistical mechanics
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
Stochastic methods
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
Near-equilibrium methods
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
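A small static instance of this fluctuation-response connection (not the full dynamical theorem) is the canonical-ensemble identity relating heat capacity to equilibrium energy fluctuations, C = (⟨E²⟩ - ⟨E⟩²) / (k_B T²); the sketch below checks it numerically for a made-up three-level system, working in units where k_B = 1:

    import math

    k_B = 1.0     # work in units where the Boltzmann constant is 1

    def energy_moments(energies, temperature):
        """Mean and variance of the energy in the canonical ensemble at temperature T."""
        weights = [math.exp(-e / (k_B * temperature)) for e in energies]
        z = sum(weights)
        mean = sum(w * e for w, e in zip(weights, energies)) / z
        mean_sq = sum(w * e * e for w, e in zip(weights, energies)) / z
        return mean, mean_sq - mean ** 2

    levels = [0.0, 1.0, 3.0]      # arbitrary three-level toy system
    temperature = 0.8

    # "Response" side: heat capacity from a small finite temperature difference.
    dt = 1e-4
    e_plus, _ = energy_moments(levels, temperature + dt)
    e_minus, _ = energy_moments(levels, temperature - dt)
    heat_capacity = (e_plus - e_minus) / (2 * dt)

    # "Fluctuation" side: equilibrium energy variance divided by k_B T^2.
    _, energy_variance = energy_moments(levels, temperature)
    print(heat_capacity, energy_variance / (k_B * temperature ** 2))   # the two agree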
A few of the theoretical tools used to make this connection include:
Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism
Hybrid methods
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to computing quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with stochastic dephasing from interactions between the electrons included via the Keldysh method.
Applications
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and non-equilibrium economics.
Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasma, and the structural features of liquids. It underlies modern astrophysics and the virial theorem. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
Quantum statistical mechanics
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
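A small numerical sketch of these defining properties, using a hypothetical two-level system rather than anything from the text: build the canonical state exp(−H/kT)/Z, check that it is self-adjoint, non-negative and of trace one, and use it for an ensemble average.
```python
import numpy as np

# Illustrative two-level system (arbitrary energies), in units with k_B = 1;
# rho plays the role of the density operator S described above.
H = np.diag([0.0, 1.0])          # hypothetical Hamiltonian
beta = 2.0                       # inverse temperature

rho = np.diag(np.exp(-beta * np.diag(H)))   # unnormalized canonical state exp(-beta H)
rho = rho / np.trace(rho)                   # normalize so that Tr(rho) = 1

print(np.allclose(rho, rho.conj().T))        # self-adjoint
print(np.all(np.linalg.eigvalsh(rho) >= 0))  # non-negative
print(np.trace(rho).real)                    # trace one

# The ensemble average of an observable A is Tr(rho A); here A is the energy itself.
A = H
print(np.trace(rho @ A).real)                # mean energy of the canonical ensemble
```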
| Physical sciences | Statistical mechanics | null |
28483 | https://en.wikipedia.org/wiki/Solstice | Solstice | A solstice is the time when the Sun reaches its most northerly or southerly excursion relative to the celestial equator on the celestial sphere. Two solstices occur annually, around 20–22 June and 20–22 December. In many countries, the seasons of the year are defined by reference to the solstices and the equinoxes.
The term solstice can also be used in a broader sense, as the day when this occurs. For locations not too close to the equator or the poles, the dates with the longest and shortest periods of daylight are the summer and winter solstices, respectively. Terms with no ambiguity as to which hemisphere is the context are "June solstice" and "December solstice", referring to the months in which they take place every year.
The word solstice is derived from the Latin sol ("sun") and sistere ("to stand still"), because at the solstices, the Sun's declination appears to "stand still"; that is, the seasonal movement of the Sun's daily path (as seen from Earth) pauses at a northern or southern limit before reversing direction.
Definitions and frames of reference
For an observer at the North Pole, the Sun reaches the highest position in the sky once a year in June. The day this occurs is called the June solstice day. Similarly, for an observer on the South Pole, the Sun reaches the highest position on the December solstice day. When it is the summer solstice at one Pole, it is the winter solstice on the other. The Sun's westerly motion never ceases as Earth is continually in rotation. However, the Sun's motion in declination (i.e. vertically) comes to a stop, before reversing, at the moment of solstice. In that sense, solstice means "sun-standing".
This modern scientific word descends from a Latin scientific word in use in the late Roman Republic of the 1st century BC: solstitium. Pliny uses it a number of times in his Natural History with a meaning similar to the one it has today. It contains two Latin-language morphemes, sol, "sun", and -stitium, "stoppage". The Romans used "standing" to refer to a component of the relative velocity of the Sun as it is observed in the sky. Relative velocity is the motion of an object from the point of view of an observer in a frame of reference. From a fixed position on the ground, the Sun appears to orbit around Earth.
To an observer in an inertial frame of reference, planet Earth is seen to rotate about an axis and orbit around the Sun in an elliptical path with the Sun at one focus. Earth's axis is tilted with respect to the plane of Earth's orbit and this axis maintains a position that changes little with respect to the background of stars. An observer on Earth therefore sees a solar path that is the result of both rotation and revolution.
The component of the Sun's motion seen by an earthbound observer caused by the revolution of the tilted axis—which, keeping the same angle in space, is oriented toward or away from the Sun—is an observed daily increment (and lateral offset) of the elevation of the Sun at noon for approximately six months and observed daily decrement for the remaining six months. At maximum or minimum elevation, the relative yearly motion of the Sun perpendicular to the horizon stops and reverses direction.
Outside of the tropics, the maximum elevation occurs at the summer solstice and the minimum at the winter solstice. The path of the Sun, or ecliptic, sweeps north and south between the northern and southern hemispheres. The lengths of time when the sun is up are longer around the summer solstice and shorter around the winter solstice, except near the equator. When the Sun's path crosses the equator, the nights at latitudes +L° and −L° are of equal length. This is known as an equinox. There are two solstices and two equinoxes in a tropical year.
Because of the variation in the rate at which the sun's right ascension changes, the days of longest and shortest daylight do not coincide with the solstices for locations very close to the equator. At the equator, the longest day is around 23 December and the shortest around 16 September (see graph). Inside the Arctic or Antarctic Circles the sun is up all the time for days or even months.
Relationship to seasons
The seasons occur because the Earth's axis of rotation is not perpendicular to its orbital plane (the plane of the ecliptic) but currently makes an angle of about 23.44° (called the obliquity of the ecliptic), and because the axis keeps its orientation with respect to an inertial frame of reference. As a consequence, for half the year the Northern Hemisphere is inclined toward the Sun while for the other half year the Southern Hemisphere has this distinction. The two moments when the inclination of Earth's rotational axis has maximum effect are the solstices.
At the June solstice the subsolar point is further north than any other time: at latitude 23.44° north, known as the Tropic of Cancer. Similarly at the December solstice the subsolar point is further south than any other time: at latitude 23.44° south, known as the Tropic of Capricorn. The subsolar point will cross every latitude between these two extremes exactly twice per year.
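A rough sketch of that annual excursion of the subsolar point, using a common sinusoidal approximation that ignores orbital eccentricity and assumes a fixed 23.44° obliquity (the 10-day offset from the December solstice to 1 January is itself approximate, not an ephemeris value):
```python
import math

# Rough approximation of the subsolar latitude (solar declination) over a year.
# Ignores orbital eccentricity; uses a fixed obliquity of 23.44 degrees and an
# approximate 10-day offset between the December solstice and 1 January.
def subsolar_latitude(day_of_year):
    return -23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365.25)

for d in (1, 80, 172, 266, 355):          # sample days through the year
    print(d, round(subsolar_latitude(d), 2))
# Near day 172 (around the June solstice) the value peaks near +23.44 (Tropic of
# Cancer); near day 355 (around the December solstice) it reaches about -23.44
# (Tropic of Capricorn); every latitude in between is crossed twice per year.
```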
Also during the June solstice, places on the Arctic Circle (latitude 66.56° north) will see the Sun just on the horizon during midnight, and all places north of it will see the Sun above the horizon for 24 hours. That is the midnight sun or midsummer-night sun or polar day. On the other hand, places on the Antarctic Circle (latitude 66.56° south) will see the Sun just on the horizon during midday, and all places south of it will not see the Sun above the horizon at any time of the day. That is the polar night. During the December solstice, the effects on both hemispheres are just the opposite. During the polar night, polar sea ice re-grows annually because of the lack of sunlight on the air above and the surrounding sea. The warmest and coldest periods of the year in temperate regions are offset by about one month from the solstices, delayed by the Earth's thermal inertia.
Cultural aspects
Ancient Greek names and concepts
The concept of the solstices was embedded in ancient Greek celestial navigation. As soon as they discovered that the Earth was spherical, they devised the concept of the celestial sphere, an imaginary spherical surface rotating with the heavenly bodies (ouranioi) fixed in it (the modern one does not rotate, but the stars in it do). As long as no assumptions are made concerning the distances of those bodies from Earth or from each other, the sphere can be accepted as real and is in fact still in use. The ancient Greeks used the term "ηλιοστάσιο" (heliostāsio), meaning "stand of the Sun".
The stars move across the inner surface of the celestial sphere along the circumferences of circles in parallel planes perpendicular to the Earth's axis extended indefinitely into the heavens and intersecting the celestial sphere in a celestial pole. The Sun and the planets do not move in these parallel paths but along another circle, the ecliptic, whose plane is at an angle, the obliquity of the ecliptic, to the axis, bringing the Sun and planets across the paths of and in among the stars.
Cleomedes states: The band of the Zodiac (zōdiakos kuklos, "zodiacal circle") is at an oblique angle (loksos) because it is positioned between the tropical circles and equinoctial circle touching each of the tropical circles at one point ... This Zodiac has a determinable width (set at 8° today) ... that is why it is described by three circles: the central one is called "heliacal" (hēliakos, "of the sun").
The term heliacal circle is used for the ecliptic, which is in the center of the zodiacal circle, conceived as a band including the noted constellations named on mythical themes. Other authors use Zodiac to mean ecliptic, which first appears in a gloss of unknown author in a passage of Cleomedes where he is explaining that the Moon is in the zodiacal circle as well and periodically crosses the path of the Sun. As some of these crossings represent eclipses of the Moon, the path of the Sun is given a synonym, the ekleiptikos (kuklos) from ekleipsis, "eclipse".
English names
The two solstices can be distinguished by different pairs of names, depending on which feature one wants to stress.
Summer solstice and winter solstice are the most common names, referring to the seasons they are associated with. However, these can be ambiguous since the Northern Hemisphere's summer is the Southern Hemisphere's winter, and vice versa. The Latinate names estival solstice (summer) and hibernal solstice (winter) are sometimes used to the same effect, as are midsummer and midwinter.
June solstice and December solstice refer to the months of year in which they take place, with no ambiguity as to which hemisphere is the context. They are still not universal, however, as not all cultures use a solar-based calendar where the solstices occur every year in the same month (as they do not in the Islamic calendar and Hebrew calendar, for example).
Northern solstice and southern solstice indicate the hemisphere of the Sun's location. The northern solstice is in June, when the Sun is directly over the Tropic of Cancer in the Northern Hemisphere, and the southern solstice is in December, when the Sun is directly over the Tropic of Capricorn in the Southern Hemisphere. These terms can be used unambiguously for other planets.
First point of Cancer and first point of Capricorn refer to the astrological signs that the sun "is entering" (a system rooted in Roman Classical period dates). Due to the precession of the equinoxes, the constellations the sun appears in at solstices are currently Taurus in June and Sagittarius in December.
Solstice terms in East Asia
The traditional East Asian calendars divide a year into 24 solar terms (節氣). Xiàzhì (pīnyīn) or Geshi (rōmaji) () is the 10th solar term, and marks the summer solstice. It begins when the Sun reaches the celestial longitude of 90° (around 21 June) and ends when the Sun reaches the longitude of 105° (around 7 July). Xiàzhì more often refers in particular to the day when the Sun is exactly at the celestial longitude of 90°.
Dōngzhì (pīnyīn) or Tōji (rōmaji) () is the 22nd solar term, and marks the winter solstice. It begins when the Sun reaches the celestial longitude of 270° (around 23 December) and ends when the Sun reaches the longitude of 285° (around 5 January). Dōngzhì more often refers in particular to the day when the Sun is exactly at the celestial longitude of 270°.
The solstices (as well as the equinoxes) mark the middle of the seasons in East Asian calendars. Here, the Chinese character 至 means "extreme", so the terms for the solstices directly signify the summits of summer and winter.
Solstice celebrations
The term solstice can also be used in a wider sense, as the date (day) that such a passage happens. The solstices, together with the equinoxes, are connected with the seasons. In some languages they are considered to start or separate the seasons; in others they are considered to be centre points (in England, in the Northern Hemisphere, for example, the period around the northern solstice is known as midsummer). Midsummer's Day, defined as St. John's Day by the Christian Church, is 24 June, about three days after the solstice itself. Similarly, 25 December is the start of the Christmas celebration, and is the day the Sun begins to return to the Northern Hemisphere. The traditional British and Irish main rent and meeting days of the year, "the usual quarter days," were often those of the solstices and equinoxes.
Many cultures celebrate various combinations of the winter and summer solstices, the equinoxes, and the midpoints between them, leading to various holidays arising around these events. During the southern or winter solstice, Christmas is the most widespread contemporary holiday, while Yalda, Saturnalia, Karachun, Hanukkah, Kwanzaa, and Yule are also celebrated around this time. In East Asian cultures, the Dongzhi Festival is celebrated on the winter solstice. For the northern or summer solstice, Christian cultures celebrate the feast of St. John from June 23 to 24 (see St. John's Eve, Ivan Kupala Day), while Modern Pagans observe Midsummer, known as Litha among Wiccans. For the vernal (spring) equinox, several springtime festivals are celebrated, such as the Persian Nowruz, the observance in Judaism of Passover, the rites of Easter in most Christian churches, as well as the Wiccan Ostara. The autumnal equinox is associated with the Jewish holiday of Sukkot and the Wiccan Mabon.
In the southern tip of South America, the Mapuche people celebrate We Tripantu (the New Year) a few days after the northern solstice, on 24 June. Further north, the Atacama people formerly celebrated this date with a noise festival, to call the Sun back. Further east, the Aymara people celebrate their New Year on 21 June. A celebration occurs at sunrise, when the sun shines directly through the Gate of the Sun in Tiwanaku. Other Aymara New Year feasts occur throughout Bolivia, including at the site of El Fuerte de Samaipata.
In the Hindu calendar, two sidereal solstices are named Makara Sankranti which marks the start of Uttarayana and Karka Sankranti which marks the start of Dakshinayana. The former occurs around 14 January each year, while the latter occurs around 14 July each year. These mark the movement of the Sun along a sidereally fixed zodiac (precession is ignored) into Makara, the zodiacal sign which corresponds with Capricorn, and into Karka, the zodiacal sign which corresponds with Cancer, respectively.
The Amundsen–Scott South Pole Station holds a midwinter party every year on 21 June, celebrating that the Sun is at its lowest point and starting to come back.
The Fremont Solstice Parade takes place every summer solstice in Fremont, Seattle, Washington in the United States.
The reconstructed Cahokia Woodhenge, a large timber circle located at the Mississippian culture Cahokia archaeological site near Collinsville, Illinois, is the site of annual equinox and solstice sunrise observances. Out of respect for Native American beliefs these events do not feature ceremonies or rituals of any kind.
Solstice determination
Unlike the equinox, the time of the solstice is not easy to determine. The changes in solar declination become smaller as the Sun approaches its maximum or minimum declination. In the days before and after the solstice, the declination changes by less than 30 arcseconds per day, a small fraction of the angular size of the Sun and the equivalent of just 2 seconds of right ascension.
This difference is hardly detectable with indirect-viewing devices such as a sextant equipped with a vernier, and impossible with more traditional tools like a gnomon or an astrolabe. It is also hard to detect the changes in sunrise/sunset azimuth, because of changes in atmospheric refraction. Those accuracy issues render it impossible to determine the solstice day based on observations made within the 3 (or even 5) days surrounding the solstice without the use of more complex tools.
Accounts do not survive but Greek astronomers must have used an approximation method based on interpolation, which is still used by some amateurs. This method consists of recording the declination angle at noon during some days before and after the solstice, trying to find two separate days with the same declination. When those two days are found, the halfway time between both noons is estimated solstice time. An interval of 45 days has been postulated as the best one to achieve up to a quarter-day precision, in the solstice determination.
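A minimal sketch of that interpolation method on synthetic data (the toy declination model and the chosen reference declination are purely illustrative, not an ephemeris):
```python
# Sketch of the equal-declination method described above, on synthetic data.
# Noon declinations (degrees) are generated from a simple parabolic toy model of
# the Sun's declination near the June solstice; the "true" solstice is placed at
# day 10.3 purely for illustration.
true_solstice = 10.3
def noon_declination(day):
    return 23.44 - 0.0001 * (day - true_solstice) ** 2   # toy model, not an ephemeris

days = list(range(0, 22))                 # noon observations over about three weeks
decl = [noon_declination(d) for d in days]

# Pick a reference declination reached once before and once after the maximum,
# find both crossing times by linear interpolation, and average them.
ref = 23.437
def crossing(d0, d1, x0, x1):
    return d0 + (ref - x0) * (d1 - d0) / (x1 - x0)

rising = next(crossing(days[i], days[i+1], decl[i], decl[i+1])
              for i in range(len(days) - 1) if decl[i] < ref <= decl[i+1])
falling = next(crossing(days[i], days[i+1], decl[i], decl[i+1])
               for i in range(len(days) - 1) if decl[i] >= ref > decl[i+1])

print((rising + falling) / 2)             # estimated solstice time, close to day 10.3
```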
In 2012, the journal DIO found that accuracy of one or two hours with balanced errors can be attained by observing the Sun's equal altitudes about S = twenty degrees (or d = about 20 days) before and after the summer solstice because the average of the two times will be early by q arc minutes where q is (πe cosA)/3 times the square of S in degrees (e = earth orbit eccentricity, A = earth's perihelion or Sun's apogee), and the noise in the result will be about 41 hours divided by d if the eye's sharpness is taken as one arc minute.
Astronomical almanacs define the solstices as the moments when the Sun passes through the solstitial colure, i.e. the times when the apparent geocentric celestial longitude of the Sun is equal to 90° (June solstice) or 270° (December solstice). The date of each solstice varies from year to year and may occur a day earlier or later depending on the time zone. Because the Earth's orbit takes slightly longer than a calendar year of 365 days, the solstices occur slightly later each calendar year until a leap day re-aligns the calendar with the orbit. Thus the solstices always occur between June 20 and 22 and between December 20 and 23 in a four-year cycle, with the 21st and 22nd being the most common dates, as can be seen in the schedule at the start of the article.
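As a rough illustration of the almanac definition, a standard low-precision formula for the Sun's ecliptic longitude (good to roughly a hundredth of a degree, and omitting the further apparent-place corrections a real almanac applies) can be combined with bisection to find the moment the longitude crosses 90°:
```python
import math

# Rough sketch: locate the June solstice as the moment the Sun's ecliptic
# longitude reaches 90 degrees, using a low-precision solar position formula.
def solar_longitude(jd):
    n = jd - 2451545.0                       # days since J2000.0
    L = (280.460 + 0.9856474 * n) % 360.0    # mean longitude
    g = math.radians((357.528 + 0.9856003 * n) % 360.0)  # mean anomaly
    lam = L + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g)
    return lam % 360.0

# Bisection between two Julian dates that bracket the June 2024 solstice.
lo, hi = 2460480.0, 2460485.0                # roughly 18-23 June 2024
for _ in range(50):
    mid = (lo + hi) / 2
    if solar_longitude(mid) < 90.0:
        lo = mid
    else:
        hi = mid
print(mid)   # Julian date at which the longitude crosses 90 degrees
```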
In the constellations
Using the current official IAU constellation boundaries—and taking into account the variable precession speed and the rotation of the ecliptic—the solstices shift through the constellations as follows (expressed in astronomical year numbering in which the year 0 = 1 BC, −1 = 2 BC, etc.):
The northern solstice passed from Leo into Cancer in year −1458, passed into Gemini in year −10, passed into Taurus in December 1989, and is expected to pass into Aries in year 4609.
The southern solstice passed from Capricornus into Sagittarius in year −130, is expected to pass into Ophiuchus in year 2269, and is expected to pass into Scorpius in year 3597.
| Physical sciences | Celestial sphere | null |
28484 | https://en.wikipedia.org/wiki/Sputnik%201 | Sputnik 1 | Sputnik 1 (, , Satellite 1), sometimes referred to as simply Sputnik, was the first artificial Earth satellite. It was launched into an elliptical low Earth orbit by the Soviet Union on 4 October 1957 as part of the Soviet space program. It sent a radio signal back to Earth for three weeks before its three silver-zinc batteries became depleted. Aerodynamic drag caused it to fall back into the atmosphere on 4 January 1958.
It was a polished metal sphere in diameter with four external radio antennas to broadcast radio pulses. Its radio signal was easily detectable by amateur radio operators, and the 65° orbital inclination made its flight path cover virtually the entire inhabited Earth.
The satellite's success was unanticipated by the United States. This precipitated the American Sputnik crisis and triggered the Space Race. The launch was the beginning of a new era of political, military, technological, and scientific developments. The word sputnik is Russian for satellite when interpreted in an astronomical context; its other meanings are spouse or traveling companion.
Tracking and studying Sputnik 1 from Earth provided scientists with valuable information. The density of the upper atmosphere could be deduced from its drag on the orbit, and the propagation of its radio signals gave data about the ionosphere.
Sputnik 1 was launched during the International Geophysical Year from Site No.1/5, at the 5th Tyuratam range, in Kazakh SSR (now known as the Baikonur Cosmodrome). The satellite traveled at a peak speed of about , taking 96.20 minutes to complete each orbit. It transmitted on 20.005 and 40.002 MHz, which were monitored by radio operators throughout the world. The signals continued for 22 days until the transmitter batteries depleted on 26 October 1957. On 4 January 1958, after three months in orbit, Sputnik 1 burned up while reentering Earth's atmosphere, having completed 1,440 orbits of the Earth, and travelling a distance of approximately .
Etymology
, romanized as (), means 'Satellite-One'. The Russian word for satellite, , was coined in the 18th century by combining the prefix ('fellow') and ('traveler'), thereby meaning 'fellow-traveler', a meaning corresponding to the Latin root ('guard, attendant or companion'), which is the origin of English satellite.
In English, 'Sputnik' is widely recognized as a proper name; however, this is not the case in Russian. In the Russian language, sputnik is the general term for the artificial satellites of any country and the natural satellites of any planet. The incorrect attribution of 'Sputnik' as a proper name can be traced back to an article released by The New York Times on 6 October 1957, titled "Soviet 'Sputnik' Means A Traveler's Traveler". In the referenced article, the term 'Sputnik' was portrayed as bearing a poetic connotation arising from its linguistic origins. This connotation incorrectly indicated that it was bestowed with the specific proper name 'Fellow-Traveler-One', rather than being designated by the general term 'Satellite-One'. In Russian-language references, Sputnik 1 is recognized by the technical name of 'Satellite-One'.
Before the launch
Satellite construction project
On 17 December 1954, chief Soviet rocket scientist Sergei Korolev proposed a developmental plan for an artificial satellite to the Minister of the Defense Industry, Dimitri Ustinov. Korolev forwarded a report by Mikhail Tikhonravov, with an overview of similar projects abroad. Tikhonravov had emphasized that the launch of an orbital satellite was an inevitable stage in the development of rocket technology.
On 29 July 1955, U.S. President Dwight D. Eisenhower announced through his press secretary that, during the International Geophysical Year (IGY), the United States would launch an artificial satellite. Four days later, Leonid Sedov, a leading Soviet physicist, announced that they too would launch an artificial satellite. On 8 August, the Politburo of the Communist Party of the Soviet Union approved the proposal to create an artificial satellite. On 30 August, Vasily Ryabikov—the head of the State Commission on the R-7 rocket test launches—held a meeting where Korolev presented calculation data for a spaceflight trajectory to the Moon. They decided to develop a three-stage version of the R-7 rocket for satellite launches.
On 30 January 1956, the Council of Ministers approved practical work on an artificial Earth-orbiting satellite. This satellite, named Object D, was planned to be completed in 1957–58; it would have a mass of and would carry of scientific instruments. The first test launch of "Object D" was scheduled for 1957. Work on the satellite was to be divided among institutions as follows:
The USSR Academy of Sciences was responsible for the general scientific leadership and the supply of research instruments.
The Ministry of the Defense Industry and its primary design bureau, OKB-1, were assigned the task of building the satellite.
The Ministry of the Radio technical Industry would develop the control system, radio/technical instruments, and the telemetry system.
The Ministry of the Ship Building Industry would develop gyroscope devices.
The Ministry of the Machine Building would develop ground launching, refueling, and transportation means.
The Ministry of Defense was responsible for conducting launches.
Preliminary design work was completed in July 1956 and the scientific tasks to be carried out by the satellite were defined. These included measuring the density of the atmosphere and its ion composition, the solar wind, magnetic fields, and cosmic rays. These data would be valuable in the creation of future artificial satellites; a system of ground stations was to be developed to collect data transmitted by the satellite, observe the satellite's orbit, and transmit commands to the satellite. Because of the limited time frame, observations were planned for only 7 to 10 days and orbit calculations were not expected to be extremely accurate.
By the end of 1956, it became clear that the complexity of the ambitious design meant that 'Object D' could not be launched in time because of difficulties creating scientific instruments and the low specific impulse produced by the completed R-7 engines (304 seconds instead of the planned 309 to 310 seconds). Consequently, the government rescheduled the launch for April 1958. Object D would later fly as Sputnik 3.
Fearing the U.S. would launch a satellite before the USSR, OKB-1 suggested the creation and launch of a satellite in April–May 1957, before the IGY began in July 1957. The new satellite would be simple, light (), and easy to construct, forgoing the complex, heavy scientific equipment in favour of a simple radio transmitter. On 15 February 1957 the Council of Ministers of the USSR approved this simple satellite, designated 'Object PS', PS meaning "prosteishiy sputnik", or "elementary satellite". This version allowed the satellite to be tracked visually by Earth-based observers, and it could transmit tracking signals to ground-based receiving stations. The launch of two satellites, PS-1 and PS-2, with two R-7 rockets (8K71), was approved, provided that the R-7 completed at least two successful test flights.
Launch vehicle preparation and launch site selection
The R-7 rocket was initially designed as an intercontinental ballistic missile (ICBM) by OKB-1. The decision to build it was made by the Central Committee of the Communist Party of the Soviet Union and the Council of Ministers of the USSR on 20 May 1954. The rocket was the most powerful in the world; it was designed with excess thrust since they were unsure how heavy the hydrogen bomb payload would be. The R-7 was also known by its GRAU (later GURVO, the Russian abbreviation for "Chief Directorate of the Rocket Forces") designation 8K71. At the time, the R-7 was known to NATO sources as the T-3 or M-104, and Type A.
Several modifications were made to the R-7 rocket to adapt it to 'Object D', including upgrades to the main engines, the removal of a radio package on the booster, and a new payload fairing that made the booster almost four meters shorter than its ICBM version. Object D would later be launched as Sputnik 3 after the much lighter 'Object PS' (Sputnik 1) was launched first. The trajectories of the launch vehicle and the satellite were initially calculated using arithmometers and six-digit trigonometric tables. More complex calculations were carried out on a newly installed computer at the Academy of Sciences.
A special reconnaissance commission selected Tyuratam for the construction of a rocket proving ground, the 5th Tyuratam range, usually referred to as "NIIP-5", or "GIK-5" in the post-Soviet time. The selection was approved on 12 February 1955 by the Council of Ministers of the USSR, but the site would not be completed until 1958. Actual work on the construction of the site began on 20 July by military building units.
The first launch of an R-7 rocket (8K71 No.5L) occurred on 15 May 1957. A fire began in the Blok D strap-on almost immediately at liftoff, but the booster continued flying until 98 seconds after launch when the strap-on broke away and the vehicle crashed downrange. Three attempts to launch the second rocket (8K71 No.6) were made on 10–11 June, but an assembly defect prevented launch. The unsuccessful launch of the third R-7 rocket (8K71 No.7) took place on 12 July. An electrical short caused the vernier engines to put the missile into an uncontrolled roll which resulted in all of the strap-ons separating 33 seconds into the launch. The R-7 crashed about from the pad.
The launch of the fourth rocket (8K71 No.8), on 21 August at 15:25 Moscow Time, was successful. The rocket's core boosted the dummy warhead to the target altitude and velocity, reentered the atmosphere, and broke apart at a height of after traveling . On 27 August, the TASS issued a statement on the successful launch of a long-distance multistage ICBM. The launch of the fifth R-7 rocket (8K71 No.9), on 7 September, was also successful, but the dummy was also destroyed on atmospheric re-entry, and hence needed a redesign to completely fulfill its military purpose. The rocket, however, was deemed suitable for satellite launches, and Korolev was able to convince the State Commission to allow the use of the next R-7 to launch PS-1, allowing the delay in the rocket's military exploitation to launch the PS-1 and PS-2 satellites.
On 22 September a modified R-7 rocket, named Sputnik and indexed as 8K71PS, arrived at the proving ground and preparations for the launch of PS-1 began. Compared to the military R-7 test vehicles, the mass of 8K71PS was reduced from , its length with PS-1 was and the thrust at liftoff was .
Observation complex
PS-1 was not designed to be controlled; it could only be observed. Initial data at the launch site would be collected at six separate observatories and telegraphed to NII-4. Located back in Moscow (at Bolshevo), NII-4 was a scientific research arm of the Ministry of Defence that was dedicated to missile development. The six observatories were clustered around the launch site, with the closest situated from the launch pad.
A second, nationwide observation complex was established to track the satellite after its separation from the rocket. Called the Command-Measurement Complex, it consisted of the coordination center in NII-4 and seven distant stations situated along the line of the satellite's ground track. These tracking stations were located at Tyuratam, Sary-Shagan, Yeniseysk, Klyuchi, Yelizovo, Makat in Guryev Oblast, and Ishkup in Krasnoyarsk Krai. Stations were equipped with radar, optical instruments, and communications systems. Data from stations were transmitted by telegraphs into NII-4 where ballistics specialists calculated orbital parameters.
The observatories used a trajectory measurement system called "Tral", developed by OKB MEI (Moscow Energy Institute), by which they received and monitored data from transponders mounted on the R-7 rocket's core stage. The data were useful even after the satellite's separation from the second stage of the rocket; Sputnik's location was calculated from data on the location of the second stage, which followed Sputnik at a known distance. Tracking of the booster during launch had to be accomplished through purely passive means, such as visual coverage and radar detection. R-7 test launches demonstrated that the tracking cameras were only good up to an altitude of , but radar could track it for almost .
Outside the Soviet Union, the satellite was tracked by amateur radio operators in many countries. The booster rocket was located and tracked by the British using the Lovell Telescope at the Jodrell Bank Observatory, the only telescope in the world able to do so by radar. Canada's Newbrook Observatory was the first facility in North America to photograph Sputnik 1.
Design
Sputnik 1 was designed to meet a set of guidelines and objectives such as:
simplicity and reliability that could be adapted to future projects
a spherical body to help determine atmospheric density from its lifetime in orbit
radio equipment to facilitate tracking and to obtain data on radio waves propagation through the atmosphere
verification of the satellite's pressurization scheme
The chief constructor of Sputnik 1 at OKB-1 was Mikhail S. Khomyakov. The satellite was a diameter sphere, assembled from two hemispheres that were hermetically sealed with O-rings and connected by 36 bolts. It had a mass of . The hemispheres were 2 mm thick, and were covered with a highly polished 1 mm-thick heat shield made of an aluminium–magnesium–titanium alloy, AMG6T. The satellite carried two pairs of antennas designed by the Antenna Laboratory of OKB-1, led by Mikhail V. Krayushkin. Each antenna was made up of two whip-like parts, in length, and had an almost spherical radiation pattern.
The power supply, with a mass of , was in the shape of an octagonal nut with the radio transmitter in its hole. It consisted of three silver-zinc batteries, developed at the All-Union Research Institute of Power Sources (VNIIT) under the leadership of Nikolai S. Lidorenko. Two of these batteries powered the radio transmitter and one powered the temperature regulation system. The batteries had an expected lifetime of two weeks, and operated for 22 days. The power supply was turned on automatically at the moment of the satellite's separation from the second stage of the rocket.
The satellite had a one-watt, radio transmitting unit inside, developed by Vyacheslav I. Lappo from NII-885, the Moscow Electronics Research Institute, that worked on two frequencies, 20.005 and 40.002 MHz. Signals on the first frequency were transmitted in 0.3 s pulses (near f = 3 Hz) (under normal temperature and pressure conditions on board), with pauses of the same duration filled by pulses on the second frequency. Analysis of the radio signals was used to gather information about the electron density of the ionosphere. Temperature and pressure were encoded in the duration of radio beeps. A temperature regulation system contained a fan, a dual thermal switch, and a control thermal switch. If the temperature inside the satellite exceeded , the fan was turned on; when it fell below , the fan was turned off by the dual thermal switch. If the temperature exceeded or fell below , another control thermal switch was activated, changing the duration of the radio signal pulses. Sputnik 1 was filled with dry nitrogen, pressurized to . The satellite had a barometric switch, activated if the pressure inside the satellite fell below 130 kPa, which would have indicated failure of the pressure vessel or puncture by a meteor, and would have changed the duration of radio signal impulse.
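A sketch of that control logic follows. The numerical set points are placeholders, since the exact thresholds are not stated here, but the structure (a dual thermal switch with hysteresis for the fan, and a separate control switch that alters the radio pulse duration at more extreme temperatures) follows the description above.
```python
# Sketch of the thermal-control logic described above. Threshold values are
# hypothetical placeholders; only the structure follows the text.
FAN_ON_C, FAN_OFF_C = 36.0, 20.0           # hypothetical dual-switch set points
ALARM_HIGH_C, ALARM_LOW_C = 50.0, 0.0      # hypothetical control-switch set points

def regulate(temp_c, fan_running):
    if temp_c > FAN_ON_C:
        fan_running = True
    elif temp_c < FAN_OFF_C:
        fan_running = False
    # Outside the wider band, the control switch changes the telemetry encoding
    # (the duration of the radio pulses) rather than the fan state.
    alter_pulse_duration = temp_c > ALARM_HIGH_C or temp_c < ALARM_LOW_C
    return fan_running, alter_pulse_duration

print(regulate(40.0, False))   # (True, False): fan comes on
print(regulate(25.0, True))    # (True, False): hysteresis keeps the fan running
print(regulate(-5.0, True))    # (False, True): fan off, pulse duration changed
```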
While attached to the rocket, Sputnik 1 was protected by a cone-shaped payload fairing, with a height of . The fairing separated from both Sputnik and the spent R-7 second stage at the same time as the satellite was ejected. Tests of the satellite were conducted at OKB-1 under the leadership of Oleg G. Ivanovsky.
Launch and mission
The control system of the Sputnik rocket was adjusted to an intended orbit of , with an orbital period of 101.5 minutes. The trajectory had been calculated earlier by Georgi Grechko, using the USSR Academy of Sciences' mainframe computer.
The Sputnik rocket was launched on 4 October 1957 at 19:28:34 UTC (5 October at the launch site) from Site No.1 at NIIP-5. Telemetry indicated that the strap-ons separated 116 seconds into the flight and the core stage engine shut down 295.4 seconds into the flight. At shutdown, the 7.5-tonne core stage (with PS-1 attached) had attained an altitude of above sea level, a velocity of , and a velocity vector inclination to the local horizon of 0 degrees 24 minutes. This resulted in an initial elliptical orbit of by , with an apogee approximately lower than intended, and an inclination of 65.10° and a period of 96.20 minutes.
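As a quick consistency check on the quoted 96.20-minute period, Kepler's third law gives the corresponding semi-major axis in a simple two-body approximation (drag and Earth's oblateness ignored):
```python
import math

# Back-of-the-envelope check: Kepler's third law links the 96.20-minute period
# quoted above to the orbit's semi-major axis (two-body approximation only).
MU_EARTH = 398_600.4418          # km^3 / s^2, standard gravitational parameter
R_EARTH = 6_371.0                # km, mean Earth radius

period_s = 96.20 * 60.0
semi_major_axis = (MU_EARTH * period_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

print(round(semi_major_axis, 1))             # about 6950 km
print(round(semi_major_axis - R_EARTH, 1))   # mean altitude of a few hundred km
```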
Several engines did not fire on time, almost aborting the mission. A fuel regulator in the booster also failed around 16 seconds into launch, which resulted in excessive RP-1 consumption for most of the powered flight and the engine thrust being 4% above nominal. Core stage cutoff was intended for T+296 seconds, but the premature propellant depletion caused thrust termination to occur one second earlier when a sensor detected overspeed of the empty RP-1 turbopump. There were of LOX remaining at cutoff.
At 19.9 seconds after engine cut-off, PS-1 separated from the second stage and the satellite's transmitter was activated. These signals were detected at the IP-1 station by Junior Engineer-Lieutenant V.G. Borisov, where reception of Sputnik 1's "beep-beep-beep" tones confirmed the satellite's successful deployment. Reception lasted for two minutes, until PS-1 passed below the horizon. The Tral telemetry system on the R-7 core stage continued to transmit and was detected on its second orbit.
The designers, engineers, and technicians who developed the rocket and satellite watched the launch from the range. After the launch they drove to the mobile radio station to listen for signals from the satellite. They waited about 90 minutes to ensure that the satellite had made one orbit and was transmitting before Korolev called Soviet premier Nikita Khrushchev.
On the first orbit the Telegraph Agency of the Soviet Union (TASS) transmitted: "As result of great, intense work of scientific institutes and design bureaus the first artificial Earth satellite has been built". The R-7 core stage, with a mass of 7.5 tonnes and a length of 26 metres, also reached Earth orbit. It was a first magnitude object following behind the satellite and visible at night. Deployable reflective panels were placed on the booster in order to increase its visibility for tracking. A small highly polished sphere, the satellite was barely visible at sixth magnitude, and thus harder to follow optically. The batteries ran out on 26 October 1957, after the satellite completed 326 orbits.
The core stage of the R-7 remained in orbit for two months until 2 December 1957, while Sputnik 1 orbited for three months, until 4 January 1958, having completed 1,440 orbits of the Earth.
Reception
Organized through the citizen science project Operation Moonwatch, teams of visual observers at 150 stations in the United States and other countries were alerted during the night to watch for the satellite at dawn and during the evening twilight as it passed overhead. The USSR requested amateur and professional radio operators to tape record the signal being transmitted from the satellite. One of the first observations of it in the Western world was made at the school observatory in Rodewisch (Saxony).
News reports at the time pointed out that "anyone possessing a short wave receiver can hear the new Russian earth satellite as it hurtles over this area of the globe." Directions, provided by the American Radio Relay League, were to "Tune in 20 megacycles sharply, by the time signals, given on that frequency. Then tune to slightly higher frequencies. The 'beep, beep' sound of the satellite can be heard each time it rounds the globe." The first recording of Sputnik 1's signal was made by RCA engineers near Riverhead, Long Island. They then drove the tape recording into Manhattan for broadcast to the public over NBC radio. However, as Sputnik rose higher over the East Coast, its signal was picked up by W2AEE, the ham radio station of Columbia University. Students working in the university's FM station, WKCR, made a tape of this, and were the first to rebroadcast the Sputnik signal to the American public (or whoever could receive the FM station).
The Soviet Union agreed to transmit on frequencies that worked with the United States' existing infrastructure, but later announced the lower frequencies. Asserting that the launch "did not come as a surprise", the White House refused to comment on any military aspects. On 5 October, the Naval Research Laboratory captured recordings of Sputnik 1 during four crossings over the United States. The USAF Cambridge Research Center collaborated with Bendix-Friez, Westinghouse Broadcasting, and the Smithsonian Astrophysical Observatory to obtain a video of Sputnik's rocket body crossing the pre-dawn sky of Baltimore, broadcast on 12 October by WBZ-TV in Boston.
The success of Sputnik 1 seemed to have changed minds around the world regarding a shift in power to the Soviets.
The USSR's launch of Sputnik 1 spurred the United States to create the Advanced Research Projects Agency (ARPA, later DARPA) in February 1958 to regain a technological lead.
In Britain, the media and the public initially reacted with a mixture of fear for the future and amazement at human progress. Many newspapers and magazines heralded the arrival of the Space Age. However, when the USSR launched Sputnik 2, carrying the dog Laika, the media narrative returned to one of anti-Communism, and many people sent protests to the Soviet embassy and the RSPCA.
Propaganda
Sputnik 1 was not immediately used for Soviet propaganda. The Soviets had kept quiet about their earlier accomplishments in rocketry, fearing that it would lead to secrets being revealed and failures being exploited by the West. When the Soviets began using Sputnik in their propaganda, they emphasized pride in the achievement of Soviet technology, arguing that it demonstrated the Soviets' superiority over the West. People were encouraged to listen to Sputnik's signals on the radio and to look out for Sputnik in the night sky. While Sputnik itself had been highly polished, its small size made it barely visible to the naked eye. What most watchers actually saw was the much more visible 26-metre core stage of the R-7. Shortly after the launch of PS-1, Khrushchev pressed Korolev to launch another satellite to coincide with the 40th anniversary of the October Revolution, on 7 November 1957.
The launch of Sputnik 1 surprised the American public, and shattered the perception created by American propaganda of the United States as the technological superpower, and the Soviet Union as a backward country. Privately, however, the CIA and President Eisenhower were aware of progress being made by the Soviets on Sputnik from secret spy plane imagery. Together with the Jet Propulsion Laboratory (JPL), the Army Ballistic Missile Agency built Explorer 1, and launched it on 31 January 1958. Before work was completed, however, the Soviet Union launched a second satellite, Sputnik 2, on 3 November 1957. Meanwhile, the televised failure of Vanguard TV-3 on 6 December 1957 deepened American dismay over the country's position in the Space Race. The Americans took a more aggressive stance in the emerging space race, resulting in an emphasis on science and technological research, and reforms in many areas from the military to education systems. The federal government began investing in science, engineering, and mathematics at all levels of education. An advanced research group was assembled for military purposes. These research groups developed weapons such as ICBMs and missile defense systems, as well as spy satellites for the U.S.
Legacy
Initially, U.S. President Dwight Eisenhower was not surprised by Sputnik 1. He had been forewarned of the R-7's capabilities by information derived from U-2 spy plane overflight photos, as well as signals and telemetry intercepts. General James M. Gavin wrote in 1958 that he had predicted to the Army Scientific Advisory Panel on 12 September 1957 that the Soviets would launch a satellite within 30 days, and that on 4 October he and Wernher von Braun had agreed that a launch was imminent. The Eisenhower administration's first response was low-key and almost dismissive. Eisenhower was even pleased that the USSR, not the U.S., would be the first to test the waters of the still-uncertain legal status of orbital satellite overflights. Eisenhower had suffered the Soviet protests and shoot-downs of Project Genetrix (Moby Dick) balloons and was concerned about the probability of a U-2 being shot down. To set a precedent for "freedom of space" before the launch of America's secret WS-117L spy satellites, the U.S. had launched Project Vanguard as its own "civilian" satellite entry for the International Geophysical Year. Eisenhower greatly underestimated the reaction of the American public, who were shocked by the launch of Sputnik and by the televised failure of the Vanguard Test Vehicle 3 launch attempt. The sense of anxiety was inflamed by Democratic politicians, who portrayed the United States as woefully behind. One of the many books that suddenly appeared for the lay audience noted seven points of "impact" upon the nation: Western leadership, Western strategy and tactics, missile production, applied research, basic research, education, and democratic culture. As the public and the government became interested in space and related science and technology, the phenomenon was sometimes dubbed the "Sputnik craze".
The U.S. soon had a number of successful satellites, including Explorer 1, Project SCORE, and Courier 1B. However, public reaction to the Sputnik crisis spurred America to action in the Space Race, leading to the creation of both the Advanced Research Projects Agency (renamed the Defense Advanced Research Projects Agency, or DARPA, in 1972), and NASA (through the National Aeronautics and Space Act), as well as increased U.S. government spending on scientific research and education through the National Defense Education Act.
Sputnik also contributed directly to a new emphasis on science and technology in American schools. With a sense of urgency, Congress enacted the 1958 National Defense Education Act, which provided low-interest loans for college tuition to students majoring in mathematics and science. After the launch of Sputnik, a poll conducted and published by the University of Michigan showed that 26% of Americans surveyed thought that Russian science and engineering were superior to those of the United States. (A year later, however, that figure had dropped to 10% as the U.S. began launching its own satellites into space.)
One consequence of the Sputnik shock was the perception of a "missile gap". This became a dominant issue in the 1960 presidential campaign.
The Communist Party newspaper Pravda only printed a few paragraphs about Sputnik 1 on 4 October.
Sputnik also inspired a generation of engineers and scientists. Harrison Storms, the North American designer who was responsible for the X-15 rocket plane, and went on to head the effort to design the Apollo command and service module and Saturn V launch vehicle's second stage, was moved by the launch of Sputnik to think of space as being the next step for America. Astronauts Alan Shepard (who was the first American in space) and Deke Slayton later wrote of how the sight of Sputnik 1 passing overhead inspired them to their new careers.
The launch of Sputnik 1 led to the resurgence of the suffix -nik in the English language. The American writer Herb Caen was inspired to coin the term "beatnik" in an article about the Beat Generation in the San Francisco Chronicle on 2 April 1958.
The flag of the Russian city of Kaluga, (which, due to it being Konstantin Tsiolkovsky's place of work and residency, is very dedicated to space and space travel) features a small Sputnik in the canton.
On 3 October 2007, Google marked the 50th anniversary of the launch of Sputnik 1 with a Google Doodle.
Satellite navigation
The launch of Sputnik also planted the seeds for the development of modern satellite navigation. Two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL), decided to monitor Sputnik's radio transmissions and within hours realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC computer to do the heavy calculations this required.
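A highly simplified sketch of the Doppler idea (one pass modelled as straight-line motion at constant speed past a fixed receiver; all numbers are illustrative, not Sputnik's actual orbit): the received frequency traces an S-shaped curve whose zero crossing gives the time of closest approach and whose slope there gives the slant range.
```python
import math

# Simplified Doppler-tracking sketch: model one satellite pass as straight-line
# motion at constant speed past a fixed receiver, then recover the time of
# closest approach and the slant range from the simulated Doppler curve.
C = 299_792_458.0          # m/s
F0 = 20.005e6              # Hz, one of Sputnik's transmit frequencies
V = 7_600.0                # m/s, assumed satellite speed
D = 1_000_000.0            # m, assumed distance at closest approach
T0 = 300.0                 # s, assumed time of closest approach

def doppler_shift(t):
    x = V * (t - T0)                       # along-track position
    r = math.hypot(x, D)                   # slant range
    radial_velocity = V * x / r            # positive when receding
    return -F0 * radial_velocity / C       # received minus transmitted frequency

# "Observe" the shift on a grid of times and recover T0 and D from the curve.
times = [i * 1.0 for i in range(601)]
shifts = [doppler_shift(t) for t in times]

# Time of closest approach: where the shift changes sign.
i = next(k for k in range(len(shifts) - 1) if shifts[k] > 0 >= shifts[k + 1])
t_closest = times[i] + shifts[i] / (shifts[i] - shifts[i + 1])

# Slope of the shift at closest approach is -F0 * V**2 / (C * D), so D follows.
slope = (shifts[i + 1] - shifts[i - 1]) / (times[i + 1] - times[i - 1])
d_estimate = -F0 * V**2 / (C * slope)

print(round(t_closest, 1), round(d_estimate / 1000.0, 1))   # about 300 s, about 1000 km
```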
Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given the satellite's. At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location. This led them and APL to develop the TRANSIT system, a forerunner of modern Global Positioning System (GPS) satellites.
Surviving examples
Backups
At least two vintage duplicates of Sputnik 1 exist, built apparently as backup units. The first resides near Moscow in the corporate museum of Energia, the modern descendant of Korolev's design bureau, where it is on display by appointment only. The second is a flight-ready backup at the Cosmosphere space museum in Hutchinson, Kansas, which also has an engineering model of the Sputnik 2.
Models
The Museum of Flight in Seattle, Washington, has a Sputnik 1 that lacks internal components, though it does have casings and molded fittings inside (as well as evidence of battery wear), suggesting it may be an engineering model. Authenticated by the Memorial Museum of Cosmonautics in Moscow, the unit was auctioned in 2001 and purchased by an anonymous private buyer, who donated it to the museum.
The Sputnik 1 EMC/EMI is a class of full-scale laboratory models of the satellite. The models, manufactured by OKB-1 and NII-885 (headed by Mikhail Ryazansky), were introduced on 15 February 1957. They were made to test ground electromagnetic compatibility (EMC) and electromagnetic interference (EMI).
Replicas
In 1959, the Soviet Union donated a replica of Sputnik to the United Nations. There are other full-size Sputnik replicas (with varying degrees of accuracy) on display in locations around the world, including the National Air and Space Museum in the United States, the Science Museum in the United Kingdom, the Powerhouse Museum in Australia, and outside the Russian embassy in Spain.
Three one-third scale student-built replicas of Sputnik 1 were deployed from the Mir space station between 1997 and 1999. The first, named Sputnik 40 to commemorate the fortieth anniversary of the launch of Sputnik 1, was deployed in November 1997. Sputnik 41 was launched a year later, and Sputnik 99 was deployed in February 1999. A fourth replica was launched, but never deployed, and was destroyed when Mir was deorbited.
Private owners
Two more Sputniks are claimed to be in the personal collections of American entrepreneurs Richard Garriott and Jay S. Walker.
| Technology | Uncrewed spacecraft | null |
28491 | https://en.wikipedia.org/wiki/Submachine%20gun | Submachine gun | A submachine gun (SMG) is a magazine-fed automatic carbine designed to fire handgun cartridges. The term "submachine gun" was coined by John T. Thompson, the inventor of the Thompson submachine gun, to describe its design concept as an automatic firearm with notably less firepower than a machine gun (hence the prefix "sub-"). As a machine gun must fire rifle cartridges to be classified as such, submachine guns are not considered machine guns.
The submachine gun was developed during World War I (1914–1918) as a close quarter offensive weapon, mainly for trench raiding. At its peak during World War II (1939–1945), millions of submachine guns were made for assault troops and auxiliaries whose doctrines emphasized close-quarter suppressive fire. New submachine gun designs appeared frequently during the Cold War, especially among special forces, covert operation commandos and mechanized infantrymen. Submachine gun usage for frontline combat decreased in the 1980s and 1990s, and by the early 21st century, submachine guns have largely been replaced by assault rifles, which have a longer effective range, have increased stopping power, and can better penetrate the helmets and body armor used by modern soldiers. However, they are still used by security forces, police tactical units, paramilitary and bodyguards for close-quarters combat because they are "a pistol-caliber weapon that's easy to control, and less likely to overpenetrate the target".
Name
There are some inconsistencies in the classification of submachine guns. British Commonwealth sources often refer to SMGs as "machine carbines". Other sources refer to SMGs as "machine pistols" because they fire pistol-caliber ammunition, for example, the MP-40 and MP5, where "MP" stands for Maschinenpistole ("submachine gun" in German, but cognate with the English term "machine pistol"). However, the term "machine pistol" is also used to describe a handgun-style firearm capable of fully automatic or burst fire, such as the Stechkin, Beretta 93R, Glock 18, and the H&K VP70. Furthermore, personal defense weapons such as the FN P90 and H&K MP7 are often called submachine guns.
History
In 1895, Hiram Maxim produced the 'miniature Maxim', a pistol-calibre Maxim machine gun weighing that was sold in small quantities to various countries and tested by the US military but not adopted. In 1896, a select-fire pistol was patented by the British inventor Hugh Gabbett-Fairfax.
In April 1914, Abiel Bethel Revelli, an Italian military officer, patented a twin-barreled, magazine-fed automatic gun in a pistol caliber, lighter than a machine gun and shorter than a rifle. A common myth is that this weapon was originally designed as an aircraft gun. In reality, ground use was taken into consideration from the very beginning, particularly for the Bersaglieri's cyclist battalions.
World War I
Stocked pistols were common at the beginning of the 20th century; the Germans initially used heavier versions of the P08 pistol equipped with a detachable stock, a larger-capacity snail-drum magazine and a longer barrel.
In 1915, the Kingdom of Italy adopted Revelli's design as the FIAT Mod. 1915. It fired pistol-caliber 9mm Glisenti ammunition, but was not a true submachine gun, as it was originally designed as a mounted weapon. In late 1915, the first submachine gun with a buttstock was built: the Austro-Hungarian Standschütze Hellriegel M1915, although the weapon was never used in combat.
In February 1916, the Austro-Hungarians first fielded the M.12/P16 machine pistol. This was the first machine pistol to be adopted by any military, being issued to Tyrolean units fighting in the Alps.
In 1916, Heinrich Senn of Bern designed a modification of the Swiss Luger pistol to fire in single shots or in full-automatic. Around the same time Georg Luger demonstrated a similar Luger machine pistol which inspired the German Army to develop submachine guns.
Colonel Bethel-Abiel Revelli had already conceived the principles of the submachine gun in September 1915, when he wrote that his gun could be converted to a single-barreled version that "may be mounted in the manner of a rifle so that it may be fired from the shoulder". The FIAT Mod. 1915 would be later modified into the OVP 1918 automatic carbine. The OVP 1918 had a traditional wooden stock, a 25-round top-fed box magazine, and had a cyclic rate of fire of 900 rounds per minute.
By 1918, Bergmann Waffenfabrik had developed the 9x19mm Parabellum MP 18, the first practical submachine gun. This weapon used the same 32-round snail-drum magazine as the Luger P-08. The MP 18 was used in significant numbers by German stormtroopers employing infiltration tactics, achieving some notable successes in the final year of the war. However, these were not enough to prevent Germany's collapse in November 1918. After World War I, the MP 18 evolved into the MP28/II SMG, which incorporated a simple 32-round box magazine, selective fire, and other minor improvements. Though the MP18 had a rather short service life, it was influential in the design of later submachine guns, such as the Lanchester, Sten and PPD-40.
The .45 ACP Thompson submachine gun had been in development at approximately the same time as the Bergmann and the Beretta. However, the war ended before prototypes could be shipped to Europe. Although it had missed its chance to be the first purpose-designed submachine gun to enter service, it became the basis for later weapons, and was much more successful than the submachine guns produced during World War I.
Interwar period
The Thompson entered production as the M1921. It was available to civilians, but, because of the weapon's high price, initially saw poor sales. The Thompson (with one Type XX 20 round "stick" magazine) had been priced at $200 in 1921 (roughly ). The Thompson was used in combat that same year:
West Virginia state police bought 37 guns and used them during the Battle of Blair Mountain. Some of the first batches of Thompsons were bought by agents of the Irish Republican Army. They purchased a total of 653 units, though U.S. customs authorities in New York seized 495 of the units in June 1921.
The Thompson, nicknamed the "Tommy Gun" or "Chicago Typewriter", became notorious in the U.S. due to its employment by the Mafia: the image of pinstripe-suited James Cagney types wielding drum-magazine Thompsons caused some military planners to shun the weapon. However, the FBI and other U.S. police forces themselves showed no reluctance to use and prominently display these weapons. Eventually, the submachine gun was gradually accepted by many military organizations, especially as World War II loomed, with many countries developing their own designs. The U.S. Marine Corps adopted the Thompson during this period, using it during the Banana Wars in Central America; it was also used by the China Marines.
During the 1924 uprising, the Soviets supplied four Thompsons to Estonian Communist militants; those were used against Estonian soldiers in a failed attempt to storm the Tallinn barracks. Some of the defenders were armed with MP 18s, and this was possibly the first engagement in which submachine guns were used on both sides.
Germany transferred its MP 18s to the German police forces after World War I. They also saw use in the hands of various paramilitary Freikorps during the aftermath of the German Revolution. In the 1920s a new, more reliable box magazine was developed for the MP 18 to replace the older snail-drum magazines. In 1928 a new version of the MP 18, the MP 28, was introduced; it featured the new box magazine as standard, a bayonet lug and a single-shot mode. The MP 28 was manufactured in Belgium and Spain and was widely exported from there, including to China and South America. Another variant based on the MP 18 was the MP 34, which was manufactured by the Germans through the Swiss front company Solothurn. The MP 34 was manufactured from the very best materials available and finished to the highest possible standard; consequently, its production costs were extremely high. It was adopted by the Austrian police and army in the 1930s, and these weapons were taken over by the Germans after the annexation of Austria in 1938. The MP35 was another interwar German submachine gun, designed by the Bergmann brothers. It was exported to Sweden and Ethiopia and also saw extensive use in the Spanish Civil War. About 40,000 of the type were manufactured until 1944, with many going into the hands of the Waffen SS. The Erma EMP was yet another submachine gun from this period, based on a design by Heinrich Vollmer; about 10,000 were manufactured. It was exported to Spain, Mexico, China and Yugoslavia, but was also used domestically by the SS, as well as being produced under license in Francoist Spain.
World War II
Changes in design accelerated during the war, with one major trend being the abandonment of complex and finely made pre-war designs like the Thompson submachine gun in favor of weapons designed for cheap mass production and easy replacement, like the M3 Grease Gun.
While the Italians were among the first to develop submachine guns during World War I, they were slow to produce them under Mussolini; the 9mm Parabellum Beretta Model 38 (MAB 38) was not available in large numbers until 1943. The MAB 38 was made in a series of improved and simplified models all sharing the same basic layout. The MAB 38 has two triggers, the front for semi-auto and rear for full-auto. Most models use standard wooden stocks, although some models were fitted with an MP40-style under-folding stock and are commonly mistaken for it. The MAB 38 series was extremely robust and proved very popular with both Axis and Allied troops (who used captured MAB 38s). It is considered the most successful and effective Italian small arm of World War II. During the later years of the war, the TZ-45 submachine gun was manufactured in small numbers in the Italian Social Republic. A cheaper alternative to the MAB 38, it also sported an unusual grip safety.
In 1939, the Germans introduced the 9mm Parabellum MP38, which was first used during the invasion of Poland in September that year. Production of the MP38 was still just starting, and only a few thousand were in service at the time. It proved to be far more practical and effective in close-quarters combat than the standard-issue German Karabiner 98k bolt-action rifle. From this experience, the simplified and modernized MP40 (commonly and erroneously referred to as the Schmeisser) was developed and made in large numbers; about a million were made during World War II. The MP40 was lighter than the MP38. It also used more stamped parts, making it faster and cheaper to produce. The MP38 and MP40 were the first SMGs to use plastic furniture and a practical folding stock, which became standard for all future SMG designs. The Germans used a large number of captured Soviet PPSh-41 submachine guns; some were converted to fire 9mm Parabellum while others were used unmodified (the German 7.63×25mm Mauser cartridge had dimensions identical to the 7.62×25mm Tokarev, albeit slightly less powerful).
During the Winter War, the badly outnumbered Finns used the Suomi KP/-31 in large numbers against the Soviets with devastating effect. Finnish ski troops became known for appearing out of the woods on one side of a road, raking Soviet columns with SMG fire and disappearing back into the woods on the other side. During the Continuation War, Finnish Sissi patrols often equipped every soldier with a KP/-31. The Suomi fired 9mm Parabellum ammunition from a 71-round drum magazine (although often loaded with 74 rounds). "This SMG showed the world the importance of the submachine gun in modern warfare", prompting the development, adoption and mass production of submachine guns by most of the world's armies. The Suomi was used in combat until the end of the Lapland War, was widely exported and remained in service into the late 1970s. Inspired by captured examples of the Soviet PPS submachine gun, a gun that was cheaper and quicker to manufacture than the Suomi, the Finns introduced the KP m/44 submachine gun in 1944.
In 1940, the Soviets introduced the 7.62×25mm PPD-40 and later the more easily manufactured PPSh-41 in response to their experience during the Winter War against Finland. The PPSh's 71-round drum magazine is a copy of the Suomi's. Later in the war they developed the even more readily mass-produced PPS submachine gun, all three firing the same small-caliber but high-powered Tokarev cartridge. The USSR went on to make over 6 million PPSh-41s and 2 million PPS-43s by the end of World War II. Thus, the Soviet Union could field huge numbers of submachine guns against the Wehrmacht, with whole infantry battalions being armed with little else. Even in the hands of conscripts with minimal training, the volume of fire produced by massed submachine guns could be overwhelming.
Britain entered the war with no domestic submachine gun design and instead imported the expensive US M1928 Thompson. After evaluating their battlefield experience in the Battle of France and losing many weapons in the Dunkirk evacuation, the Royal Navy adopted the 9mm Parabellum Lanchester submachine gun. With no time for the usual research and development for a new weapon, it was decided to make a direct copy of the German MP 28. Like other early submachine guns, it was difficult and expensive to manufacture. Shortly thereafter, the simpler Sten submachine gun was developed for general use by the British armed forces; it was much cheaper and faster to make. Over 4 million Sten guns were made during World War II. The Sten was so cheap and easy to produce that towards the end of the war, as its economic base approached crisis, Germany started manufacturing its own copy, the MP 3008. After the war, the British replaced the Sten with the Sterling submachine gun.
The United States and its allies used the Thompson submachine gun, especially the simplified M1. The Thompson was still expensive and slow to produce. Therefore, the U.S. developed the M3 submachine gun or "Grease Gun" in 1942, followed by the improved M3A1 in 1944. While the M3 was no more effective than the Tommy Gun, it was made primarily of stamped parts welded together and could be produced much faster and at a fraction of the cost of a Thompson; its much lower rate of fire also made it considerably more controllable. It could be configured to fire either .45 ACP or 9mm Luger ammunition. The M3A1 was among the longest-serving submachine gun designs, being produced into the 1960s and serving in US forces into the 1990s.
France produced only about 2,000 of the MAS-38 submachine gun (chambered in 7.65×20mm Longue) before the Fall of France in June 1940. Production was taken over by the occupying Germans, who used them for themselves and also put them into the hands of the Vichy French.
The Owen Gun is a 9mm Parabellum Australian submachine gun designed by Evelyn Owen in 1939. The Owen is a simple, highly reliable, open bolt, blowback SMG. It was designed to be fired either from the shoulder or the hip. It is easily recognisable, owing to its unconventional appearance, including a quick-release barrel and butt-stock, double pistol grips, top-mounted magazine, and unusual offset right-side-mounted sights. The Owen was the only entirely Australian-designed and constructed service submachine gun of World War II and was used by the Australian Army from 1943 until the mid-1960s, when it was replaced by the F1 submachine gun. Only about 45,000 Owens were produced during the war for a unit cost of about A$30.
While most other countries during World War II developed submachine guns, the Empire of Japan produced only one, the Type 100 submachine gun, based heavily on the German MP28. Like most other small arms created in Imperial Japan, the Type 100 could be fitted with the Type 30 bayonet. It used the 8×22mm Nambu cartridge, which was about half as powerful as a standard Western 9mm Parabellum round. Production of the gun was inadequate: by the war's end, Japan had manufactured only about 7,500 of the Type 100, whereas Germany, America, and other countries in the war had produced well over a million of their own SMG designs.
The German military concluded that most firefights took place at ranges of no more than ~. They therefore sought to develop a new class of weapon that would combine the high volume of fire of the submachine gun with an intermediate cartridge that enabled the shooter to place accurate shots at medium ranges (beyond that of the range of the typical submachine gun). After a false start with the FG 42, this led to the development of the select-fire assault rifle (assault rifle or storm rifle is a translation of the German ). In the years following the war, this new format began to replace the submachine gun in military use to a large extent. Based on the StG44, the Soviet Union created the AK-47, which is to date the world's most produced firearm, with over 100 million made.
Post–World War II
After World War II, "new submachine gun designs appeared almost every week to replace the admittedly rough and ready designs which had appeared during the war. Some (the better ones) survived, most rarely got past the glossy brochure stage." Most of these survivors were cheaper, easier, and faster to make than their predecessors. As such, they were widely distributed.
In 1945, Sweden introduced the 9 mm Parabellum Carl Gustaf m/45, which borrowed from and improved on many elements of earlier submachine gun designs. It has a tubular stamped-steel receiver with a side-folding stock. The m/45 was widely exported and especially popular with CIA operatives and U.S. special forces during the Vietnam War. In U.S. service it was known as the "Swedish-K". In 1966, the Swedish government blocked the sale of firearms to the United States because it opposed the Vietnam War. As a result, in the following year Smith & Wesson began to manufacture an m/45 clone called the M76. The m/45 was used in combat by Swedish troops as part of the United Nations Operation in the Congo during the Congo Crisis in the early 1960s. Battlefield reports of the lack of penetrative power of the 9mm Parabellum during this operation led to Sweden developing a more powerful 9 mm round, designated "9mm m/39B".
In 1946, Denmark introduced the Madsen M-46 and, in 1950, an improved model, the Madsen M-50. These 9 mm Parabellum stamped steel SMGs featured a unique clamshell type design, a side-folding stock, and a grip safety on the magazine housing. The Madsen was widely exported and especially popular in Latin America, with variants made by several countries.
In 1948, Czechoslovakia introduced the Sa vz. 23 series. This 9 mm Parabellum SMG introduced several innovations: a progressive trigger for selecting between semi-automatic and full auto fire, a telescoping bolt that extends forward wrapping around the barrel, and a vertical handgrip housing the magazine and trigger mechanism. The vz. 23 series was widely exported and especially popular in Africa and the Middle East with variants made by several countries. The vz. 23 inspired the development of the Uzi submachine gun.
In 1949, France introduced the MAT-49 to replace the hodgepodge of French, American, British, German, and Italian SMGs in French service after World War II. The 9 mm Parabellum MAT-49 is an inexpensive stamped-steel SMG with a telescoping wire stock, a pronounced folding magazine housing, and a grip safety. This "wildebeest-like design" proved to be an extremely reliable and effective SMG and was used by the French well into the 1980s. It was also widely exported to Africa, Asia, and the Middle East.
1950s
In 1954, Israel introduced a 9mm Parabellum open-bolt, blowback-operated submachine gun called the Uzi (after its designer, Uziel Gal). The Uzi was one of the first weapons to use a telescoping bolt design, with the magazine housed in the pistol grip for a shorter weapon. The Uzi has become the most popular submachine gun in the world, with over 10 million units sold, more than any other design.
In 1959, Beretta introduced the Model 12. This 9mm Parabellum submachine gun was a complete break with previous Beretta designs. It is a small, compact, very well-made SMG and among the first to use a telescoping bolt design. The M12 was designed for mass production, made largely of stamped steel and welded together. It is identified by its tubular receiver, double pistol grips, side-folding stock and the magazine housed in front of the trigger guard. The M12 uses the same magazines as the Model 38 series.
Submachine guns in the Korean War
Submachine guns again proved to be an important weapon system in the Korean War (25 June 1950 – 27 July 1953). The Korean People's Army (KPA) and the Chinese People's Volunteer Army (PVA) fighting in Korea received massive numbers of the PPSh-41, in addition to the North Korean Type 49 and the Chinese Type 50, which were both licensed copies of the PPSh-41 with small mechanical revisions. While lacking the accuracy of the U.S. M1 Garand and M1 carbine, it provided more firepower at short distances and was well-suited to the close-range firefights that typically occurred in that conflict, especially at night. United Nations Command forces in defensive outposts or on patrol often had trouble returning a sufficient volume of fire when attacked by companies of infantry armed with the PPSh. As infantry Captain (later General) Hal Moore stated: "on full automatic it sprayed a lot of bullets and most of the killing in Korea was done at very close ranges and it was done quickly—a matter of who responded faster. In situations like that it outclassed and outgunned what we had. A close-in patrol fight was over very quickly and usually we lost because of it." U.S. servicemen, however, felt that their M2 carbines were superior to the PPSh-41 at the typical engagement range of 100–150 meters.
Other older designs also saw use in the Korean War. The Thompson had seen much use by the U.S. and South Korean militaries, even though it had been replaced as standard-issue by the M3/M3A1. With huge numbers of guns available in army ordnance arsenals, the Thompson remained classed as "limited standard" or "substitute standard" long after the standardization of the M3/M3A1. Many Thompsons were distributed to the US-backed Nationalist Chinese armed forces as military aid before the fall of Chiang Kai-shek's government to Mao Zedong's communist forces at the end of the Chinese Civil War in 1949. (Thompsons had already been widely used throughout China since the 1920s, at a time when several Chinese warlords and their military factions running various parts of the fragmented country made purchases of the weapon and then subsequently produced many local copies.) US troops were surprised to encounter communist Chinese troops armed with Thompsons (among other captured US-made Nationalist Chinese and American firearms), especially during unexpected night assaults, which became a prominent Chinese combat tactic in the conflict. The gun's ability to deliver large quantities of short-range automatic assault fire proved very useful in both defense and assault during the early part of the war, when the fighting was constantly mobile and shifting back and forth. Many Chinese Thompsons were captured and placed into service with American soldiers and marines for the remaining period of the war.
1960s
In the 1960s, Heckler & Koch developed the 9mm Parabellum MP5 submachine gun. The MP5 is based on the G3 rifle and uses the same closed-bolt roller-delayed blowback operation system. This makes the MP5 more accurate than open-bolt SMGs, such as the Uzi. The MP5 is one of the most widely used submachine guns in the world, having been adopted by over 40 nations and numerous military, law enforcement, and security organizations.
In 1969, Steyr introduced the MPi 69, which is similar in appearance to the Uzi SMG. The MPi 69's receiver is a squared stamped steel tube that partly nestles inside a large plastic molding (resembling a lower receiver) which contains the forward hand-grip, vertical pistol-grip and the fire control group, making the MPi 69 one of the first firearms to use plastic construction in this way.
1970s
In the 1970s, extremely compact submachine guns, such as the .45 ACP MAC-10 and .380 ACP MAC-11, were developed to be used with silencers or suppressors. While these SMGs received enormous publicity and were prominently displayed in films and television, they were not widely adopted by military or law enforcement agencies.
1980s
By the 1980s, the demand for new submachine guns was very low and could be easily met by existing makers with existing designs. However, following H&K's lead, other manufacturers began designing submachine guns based on their existing assault rifle patterns. These new SMGs offered a high degree of parts commonality with parent weapons, thereby easing logistical concerns.
In 1982, Colt introduced the Colt 9mm SMG, based on the M16 rifle. The Colt SMG is a closed-bolt, blowback-operated SMG, and its overall aesthetics are identical to those of most M16-type rifles. The magazine well is modified with a special adapter to allow the use of smaller 9mm magazines. The magazines themselves are a copy of the Israeli Uzi SMG magazine, modified to fit the Colt and to lock the bolt back after the last shot. The Colt was widely used by US law enforcement and the USMC.
1990s
In 1999, H&K introduced the UMP ("Universal Machine Pistol"). The UMP is a 9mm Parabellum, .40 S&W, or .45 ACP closed-bolt, blowback-operated SMG based on the H&K G36 assault rifle. It features a predominantly polymer construction and was designed to be a more cost-effective, lighter, and less complex alternative to the MP5. The UMP has a side-folding stock and is available with four different trigger group configurations. It was also designed to use a wide range of Picatinny rail-mounted accessories.
2000s
In 2004, Izhmash introduced the Vityaz-SN, a 9mm Parabellum closed-bolt, straight-blowback-operated submachine gun. It is based on the AK-74 rifle and offers a high degree of parts commonality with the AK-74. It is the standard submachine gun for all branches of the Russian military and police forces.
In 2009, KRISS USA introduced the KRISS Vector family of submachine guns. Futuristic in appearance, the KRISS uses an unconventional delayed blowback system combined with in-line design to reduce perceived recoil and muzzle climb. The KRISS comes in 9mm Parabellum, .40 S&W, .45 ACP, 9×21mm, 10mm Auto, and .357 SIG. It also uses standard Glock pistol magazines.
2010s
By the early 2010s, compact assault rifles and personal defense weapons had replaced submachine guns in most roles. Factors such as the increasing use of body armor and logistical concerns have combined to limit the appeal of submachine guns. However, SMGs are still used by police (especially SWAT teams) for dealing with heavily armed suspects and by military special forces units for close-quarters combat, due to their reduced size, recoil and muzzle blast, and capability for sound suppression. Submachine gun designs adopted during this period include the Brügger & Thomet APC and SIG MPX.
Land defence pistol
During the Apartheid era in South Africa and the Rhodesian Bush War/South African Border War, semi-automatic-only pistol-calibre carbines based on submachine guns were sold for civilian personal protection as Land Defence Pistols (LDPs). Known examples were the Bell & White 84, BHS Rhogun, Cobra Mk1, GM-16, Kommando LDP, Northwood R-76, Paramax, Sanna 77 and TS III.
Personal defense weapons
First developed during the 1980s, personal defense weapons (PDWs) were created in response to a NATO request for a replacement for 9×19mm Parabellum submachine guns. PDWs are compact automatic weapons that are sufficiently light to be issued to non-combat arms or support troops, particularly those in vehicles, while being capable of greater range and terminal ballistics than a handgun. As a result of these characteristics, most PDWs can be used as close quarters battle weapons for special forces and counter-terrorist groups.
Introduced in 1991, the FN P90 features an unusual appearance, having a 50-round magazine housed horizontally above the barrel, an integrated reflex sight and fully ambidextrous controls. A simple blowback automatic weapon, it was designed to fire the proprietary FN 5.7×28mm cartridge which can penetrate soft body armor. The FN P90 was designed to have a length no greater than an average-sized man's shoulder width, to allow it to be easily carried and maneuvered in tight spaces, such as the inside of an infantry fighting vehicle. The FN P90 is currently in service with military and police forces in over 40 nations.
| Technology | Firearms | null |
28492 | https://en.wikipedia.org/wiki/Squirrel | Squirrel | Squirrels are members of the family Sciuridae (), a family that includes small or medium-sized rodents. The squirrel family includes tree squirrels, ground squirrels (including chipmunks and prairie dogs, among others), and flying squirrels. Squirrels are indigenous to the Americas, Eurasia, and Africa, and were introduced by humans to Australia. The earliest known fossilized squirrels date from the Eocene epoch, and among other living rodent families, the squirrels are most closely related to the mountain beaver and dormice.
Etymology
The word squirrel, first attested in 1327, comes from the Anglo-Norman which is from the Old French , the reflex of a Latin word , which was taken from the Ancient Greek word (; from ) 'shadow-tailed', referring to the long bushy tail which many of its members have. Sciurus is also the name of one of its genera.
The native Old English word for the squirrel, , only survived into Middle English (as ) before being replaced. The Old English word is of Common Germanic origin, cognates of which are still used in other Germanic languages, including the German (diminutive of , which is not as frequently used); the Norwegian /; the Dutch ; the Swedish and the Danish .
A group of squirrels is called a "dray" or a "scurry".
Characteristics
Squirrels are generally small animals, ranging in size from the African pygmy squirrel and least pygmy squirrel at in total length and just in weight, to the Bhutan giant flying squirrel at up to in total length, and several marmot species, which can weigh or more. Squirrels typically have slender bodies with very long very bushy tails and large eyes. In general, their fur is soft and silky, though much thicker in some species than others. The coat color of squirrels is highly variable between—and often even within—species.
In most squirrel species, the hind limbs are longer than the forelimbs, while all species have either four or five toes on each foot. The feet, which include an often poorly developed thumb, have soft pads on the undersides and versatile, sturdy claws for grasping and climbing. Tree squirrels, unlike most mammals, can descend a tree headfirst. They do so by rotating their ankles 180 degrees, enabling the hind feet to point backward and thus grip the tree bark from the opposite direction.
Head
As their large eyes indicate, squirrels have excellent vision, which is especially important for the tree-dwelling species. Many also have a good sense of touch, with vibrissae on their limbs as well as their heads.
The teeth of sciurids follow the typical rodent pattern, with large incisors (for gnawing) that grow throughout life, and cheek teeth (for grinding) that are set back behind a wide gap, or diastema. The typical dental formula for sciurids is .
Tail
The purposes of squirrels' tails include:
To keep rain, wind, or cold off itself.
To cool off when hot, by pumping more blood through its tail.
As a counterbalance when jumping about in trees.
As a parachute when jumping.
To signal with.
The hairs from squirrel tails are prized in fly fishing for tying flies; squirrel hair is very fine, which makes it well suited to the purpose.
When the squirrel sits upright, its tail folded up its back may stop predators looking from behind from seeing the characteristic shape of a small mammal.
Lifetime
Squirrels live in almost every habitat, from tropical rainforest to semiarid desert, avoiding only the high polar regions and the driest of deserts. They are predominantly herbivorous, subsisting on seeds and nuts, but many will eat insects and even small vertebrates.
Many juvenile squirrels die in the first year of life. Adult squirrels can have a lifespan of 5 to 10 years in the wild; some can survive 10 to 20 years in captivity. Premature death may occur when a nest falls from the tree, in which case the mother may abandon her young if their body temperature is not correct. Many such baby squirrels have been rescued and fostered by professional wildlife rehabilitators until they could be safely returned to the wild, although the density of squirrel populations in many places and the constant care required by premature squirrels mean that few rehabilitators are willing to spend their time doing this, and such animals are routinely euthanized instead.
Behavior
Squirrels mate either once or twice a year and, following a gestation period of three to six weeks, give birth to a number of offspring that varies by species. The young are altricial, being born naked, toothless, and blind. In most species of squirrel, the female alone looks after the young, which are weaned at six to ten weeks and become sexually mature by the end of their first year. In general, the ground-dwelling squirrel species are social, often living in well-developed colonies, while the tree-dwelling species are more solitary.
Ground squirrels and tree squirrels are usually either diurnal or crepuscular, while the flying squirrels tend to be nocturnal—except for lactating flying squirrels and their young, which have a period of diurnality during the summer.
During hot periods, squirrels have been documented to sploot, or lie with their stomachs flat against cool surfaces.
Squirrels, like other rodents, employ species-specific strategies to store food, buffering against periods of scarcity. In temperate regions, squirrels commonly cache nuts beneath leaf litter, inside hollow trees, or underground. However, in subtropical and humid environments, traditional caching can lead to mold growth, decomposition, or premature germination. To counteract these challenges, some squirrels, particularly in subtropical zones, hang nuts or mushrooms on tree branches. This behavior, believed to minimize fungal infections and reduce the risk of food loss, also inadvertently aids certain trees, like Cyclobalanopsis, in expanding their range, with forgotten or dislodged nuts sprouting in new locations, influencing forest ecology. Two species of flying squirrel, the particolored flying squirrel and the Hainan flying squirrel, aid such caching by carving grooves into the nuts to fix them tightly between small intersecting twigs, akin to the mortise-and-tenon joint in carpentry.
Feeding
Because squirrels cannot digest cellulose, they must rely on foods rich in protein, carbohydrates, and fats. In temperate regions, early spring is the hardest time of year for squirrels because the nuts they buried are beginning to sprout (and thus are no longer available to eat), while many of the usual food sources are not yet available. During these times, squirrels rely heavily on tree buds. Squirrels, being primarily herbivores, eat a wide variety of plants, as well as nuts, seeds, conifer cones, fruits, fungi, and green vegetation. Some squirrels, however, also consume meat, especially when faced with hunger. Squirrels have been known to eat small birds, young snakes, and smaller rodents, as well as bird eggs and insects. Some tropical squirrel species have shifted almost entirely to a diet of insects.
Squirrels, like pigeons and other fauna, are synanthropes, in that they benefit and thrive from their interaction with human environments. This gradual process of successful interaction is called synurbanization, wherein squirrels lose their inherent fear of humans in an urban environment. After squirrels were almost completely eradicated in New York during the Industrial Revolution, they were re-introduced to "entertain and remind" humans of nature. The squirrel blended into the urban environment so efficiently that when synanthropic behavior stops (i.e. people do not leave trash outside during particularly cold winters), squirrels can become aggressive in their search for food.
Aggression and predatory behavior have been observed in various species of ground squirrels, in particular the thirteen-lined ground squirrel. For example, Bernard Bailey, a scientist in the 1920s, observed a thirteen-lined ground squirrel preying upon a young chicken. Wistrand reported seeing this same species eating a freshly killed snake. There has also been at least one report of squirrels preying on atypical animals, such as an incident in 2005 where a pack of black squirrels killed and ate a large stray dog in Lazo, Russia. Squirrel attacks on humans are exceedingly rare, but do occur.
Whitaker examined the stomachs of 139 thirteen-lined ground squirrels and found bird flesh in four of the specimens and the remains of a short-tailed shrew in one; Bradley, examining the stomachs of white-tailed antelope squirrels, found at least 10% of his 609 specimens' stomachs contained some type of vertebrate, mostly lizards and rodents. Morgart observed a white-tailed antelope squirrel capturing and eating a silky pocket mouse.
Taxonomy
The living squirrels are divided into five subfamilies, with about 58 genera and some 285 species. The oldest squirrel fossil, Hesperopetes, dates back to the Chadronian (late Eocene, about 40–35 million years ago) and is similar to modern flying squirrels.
A variety of fossil squirrels, from the latest Eocene to the Miocene, have not been assigned with certainty to any living lineage. At least some of these probably were variants of the oldest basal "protosquirrels" (in the sense that they lacked the full range of living squirrels' autapomorphies). The distribution and diversity of such ancient and ancestral forms suggest the squirrels as a group may have originated in North America.
Apart from these sometimes little-known fossil forms, the phylogeny of the living squirrels is fairly straightforward. The three main lineages are the Ratufinae (Oriental giant squirrels), Sciurillinae and all other subfamilies. The Ratufinae contain a mere handful of living species in tropical Asia. The neotropical pygmy squirrel of tropical South America is the sole living member of the Sciurillinae. The third lineage, by far the largest, has a near-cosmopolitan distribution. This further supports the hypothesis that the common ancestor of all squirrels, living and fossil, lived in North America, as these three most ancient lineages seem to have radiated from there; if squirrels had originated in Eurasia, for example, one would expect quite ancient lineages in Africa, but African squirrels seem to be of more recent origin.
The main group of squirrels can be split into five subfamilies: the Callosciurinae, 60 species mostly found in South East Asia; the Ratufinae, 4 cat-sized species found in south and southeast Asia; the Sciurinae, which contains the flying squirrels (Pteromyini) and the tree squirrels, 83 species found worldwide; Sciurillinae, a single South American species; and Xerinae, which includes three tribes of mostly terrestrial squirrels, including the Marmotini (marmots, chipmunks, prairie dogs, and other Holarctic ground squirrels), Xerini (African and some Eurasian ground squirrels), and Protoxerini (African tree squirrels).
Taxonomy list
Basal and incertae sedis Sciuridae (all fossil)
Hesperopetes
Kherem
Lagrivea
Oligosciurus
Plesiosciurus
Prospermophilus
Sciurion
Similisciurus
Sinotamias
Vulcanisciurus
Subfamily Cedromurinae (fossil)
Subfamily Ratufinae – Oriental giant squirrels (1 genus, 4 species)
Subfamily Sciurillinae – neotropical pygmy squirrel (monotypic)
Subfamily Sciurinae
Tribe Sciurini – tree squirrels (5 genera, about 38 species)
Tribe Pteromyini – true flying squirrels (15 genera, about 45 species)
Subfamily Callosciurinae – Asian ornate squirrels
Tribe Callosciurini (13 genera, nearly 60 species)
Tribe Funambulini – palm squirrels (1 genus, 5 species)
Subfamily Xerinae – terrestrial squirrels
Tribe Xerini – spiny squirrels (3 genera, 6 species)
Tribe Protoxerini (6 genera, about 50 species)
Tribe Marmotini – ground squirrels, marmots, chipmunks, prairie dogs, etc. (6 genera, about 90 species)
Relationship with humans
| Biology and health sciences | Rodents | null |
28524 | https://en.wikipedia.org/wiki/RNA%20splicing | RNA splicing | RNA splicing is a process in molecular biology where a newly-made precursor messenger RNA (pre-mRNA) transcript is transformed into a mature messenger RNA (mRNA). It works by removing all the introns (non-coding regions of RNA) and splicing back together exons (coding regions). For nuclear-encoded genes, splicing occurs in the nucleus either during or immediately after transcription. For those eukaryotic genes that contain introns, splicing is usually needed to create an mRNA molecule that can be translated into protein. For many eukaryotic introns, splicing occurs in a series of reactions which are catalyzed by the spliceosome, a complex of small nuclear ribonucleoproteins (snRNPs). There exist self-splicing introns, that is, ribozymes that can catalyze their own excision from their parent RNA molecule. The process of transcription, splicing and translation is called gene expression, the central dogma of molecular biology.
Splicing pathways
Several methods of RNA splicing occur in nature; the type of splicing depends on the structure of the spliced intron and the catalysts required for splicing to occur.
Spliceosomal complex
Introns
The word intron is derived from the terms intragenic region, and intracistron, that is, a segment of DNA that is located between two exons of a gene. The term intron refers to both the DNA sequence within a gene and the corresponding sequence in the unprocessed RNA transcript. As part of the RNA processing pathway, introns are removed by RNA splicing either shortly after or concurrent with transcription. Introns are found in the genes of most organisms and many viruses. They can be located in a wide range of genes, including those that generate proteins, ribosomal RNA (rRNA), and transfer RNA (tRNA).
Within introns, a donor site (5' end of the intron), a branch site (near the 3' end of the intron) and an acceptor site (3' end of the intron) are required for splicing. The splice donor site includes an almost invariant sequence GU at the 5' end of the intron, within a larger, less highly conserved region. The splice acceptor site at the 3' end of the intron terminates the intron with an almost invariant AG sequence. Upstream (5'-ward) from the AG there is a region high in pyrimidines (C and U), or polypyrimidine tract. Further upstream from the polypyrimidine tract is the branchpoint, which includes an adenine nucleotide involved in lariat formation. The consensus sequence for an intron (in IUPAC nucleic acid notation) is: G-G-[cut]-G-U-R-A-G-U (donor site) ... intron sequence ... Y-U-R-A-C (branch sequence 20-50 nucleotides upstream of acceptor site) ... Y-rich-N-C-A-G-[cut]-G (acceptor site). However, it is noted that the specific sequence of intronic splicing elements and the number of nucleotides between the branchpoint and the nearest 3' acceptor site affect splice site selection. Also, point mutations in the underlying DNA or errors during transcription can activate a cryptic splice site in part of the transcript that usually is not spliced. This results in a mature messenger RNA with a missing section of an exon. In this way, a point mutation, which might otherwise affect only a single amino acid, can manifest as a deletion or truncation in the final protein.
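To make the consensus concrete, the following short Python sketch scans an RNA string for the canonical GU...AG motif pairs described above. It is purely illustrative, not a real splice-site predictor; the regular expressions, length limits and toy sequence are assumptions invented for the demonstration, and actual splice-site selection also depends on the branchpoint, the polypyrimidine tract and surrounding regulatory elements.

```python
import re

# Illustrative only: find canonical GU...AG intron candidates using the
# donor (GU-R-A-G-U) and acceptor (Y-A-G, followed by an exonic G) motifs
# described above. IUPAC shorthand: R = purine (A/G), Y = pyrimidine (C/U).
DONOR = re.compile(r"GU[AG]AGU")       # motif at the start of the intron
ACCEPTOR = re.compile(r"[CU]AG(?=G)")  # intron ends ...YAG, next exon starts with G

def candidate_introns(pre_mrna, min_len=20, max_len=10000):
    """Yield (start, end) pairs such that pre_mrna[start:end] begins with a
    donor-like GU motif and ends with an acceptor-like AG."""
    donor_starts = [m.start() for m in DONOR.finditer(pre_mrna)]
    acceptor_ends = [m.end() for m in ACCEPTOR.finditer(pre_mrna)]
    for d in donor_starts:
        for a in acceptor_ends:
            if min_len <= a - d <= max_len:
                yield d, a

# Invented toy pre-mRNA: exon | GUAAGU ... branch-like CUAAC ... pyrimidines ... CAG | exon
toy = "GAG" + "GUAAGU" + "UUUUUUUUUUUU" + "CUAAC" + "UUUUUUUUUUCU" + "CAG" + "GAAA"
for start, end in candidate_introns(toy):
    print(start, end, toy[start:end])
```

As the paragraph above notes, such a naive motif scan vastly over-predicts splice sites; it is shown only to illustrate the donor/acceptor grammar of an intron.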
Formation and activity
Splicing is catalyzed by the spliceosome, a large RNA-protein complex composed of five small nuclear ribonucleoproteins (snRNPs). Assembly and activity of the spliceosome occurs during transcription of the pre-mRNA. The RNA components of snRNPs interact with the intron and are involved in catalysis. Two types of spliceosomes have been identified (major and minor) which contain different snRNPs.
The major spliceosome splices introns containing GU at the 5' splice site and AG at the 3' splice site. It is composed of the U1, U2, U4, U5, and U6 snRNPs and is active in the nucleus. In addition, a number of proteins including U2 small nuclear RNA auxiliary factor 1 (U2AF35), U2AF2 (U2AF65) and SF1 are required for the assembly of the spliceosome. The spliceosome forms different complexes during the splicing process:
Complex E
The U1 snRNP binds to the GU sequence at the 5' splice site of an intron;
Splicing factor 1 binds to the intron branch point sequence;
U2AF1 binds at the 3' splice site of the intron;
U2AF2 binds to the polypyrimidine tract;
Complex A (pre-spliceosome)
The U2 snRNP displaces SF1 and binds to the branch point sequence and ATP is hydrolyzed;
Complex B (pre-catalytic spliceosome)
The U5/U4/U6 snRNP trimer binds, and the U5 snRNP binds exons at the 5' site, with U6 binding to U2;
Complex B*
The U1 snRNP is released, U5 shifts from exon to intron, and the U6 binds at the 5' splice site;
Complex C (catalytic spliceosome)
U4 is released, U6/U2 catalyzes transesterification, ligating the 5' end of the intron to the branchpoint adenosine within the intron to form a lariat; U5 binds the exon at the 3' splice site, and the 5' site is cleaved;
Complex C* (post-spliceosomal complex)
U2/U5/U6 remain bound to the lariat, and the 3' site is cleaved and exons are ligated using ATP hydrolysis. The spliced RNA is released, the lariat is released and degraded, and the snRNPs are recycled.
This type of splicing is termed canonical splicing or termed the lariat pathway, which accounts for more than 99% of splicing. By contrast, when the intronic flanking sequences do not follow the GU-AG rule, noncanonical splicing is said to occur (see "minor spliceosome" below).
The minor spliceosome is very similar to the major spliceosome, but instead it splices out rare introns with different splice site sequences. While the minor and major spliceosomes contain the same U5 snRNP, the minor spliceosome has different but functionally analogous snRNPs for U1, U2, U4, and U6, which are respectively called U11, U12, U4atac, and U6atac.
Recursive splicing
In most cases, splicing removes introns as single units from precursor mRNA transcripts. However, in some cases, especially in mRNAs with very long introns, splicing happens in steps, with part of an intron removed and then the remaining intron spliced out in a following step. This was first found in the Ultrabithorax (Ubx) gene of the fruit fly, Drosophila melanogaster, and in a few other Drosophila genes, but cases in humans have been reported as well.
Trans-splicing
Trans-splicing is a form of splicing that removes introns or outrons, and joins two exons that are not within the same RNA transcript. Trans-splicing can occur between two different endogenous pre-mRNAs or between an endogenous and an exogenous (such as from viruses) or artificial RNAs.
Self-splicing
Self-splicing occurs for rare introns that form a ribozyme, performing the functions of the spliceosome by RNA alone. There are three kinds of self-splicing introns, Group I, Group II and Group III. Group I and II introns perform splicing similar to the spliceosome without requiring any protein. This similarity suggests that Group I and II introns may be evolutionarily related to the spliceosome. Self-splicing may also be very ancient, and may have existed in an RNA world present before protein.
Two transesterifications characterize the mechanism in which group I introns are spliced:
3'OH of a free guanine nucleoside (or one located in the intron) or a nucleotide cofactor (GMP, GDP, GTP) attacks phosphate at the 5' splice site.
3'OH of the 5' exon becomes a nucleophile and the second transesterification results in the joining of the two exons.
The mechanism by which group II introns are spliced (two transesterification reactions, like group I introns) is as follows:
The 2'OH of a specific adenosine in the intron attacks the 5' splice site, thereby forming the lariat.
The 3'OH of the 5' exon triggers the second transesterification at the 3' splice site, thereby joining the exons together.
tRNA splicing
tRNA (also tRNA-like) splicing is another rare form of splicing that usually occurs in tRNA. The splicing reaction involves a different biochemistry than the spliceosomal and self-splicing pathways.
In the yeast Saccharomyces cerevisiae, a yeast tRNA splicing endonuclease heterotetramer, composed of TSEN54, TSEN2, TSEN34, and TSEN15, cleaves pre-tRNA at two sites in the acceptor loop to form a 5'-half tRNA, terminating at a 2',3'-cyclic phosphodiester group, and a 3'-half tRNA, terminating at a 5'-hydroxyl group, along with a discarded intron. Yeast tRNA kinase then phosphorylates the 5'-hydroxyl group using adenosine triphosphate. Yeast tRNA cyclic phosphodiesterase cleaves the cyclic phosphodiester group to form a 2'-phosphorylated 3' end. Yeast tRNA ligase adds an adenosine monophosphate group to the 5' end of the 3'-half and joins the two halves together. NAD-dependent 2'-phosphotransferase then removes the 2'-phosphate group.
Evolution
Splicing occurs in all the kingdoms or domains of life; however, the extent and types of splicing can be very different between the major divisions. Eukaryotes splice many protein-coding messenger RNAs and some non-coding RNAs. Prokaryotes, on the other hand, splice rarely and mostly non-coding RNAs. Another important difference between these two groups of organisms is that prokaryotes completely lack the spliceosomal pathway.
Because spliceosomal introns are not conserved in all species, there is debate concerning when spliceosomal splicing evolved. Two models have been proposed: the intron late and intron early models (see intron evolution).
Biochemical mechanism
Spliceosomal splicing and self-splicing involve a two-step biochemical process. Both steps involve transesterification reactions that occur between RNA nucleotides. tRNA splicing, however, is an exception and does not occur by transesterification.
Spliceosomal and self-splicing transesterification reactions occur via two sequential transesterification reactions. First, the 2'OH of a specific branchpoint nucleotide within the intron, defined during spliceosome assembly, performs a nucleophilic attack on the first nucleotide of the intron at the 5' splice site, forming the lariat intermediate. Second, the 3'OH of the released 5' exon then performs a nucleophilic attack at the first nucleotide following the last nucleotide of the intron at the 3' splice site, thus joining the exons and releasing the intron lariat.
Alternative splicing
In many cases, the splicing process can create a range of unique proteins by varying the exon composition of the same mRNA. This phenomenon is then called alternative splicing. Alternative splicing can occur in many ways. Exons can be extended or skipped, or introns can be retained. It is estimated that 95% of transcripts from multiexon genes undergo alternative splicing, some instances of which occur in a tissue-specific manner and/or under specific cellular conditions. Development of high throughput mRNA sequencing technology can help quantify the expression levels of alternatively spliced isoforms. Differential expression levels across tissues and cell lineages allowed computational approaches to be developed to predict the functions of these isoforms.
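As an illustration of the kind of quantification that sequencing-based approaches perform, a widely used summary statistic for a cassette exon is the "percent spliced in" (PSI): the fraction of transcripts that include the exon. The sketch below is a minimal illustration; the read counts are hypothetical, and the simple averaging of the two inclusion junctions is just one convention, whereas real pipelines add read filtering, length normalisation and statistical modelling.

```python
def percent_spliced_in(inclusion_reads, exclusion_reads):
    """Percent spliced in (PSI) for a cassette exon.

    inclusion_reads: read counts for the two junctions that include the exon
                     (upstream exon -> cassette exon, cassette exon -> downstream exon)
    exclusion_reads: read count for the single junction that skips the exon
    Returns a value in [0, 1], or None if there is no junction evidence.
    """
    # Average the two inclusion junctions so that inclusion and exclusion
    # are each counted roughly once per transcript molecule.
    inclusion = sum(inclusion_reads) / len(inclusion_reads)
    total = inclusion + exclusion_reads
    return None if total == 0 else inclusion / total

# Hypothetical counts for the same exon in two conditions (not from the article):
print(round(percent_spliced_in([120, 110], 15), 2))  # 0.88 -> exon mostly included
print(round(percent_spliced_in([8, 10], 95), 2))     # 0.09 -> exon mostly skipped
```

Comparing PSI values across tissues or conditions is one simple way the isoform-level differences mentioned above can be expressed numerically.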
Given this complexity, alternative splicing of pre-mRNA transcripts is regulated by a system of trans-acting proteins (activators and repressors) that bind to cis-acting sites or "elements" (enhancers and silencers) on the pre-mRNA transcript itself. These proteins and their respective binding elements promote or reduce the usage of a particular splice site. The binding specificity comes from the sequence and structure of the cis-elements; e.g. in HIV-1 there are many donor and acceptor splice sites. Among the various splice sites, ssA7, which is the 3' acceptor site, folds into three stem-loop structures: the intronic splicing silencer (ISS), exonic splicing enhancer (ESE), and exonic splicing silencer (ESS3). The solution structure of the intronic splicing silencer and its interaction with the host protein hnRNPA1 give insight into specific recognition. However, adding to the complexity of alternative splicing, the effects of regulatory factors are often position-dependent. For example, a splicing factor that serves as a splicing activator when bound to an intronic enhancer element may serve as a repressor when bound to its splicing element in the context of an exon, and vice versa. In addition to the position-dependent effects of enhancer and silencer elements, the location of the branchpoint (i.e., distance upstream of the nearest 3' acceptor site) also affects splicing. The secondary structure of the pre-mRNA transcript also plays a role in regulating splicing, such as by bringing together splicing elements or by masking a sequence that would otherwise serve as a binding element for a splicing factor.
Role of nuclear speckles in RNA splicing
Pre-mRNA splicing occurs throughout the nucleus, and once mature mRNA is generated, it is transported to the cytoplasm for translation. In both plant and animal cells, nuclear speckles are regions with high concentrations of splicing factors. These speckles were once thought to be mere storage centers for splicing factors. However, it is now understood that nuclear speckles help concentrate splicing factors near genes that are physically located close to them. Genes located farther from speckles can still be transcribed and spliced, but their splicing is less efficient compared to those closer to speckles. Cells can vary the genomic positions of genes relative to nuclear speckles as a mechanism to modulate the expression of genes via splicing.
Role of splicing/alternative splicing in HIV-integration
The process of splicing is linked with HIV integration, as HIV-1 targets highly spliced genes.
Splicing response to DNA damage
DNA damage affects splicing factors by altering their post-translational modification, localization, expression and activity. Furthermore, DNA damage often disrupts splicing by interfering with its coupling to transcription. DNA damage also has an impact on the splicing and alternative splicing of genes intimately associated with DNA repair. For instance, DNA damage modulates the alternative splicing of the DNA repair genes Brca1 and Ercc1.
Experimental manipulation of splicing
Splicing events can be experimentally altered by binding steric-blocking antisense oligos, such as Morpholinos or Peptide nucleic acids to snRNP binding sites, to the branchpoint nucleotide that closes the lariat, or to splice-regulatory element binding sites.
The use of antisense oligonucleotides to modulate splicing has shown great promise as a therapeutic strategy for a variety of genetic diseases caused by splicing defects.
Recent studies have shown that RNA splicing can be regulated by a variety of epigenetic modifications, including DNA methylation and histone modifications.
Splicing errors and variation
It has been suggested that one third of all disease-causing mutations impact on splicing. Common errors include:
Mutation of a splice site resulting in loss of function of that site. Results in exposure of a premature stop codon, loss of an exon, or inclusion of an intron.
Mutation of a splice site reducing specificity. May result in variation in the splice location, causing insertion or deletion of amino acids, or most likely, a disruption of the reading frame.
Displacement of a splice site, leading to inclusion or exclusion of more RNA than expected, resulting in longer or shorter exons.
Although many splicing errors are safeguarded by a cellular quality control mechanism termed nonsense-mediated mRNA decay (NMD), a number of splicing-related diseases also exist, as suggested above.
Allelic differences in mRNA splicing are likely to be a common and important source of phenotypic diversity at the molecular level, in addition to their contribution to genetic disease susceptibility. Indeed, genome-wide studies in humans have identified a range of genes that are subject to allele-specific splicing.
In plants, variation for flooding stress tolerance correlated with stress-induced alternative splicing of transcripts associated with gluconeogenesis and other processes.
Protein splicing
In addition to RNA, proteins can undergo splicing. Although the biomolecular mechanisms are different, the principle is the same: parts of the protein, called inteins instead of introns, are removed. The remaining parts, called exteins instead of exons, are fused together.
Protein splicing has been observed in a wide range of organisms, including bacteria, archaea, plants, yeast and humans.
Splicing and genesis of circRNAs
The existence of backsplicing was first suggested in 2012. This backsplicing explains the genesis of circular RNAs resulting from the exact junction between the 3' boundary of an exon and the 5' boundary of an exon located upstream. In these exonic circular RNAs, the junction is a classic 3'-5' link.
The exclusion of intronic sequences during splicing can also leave traces, in the form of circular RNAs. In some cases, the intronic lariat is not destroyed and the circular part remains as a lariat-derived circRNA. In these lariat-derived circular RNAs, the junction is a 2'-5' link.
| Biology and health sciences | Molecular biology | Biology |
28538 | https://en.wikipedia.org/wiki/Solar%20wind | Solar wind | The solar wind is a stream of charged particles released from the Sun's outermost atmospheric layer, the corona. This plasma mostly consists of electrons, protons and alpha particles with kinetic energy between . The composition of the solar wind plasma also includes a mixture of particle species found in the solar plasma: trace amounts of heavy ions and atomic nuclei of elements such as carbon, nitrogen, oxygen, neon, magnesium, silicon, sulfur, and iron. There are also rarer traces of some other nuclei and isotopes such as phosphorus, titanium, chromium, and nickel's isotopes 58Ni, 60Ni, and 62Ni. Superimposed with the solar-wind plasma is the interplanetary magnetic field. The solar wind varies in density, temperature and speed over time and over solar latitude and longitude. Its particles can escape the Sun's gravity because of their high energy resulting from the high temperature of the corona, which in turn is a result of the coronal magnetic field. The boundary separating the corona from the solar wind is called the Alfvén surface.
At a distance of more than a few solar radii from the Sun, the solar wind reaches speeds of and is supersonic, meaning it moves faster than the speed of fast magnetosonic waves. The flow of the solar wind is no longer supersonic at the termination shock. Other related phenomena include the aurora (northern and southern lights), comet tails that always point away from the Sun, and geomagnetic storms that can change the direction of magnetic field lines.
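For reference, the fast magnetosonic speed referred to here can be written in standard ideal-MHD notation; this is textbook background rather than material taken from the article above.

```latex
% Fast magnetosonic speed in ideal MHD (standard textbook form):
% c_s = sound speed, v_A = Alfven speed,
% \theta = angle between the wavevector and the magnetic field.
v_{\mathrm{fast}}^{2} = \tfrac{1}{2}\left[(c_s^{2} + v_A^{2})
  + \sqrt{(c_s^{2} + v_A^{2})^{2} - 4\,c_s^{2} v_A^{2}\cos^{2}\theta}\,\right]
```

Since the fast magnetosonic speed is at least as large as both the sound speed and the Alfvén speed, a flow that exceeds it is supersonic and super-Alfvénic as well.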
History
Observations from Earth
The existence of particles flowing outward from the Sun to the Earth was first suggested by British astronomer Richard C. Carrington. In 1859, Carrington and Richard Hodgson independently made the first observations of what would later be called a solar flare. This is a sudden, localised increase in brightness on the solar disc, which is now known to often occur in conjunction with an episodic ejection of material and magnetic flux from the Sun's atmosphere, known as a coronal mass ejection. The following day, a powerful geomagnetic storm was observed, and Carrington suspected that there might be a connection; the geomagnetic storm is now attributed to the arrival of the coronal mass ejection in near-Earth space and its subsequent interaction with the Earth's magnetosphere. Irish academic George FitzGerald later suggested that matter was being regularly accelerated away from the Sun, reaching the Earth after several days.
In 1910, British astrophysicist Arthur Eddington essentially suggested the existence of the solar wind, without naming it, in a footnote to an article on Comet Morehouse. Eddington's proposition was never fully embraced, even though he had also made a similar suggestion at a Royal Institution address the previous year, in which he had postulated that the ejected material consisted of electrons, whereas in his study of Comet Morehouse he had supposed them to be ions.
The idea that the ejected material consisted of both ions and electrons was first suggested by Norwegian scientist Kristian Birkeland. His geomagnetic surveys showed that auroral activity was almost uninterrupted. As these displays and other geomagnetic activity were being produced by particles from the Sun, he concluded that the Earth was being continually bombarded by "rays of electric corpuscles emitted by the Sun". He proposed in 1916 that, "From a physical point of view it is most probable that solar rays are neither exclusively negative nor positive rays, but of both kinds"; in other words, the solar wind consists of both negative electrons and positive ions. Three years later, in 1919, British physicist Frederick Lindemann also suggested that the Sun ejects particles of both polarities: protons as well as electrons.
By the 1930s, scientists had concluded that the temperature of the solar corona must be a million degrees Celsius because of the way it extended into space (as seen during a total solar eclipse). Later spectroscopic work confirmed this extraordinary temperature. In the mid-1950s, British mathematician Sydney Chapman calculated the properties of a gas at such a temperature and determined that, because the corona is such a superb conductor of heat, it must extend far out into space, beyond the orbit of Earth. Also in the 1950s, German astronomer Ludwig Biermann became interested in the fact that the tail of a comet always points away from the Sun, regardless of the direction in which the comet is travelling. Biermann postulated that this happens because the Sun emits a steady stream of particles that pushes the comet's tail away. German astronomer Paul Ahnert is credited (by Wilfried Schröder) as being the first to relate the solar wind to the direction of a comet's tail, based on observations of the comet Whipple–Fedke (1942g).
American astrophysicist Eugene Parker realised that heat flowing from the Sun in Chapman's model, and the comet tail blowing away from the Sun in Biermann's hypothesis, had to be the result of the same phenomenon which he termed the "solar wind". In 1957, Parker showed that although the Sun's corona is strongly attracted by solar gravity, it is such a good conductor of heat that it is still very hot at large distances from the Sun. As solar gravity weakens with increasing distance from the Sun, the outer coronal atmosphere is able to escape supersonically into interstellar space. Parker was also the first person to notice that the weakening influence of the Sun's gravity has the same effect on hydrodynamic flow as a de Laval nozzle, inciting a transition from subsonic to supersonic flow. There was strong opposition to Parker's hypothesis on the solar wind; the paper he submitted to The Astrophysical Journal in 1958 was rejected by two reviewers, before being accepted by the editor Subrahmanyan Chandrasekhar.
Observations from space
In January 1959, the Soviet spacecraft Luna 1 first directly observed the solar wind and measured its strength, using hemispherical ion traps. The discovery was verified by Luna 2, Luna 3, and the more distant measurements of Venera 1. Three years later, a similar measurement was performed by American geophysicist Marcia Neugebauer and collaborators using the Mariner 2 spacecraft.
The first numerical simulation of the solar wind in the solar corona, including closed and open field lines, was performed by Pneuman and Kopp in 1971. The magnetohydrodynamics equations in steady state were solved iteratively starting with an initial dipolar configuration.
In 1990, the Ulysses probe was launched to study the solar wind from high solar latitudes. All prior observations had been made at or near the Solar System's ecliptic plane.
In the late 1990s, the Ultraviolet Coronal Spectrometer (UVCS) instrument on board the SOHO spacecraft observed the acceleration region of the fast solar wind emanating from the poles of the Sun and found that the wind accelerates much faster than can be accounted for by thermodynamic expansion alone. Parker's model predicted that the wind should make the transition to supersonic flow at an altitude of about four solar radii (approx. 3,000,000 km) from the photosphere (surface); but the transition (or "sonic point") now appears to be much lower, perhaps only one solar radius (approx. 700,000 km) above the photosphere, suggesting that some additional mechanism accelerates the solar wind away from the Sun. The acceleration of the fast wind is still not understood and cannot be fully explained by Parker's theory. However, the gravitational and electromagnetic explanation for this acceleration is detailed in an earlier paper by 1970 Nobel laureate in Physics, Hannes Alfvén.
From May 10 to May 12, 1999, NASA's Advanced Composition Explorer (ACE) and WIND spacecraft observed a 98% decrease of solar wind density. This allowed energetic electrons from the Sun to flow to Earth in narrow beams known as "strahl", which caused a highly unusual "polar rain" event, in which a visible aurora appeared over the North Pole. In addition, Earth's magnetosphere increased to between 5 and 6 times its normal size.
The STEREO mission was launched in 2006 to study coronal mass ejections and the solar corona, using stereoscopy from two widely separated imaging systems. Each STEREO spacecraft carried two heliospheric imagers: highly sensitive wide-field cameras capable of imaging the solar wind itself, via Thomson scattering of sunlight off of free electrons. Movies from STEREO revealed the solar wind near the ecliptic, as a large-scale turbulent flow.
On December 13, 2010, Voyager 1 determined that the velocity of the solar wind, at its location from Earth had slowed to zero. "We have gotten to the point where the wind from the Sun, which until now has always had an outward motion, is no longer moving outward; it is only moving sideways so that it can end up going down the tail of the heliosphere, which is a comet-shaped-like object", said Voyager project scientist Edward Stone.
In 2018, NASA launched the Parker Solar Probe, named in honor of American astrophysicist Eugene Parker, on a mission to study the structure and dynamics of the solar corona, in an attempt to understand the mechanisms that cause particles to be heated and accelerated as solar wind. During its seven-year mission, the probe will make twenty-four orbits of the Sun, passing further into the corona with each orbit's perihelion, ultimately passing within 0.04 astronomical units of the Sun's surface. It is the first NASA spacecraft named for a living person, and Parker, at age 91, was on hand to observe the launch.
Acceleration mechanism
While early models of the solar wind relied primarily on thermal energy to accelerate the material, by the 1960s it was clear that thermal acceleration alone cannot account for the high speed of solar wind. An additional unknown acceleration mechanism is required and likely relates to magnetic fields in the solar atmosphere.
The Sun's corona, or extended outer layer, is a region of plasma that is heated to over a megakelvin. As a result of thermal collisions, the particles within the inner corona have a range and distribution of speeds described by a Maxwellian distribution. The mean velocity of these particles is about , which is well below the solar escape velocity of . However, a few of the particles achieve energies sufficient to reach the terminal velocity of , which allows them to feed the solar wind. At the same temperature, electrons, due to their much smaller mass, reach escape velocity and build up an electric field that further accelerates ions away from the Sun.
The total number of particles carried away from the Sun by the solar wind is about per second. Thus, the total mass loss each year is about solar masses, or about 1.3–1.9 million tonnes per second. This is equivalent to losing a mass equal to the Earth every 150 million years. However, since the Sun's formation, only about 0.01% of its initial mass has been lost through the solar wind. Other stars have much stronger stellar winds that result in significantly higher mass-loss rates.
Jetlets
In March 2023, solar extreme-ultraviolet observations showed that small-scale magnetic reconnection could be a driver of the solar wind, in the form of a swarm of nanoflares producing omnipresent jetting activity (known as jetlets) that generates short-lived streams of hot plasma and Alfvén waves at the base of the solar corona. This activity could also be connected to the magnetic switchback phenomenon of the solar wind.
Properties and structure
Fast and slow solar wind
The solar wind is observed to exist in two fundamental states, termed the slow solar wind and the fast solar wind, though their differences extend well beyond their speeds. In near-Earth space, the slow solar wind is observed to have a velocity of , a temperature of ~ and a composition that is a close match to the corona. By contrast, the fast solar wind has a typical velocity of , a temperature of and it nearly matches the composition of the Sun's photosphere. The slow solar wind is twice as dense and more variable in nature than the fast solar wind.
The slow solar wind appears to originate from a region around the Sun's equatorial belt that is known as the "streamer belt", where coronal streamers are produced by magnetic flux open to the heliosphere draping over closed magnetic loops. The exact coronal structures involved in slow solar wind formation and the method by which the material is released is still under debate. Observations of the Sun between 1996 and 2001 showed that emission of the slow solar wind occurred at latitudes up to 30–35° during the solar minimum (the period of lowest solar activity), then expanded toward the poles as the solar cycle approached maximum. At solar maximum, the poles were also emitting a slow solar wind.
The fast solar wind originates from coronal holes, which are funnel-like regions of open field lines in the Sun's magnetic field. Such open lines are particularly prevalent around the Sun's magnetic poles. The plasma source is small magnetic fields created by convection cells in the solar atmosphere. These fields confine the plasma and transport it into the narrow necks of the coronal funnels, which are located only 20,000 km above the photosphere. The plasma is released into the funnel when these magnetic field lines reconnect.
Velocity and density
Near the Earth's orbit at 1 astronomical unit (AU) the plasma flows at speeds ranging from 250 to 750 km/s, with a density ranging between 3 and 10 particles per cubic centimeter and a temperature ranging from 10⁴ to 10⁶ kelvin.
On average, the plasma density decreases with the square of the distance from the Sun, while the velocity decreases and flattens out at 1 AU.
Voyager 1 and Voyager 2 reported plasma density n between 0.001 and 0.005 particles/cm³ at distances of 80 to 120 AU, increasing rapidly beyond 120 AU at the heliopause to between 0.05 and 0.2 particles/cm³.
Pressure
At , the wind exerts a pressure typically in the range of (), although it can readily vary outside that range.
The ram pressure is a function of wind speed and density. The formula is
P = mp n V² = 1.6726×10⁻⁶ n V²
where mp is the proton mass, pressure P is in nPa (nanopascals), n is the density in particles/cm³ and V is the speed in km/s of the solar wind.
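To make the unit bookkeeping concrete, here is a minimal sketch (not from the article) of the same relation in Python; the function name and the example values of n and V are illustrative assumptions, chosen to be typical of the slow wind near 1 AU.

```python
# Minimal sketch of the ram-pressure relation P = mp * n * V^2.
# Assumes n in particles/cm^3 and V in km/s, returning P in nanopascals,
# consistent with the constant 1.6726e-6 quoted above. Example values are
# illustrative only.

PROTON_MASS_KG = 1.6726e-27

def ram_pressure_npa(n_per_cm3, v_km_s):
    """Solar-wind ram pressure in nPa for density n (cm^-3) and speed V (km/s)."""
    n_per_m3 = n_per_cm3 * 1e6      # particles/cm^3 -> particles/m^3
    v_m_s = v_km_s * 1e3            # km/s -> m/s
    p_pa = PROTON_MASS_KG * n_per_m3 * v_m_s ** 2
    return p_pa * 1e9               # Pa -> nPa

# Typical slow wind near 1 AU: n ~ 7 cm^-3, V ~ 400 km/s -> about 1.9 nPa
print(round(ram_pressure_npa(7, 400), 2))
```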
Coronal mass ejection
Both the fast and slow solar wind can be interrupted by large, fast-moving bursts of plasma called coronal mass ejections, or CMEs. CMEs are caused by a release of magnetic energy at the Sun. CMEs are often called "solar storms" or "space storms" in the popular media. They are sometimes, but not always, associated with solar flares, which are another manifestation of magnetic energy release at the Sun. CMEs cause shock waves in the thin plasma of the heliosphere, launching electromagnetic waves and accelerating particles (mostly protons and electrons) to form showers of ionizing radiation that precede the CME.
When a CME impacts the Earth's magnetosphere, it temporarily deforms the Earth's magnetic field, changing the direction of compass needles and inducing large electrical ground currents in Earth itself; this is called a geomagnetic storm and it is a global phenomenon. CME impacts can induce magnetic reconnection in Earth's magnetotail (the midnight side of the magnetosphere); this launches protons and electrons downward toward Earth's atmosphere, where they form the aurora.
CMEs are not the only cause of space weather. Different patches on the Sun are known to give rise to slightly different speeds and densities of wind depending on local conditions. In isolation, each of these different wind streams would form a spiral with a slightly different angle, with fast-moving streams moving out more directly and slow-moving streams wrapping more around the Sun. Fast-moving streams tend to overtake slower streams that originate westward of them on the Sun, forming turbulent co-rotating interaction regions that give rise to wave motions and accelerated particles, and that affect Earth's magnetosphere in the same way as, but more gently than, CMEs.
CMEs have a complex internal structure, with a highly turbulent region of hot and compressed plasma (known as the sheath) preceding the arrival of a relatively cold and strongly magnetized plasma region (known as the magnetic cloud or ejecta). The sheath and ejecta have very different impacts on the Earth's magnetosphere and on various space weather phenomena, such as the behavior of the Van Allen radiation belts.
Magnetic switchbacks
Magnetic switchbacks are sudden reversals in the magnetic field of the solar wind. They can also be described as traveling disturbances in the solar wind that cause the magnetic field to bend back on itself. They were first observed by the NASA–ESA mission Ulysses, the first spacecraft to fly over the Sun's poles. The Parker Solar Probe first observed switchbacks in 2018.
Solar System effects
Over the Sun's lifetime, the interaction of its surface layers with the escaping solar wind has significantly decreased its surface rotation rate. The wind is considered responsible for comets' tails, along with the Sun's radiation. The solar wind contributes to fluctuations in celestial radio waves observed on the Earth, through an effect called interplanetary scintillation.
Magnetospheres
Where the solar wind intersects with a planet that has a well-developed magnetic field (such as Earth, Jupiter or Saturn), the particles are deflected by the Lorentz force. This region, known as the magnetosphere, causes the particles to travel around the planet rather than bombarding the atmosphere or surface. The magnetosphere is roughly shaped like a hemisphere on the side facing the Sun, then is drawn out in a long wake on the opposite side. The boundary of this region is called the magnetopause, and some of the particles are able to penetrate the magnetosphere through this region by partial reconnection of the magnetic field lines.
The solar wind is responsible for the overall shape of Earth's magnetosphere. Fluctuations in its speed, density, direction, and entrained magnetic field strongly affect Earth's local space environment. For example, the levels of ionizing radiation and radio interference can vary by factors of hundreds to thousands; and the shape and location of the magnetopause and bow shock wave upstream of it can change by several Earth radii, exposing geosynchronous satellites to the direct solar wind. These phenomena are collectively called space weather.
A study based on the European Space Agency's Cluster mission proposes that it is easier for the solar wind to infiltrate the magnetosphere than previously believed. A group of scientists directly observed the existence of certain waves in the solar wind that were not expected. The study shows that these waves enable incoming charged particles of the solar wind to breach the magnetopause, suggesting that the magnetic bubble acts more as a filter than as a continuous barrier. The discovery relied on the distinctive arrangement of the four identical Cluster spacecraft, which fly in a controlled configuration through near-Earth space. As they sweep from the magnetosphere into interplanetary space and back again, the fleet provides exceptional three-dimensional insights into the phenomena that connect the Sun to Earth.
The research characterised variations in the formation of the interplanetary magnetic field (IMF), largely influenced by the Kelvin–Helmholtz instability (which occurs at the interface of two fluids), as a result of differences in thickness and numerous other characteristics of the boundary layer. Experts believe this was the first time that the appearance of Kelvin–Helmholtz waves at the magnetopause had been demonstrated for a high-latitude, downward orientation of the IMF. These waves are being seen in unforeseen places under solar wind conditions that were formerly believed to be unfavourable for their generation. These discoveries show how Earth's magnetosphere can be penetrated by solar particles under specific IMF circumstances. The findings are also relevant to studies of magnetospheric processes around other planetary bodies. This study suggests that Kelvin–Helmholtz waves can be a fairly common, and possibly constant, mechanism for the entry of solar wind into terrestrial magnetospheres under various IMF orientations.
Atmospheres
The solar wind affects incoming cosmic rays interacting with planetary atmospheres. Moreover, planets with a weak or non-existent magnetosphere are subject to atmospheric stripping by the solar wind.
Venus, the nearest and most similar planet to Earth, has an atmosphere 100 times denser than Earth's, with little or no geomagnetic field. Space probes have discovered a comet-like tail that extends to Earth's orbit.
Earth itself is largely protected from the solar wind by its magnetic field, which deflects most of the charged particles; however, some of the charged particles are trapped in the Van Allen radiation belt. A smaller number of particles from the solar wind manage to travel, as though on an electromagnetic energy transmission line, to the Earth's upper atmosphere and ionosphere in the auroral zones. The only time the solar wind is observable on the Earth is when it is strong enough to produce phenomena such as the aurora and geomagnetic storms. Bright auroras strongly heat the ionosphere, causing its plasma to expand into the magnetosphere, increasing the size of the plasma geosphere and injecting atmospheric matter into the solar wind. Geomagnetic storms result when the pressure of plasmas contained inside the magnetosphere is sufficiently large to inflate and thereby distort the geomagnetic field.
Although Mars is larger than Mercury and four times farther from the Sun, it is thought that the solar wind has stripped away up to a third of its original atmosphere, leaving a layer 1/100 as dense as the Earth's. It is believed the mechanism for this atmospheric stripping is gas caught in bubbles of the magnetic field, which are ripped off by the solar wind. In 2015 the NASA Mars Atmosphere and Volatile Evolution (MAVEN) mission measured the rate of atmospheric stripping caused by the magnetic field carried by the solar wind as it flows past Mars, which generates an electric field, much as a turbine on Earth can be used to generate electricity. This electric field accelerates electrically charged gas atoms, called ions, in Mars's upper atmosphere and shoots them into space. The MAVEN mission measured the rate of atmospheric stripping at about 100 grams (≈1/4 lb) per second.
Moons and planetary surfaces
Mercury, the nearest planet to the Sun, bears the full brunt of the solar wind, and since its atmosphere is vestigial and transient, its surface is bathed in radiation.
Mercury has an intrinsic magnetic field, so under normal solar wind conditions, the solar wind cannot penetrate its magnetosphere and particles only reach the surface in the cusp regions. During coronal mass ejections, however, the magnetopause may get pressed into the surface of the planet, and under these conditions, the solar wind may interact freely with the planetary surface.
The Earth's Moon has no atmosphere or intrinsic magnetic field, and consequently its surface is bombarded with the full solar wind. The Project Apollo missions deployed passive aluminum collectors in an attempt to sample the solar wind, and lunar soil returned for study confirmed that the lunar regolith is enriched in atomic nuclei deposited from the solar wind. These elements may prove useful resources for future lunar expeditions.
Limits
Alfvén surface
The Alfvén surface is the boundary separating the corona from the solar wind, defined as the surface where the coronal plasma's Alfvén speed and the large-scale solar wind speed are equal.
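As an aside, the Alfvén speed itself follows the standard magnetohydrodynamic expression v_A = B / √(μ₀ρ). The short sketch below (not from the article) compares it with the bulk wind speed using illustrative near-Earth values, where the wind is already strongly super-Alfvénic; the numbers are assumptions for demonstration only.

```python
# Sketch: Alfven speed v_A = B / sqrt(mu0 * rho) versus bulk wind speed.
# Inside the Alfven surface v_A exceeds the wind speed; outside it the wind
# is super-Alfvenic. Values below are illustrative near-Earth figures.

import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m
PROTON_MASS_KG = 1.6726e-27

def alfven_speed_km_s(b_tesla, n_per_cm3):
    """Alfven speed in km/s for magnetic field B (T) and proton density n (cm^-3)."""
    rho = n_per_cm3 * 1e6 * PROTON_MASS_KG      # mass density, kg/m^3
    return b_tesla / math.sqrt(MU0 * rho) / 1e3

# Illustrative near-Earth values: B ~ 5 nT, n ~ 7 cm^-3, wind ~ 400 km/s
v_a = alfven_speed_km_s(5e-9, 7)
print(round(v_a), "km/s Alfven speed vs ~400 km/s wind -> super-Alfvenic")
```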
Researchers were unsure exactly where the Alfvén critical surface of the Sun lay. Based on remote images of the corona, estimates had put it somewhere between 10 and 20 solar radii from the surface of the Sun. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it penetrated the Alfvén surface.
Outer limits
The solar wind "blows a bubble" in the interstellar medium (the rarefied hydrogen and helium gas that permeates the galaxy). The point where the solar wind's strength is no longer great enough to push back the interstellar medium is known as the heliopause and is often considered to be the outer border of the Solar System. The distance to the heliopause is not precisely known and probably depends on the current velocity of the solar wind and the local density of the interstellar medium, but it is far outside Pluto's orbit. Scientists hope to gain perspective on the heliopause from data acquired through the Interstellar Boundary Explorer (IBEX) mission, launched in October 2008.
The heliopause is noted as one of the ways of defining the extent of the Solar System, along with the Kuiper Belt and the radius at which the Sun's gravitational influence is matched by other stars. The maximum extent of that influence has been estimated at between 50,000 AU and 2 light-years, compared to the heliopause (the outer boundary of the heliosphere), which has been detected at about 120 AU by the Voyager 1 spacecraft.
The Voyager 2 spacecraft crossed the termination shock more than five times between August 30 and December 10, 2007. Voyager 2 crossed the shock about a Tm closer to the Sun than the 13.5 Tm distance where Voyager 1 came upon the termination shock. The spacecraft moved outward through the termination shock into the heliosheath and onward toward the interstellar medium.
| Physical sciences | Stellar astronomy | null |
28580 | https://en.wikipedia.org/wiki/Springbok | Springbok | The springbok or springbuck (Antidorcas marsupialis) is an antelope found mainly in south and southwest Africa. The sole member of the genus Antidorcas, this bovid was first described by the German zoologist Eberhard August Wilhelm von Zimmermann in 1780. Three subspecies are identified. A slender, long-legged antelope, the springbok reaches at the shoulder and weighs between . Both sexes have a pair of black, long horns that curve backwards. The springbok is characterised by a white face, a dark stripe running from the eyes to the mouth, a light brown coat marked by a reddish-brown stripe that runs from the upper foreleg to the buttocks across the flanks like the Thomson's gazelle, and a white rump flap.
Active mainly at dawn and dusk, springbok form harems (mixed-sex herds). In earlier times, springbok of the Kalahari Desert and Karoo migrated in large numbers across the countryside, a practice known as trekbokking. A feature, peculiar but not unique, to the springbok is pronking, in which the springbok performs multiple leaps into the air, up to above the ground, in a stiff-legged posture, with the back bowed and the white flap lifted. Primarily a browser, the springbok feeds on shrubs and succulents; this antelope can live without drinking water for years, meeting its requirements through eating succulent vegetation. Breeding takes place year-round, and peaks in the rainy season, when forage is most abundant. A single calf is born after a five- to six-month-long pregnancy; weaning occurs at nearly six months of age, and the calf leaves its mother a few months later.
Springbok inhabit the dry areas of south and southwestern Africa. The International Union for Conservation of Nature and Natural Resources classifies the springbok as a least concern species. No major threats to the long-term survival of the species are known; the springbok, in fact, is one of the few antelope species considered to have an expanding population. They are popular game animals, and are valued for their meat and skin. The springbok is the national animal of South Africa.
Etymology
The common name "springbok", first recorded in 1775, comes from the Afrikaans words ("jump") and ("antelope" or "goat"). The scientific name of the springbok is Antidorcas marsupialis. is Greek for "opposite", and for "gazelle" – identifying the animal as not a gazelle. The specific epithet marsupialis comes from the Latin ("pocket"), and refers to a pocket-like skin flap which extends along the midline of the back from the tail, which distinguishes the springbok from true gazelles.
Taxonomy and evolution
The springbok, in the family Bovidae, was first described by the German zoologist Eberhard August Wilhelm von Zimmermann in 1780, who assigned the genus Antilope (blackbuck) to the springbok. In 1845, Swedish zoologist Carl Jakob Sundevall placed the springbok as the sole living member of the genus Antidorcas.
Subspecies
Three subspecies of Antidorcas marsupialis are recognised:
A. m. angolensis (Blaine, 1922) – Occurs in Benguela and Moçâmedes (southwestern Angola).
A. m. hofmeyri (Thomas, 1926) – Occurs in Berseba and Great Namaqualand (southwestern Africa). Its range lies north of the Orange River, stretching from Upington and Sandfontein through Botswana to Namibia.
A. m. marsupialis (Zimmermann, 1780) – Its range lies south of the Orange River, extending from the northeastern Cape of Good Hope to the Free State and Kimberley.
Evolution
Fossil springbok are known from the Pliocene; the antelope appears to have evolved about three million years ago from a gazelle-like ancestor. Three fossil species of Antidorcas have been identified, in addition to the extant form, and appear to have been widespread across Africa. Two of these, A. bondi and A. australis, became extinct around 7,000 years ago (early Holocene). The third species, A. recki, probably gave rise to the extant form A. marsupialis during the Pleistocene, about 100,000 years ago. Fossils have been reported from Pliocene, Pleistocene, and Holocene sites in northern, southern, and eastern Africa. Fossils dating back to 80 and 100 thousand years ago have been excavated at Herolds Bay Cave (Western Cape Province, South Africa) and Florisbad (Free State), respectively.
Description
The springbok is a slender antelope with long legs and neck. Both sexes reach at the shoulder with a head-and-body length typically between . The weights for both sexes range between . The tail, long, ends in a short, black tuft. Major differences in the size and weight of the subspecies are seen. A study tabulated average body measurements for the three subspecies. A. m. angolensis males stand tall at the shoulder, while females are tall. The males weigh around , while the females weigh . A. m. hofmeyri is the largest subspecies; males are nearly tall, and the notably shorter females are tall. The males, weighing , are heavier than females, that weigh . However, A. m. marsupialis is the smallest subspecies; males are tall and females tall. Average weight of males is , while for females it is . Another study showed a strong correlation between the availability of winter dietary protein and the body mass.
Dark stripes extend across the white face, from the corner of the eyes to the mouth. A dark patch marks the forehead. In juveniles, the stripes and the patch are light brown. The ears, narrow and pointed, measure . Typically light brown, the springbok has a dark reddish-brown band running horizontally from the upper foreleg to the edge of the buttocks, separating the dark back from the white underbelly. The tail (except the terminal black tuft), buttocks, the insides of the legs and the rump are all white. Two other varieties – pure black and pure white forms – are artificially selected in some South African ranches. Though born with a deep black sheen, adult black springbok are two shades of chocolate-brown and develop a white marking on the face as they mature. White springbok, as the name suggests, are predominantly white with a light tan stripe on the flanks.
The three subspecies also differ in their colour. A. m. angolensis has a brown to tawny coat, with thick, dark brown stripes on the face extending two-thirds down to the snout. While the lateral stripe is nearly black, the stripe on the rump is dark brown. The medium brown forehead patch extends to eye level and is separated from the bright white face by a dark brown border. A brown spot is seen on the nose. A. m. hofmeyri is a light fawn, with thin, dark brown face stripes. The stripes on the flanks are dark brown to black, and the posterior stripes are moderately brown. The forehead patch, dark brown or fawn, extends beyond the level of the eyes and mixes with the white of the face without any clear barriers. The nose may have a pale smudge. A. m. marsupialis is a rich chestnut brown, with thin, light face stripes. The stripe near the rump is well-marked, and that on the flanks is deep brown. The forehead is brown, fawn, or white, the patch not extending beyond the eyes and having no sharp boundaries. The nose is white or marked with brown.
The skin along the middle of the dorsal side is folded in, and covered with white hair erected by arrector pili muscles (located between hair follicles). This white hair is almost fully concealed by the surrounding brown hairs until the fold opens up, and this is a major feature distinguishing this antelope from gazelles. Springbok differ from gazelles in several other ways; for instance, springbok have two premolars on both sides of either jaw, rather than the three observed in gazelles. This gives a total of 28 teeth in the springbok, rather than 32 of gazelles. Other points of difference include a longer, broader, and rigid bridge to the nose and more muscular cheeks in springbok, and differences in the structure of the horns.
Both sexes have black horns, about long, that are straight at the base and then curve backward. In A. m. marsupialis, females have thinner horns than males; the horns of females are only 60 to 70% as long as those of males. Horns have a girth of at the base; this thins to towards the tip. In the other two subspecies, horns of both sexes are nearly similar. The spoor, narrow and sharp, is long.
Ecology and behaviour
Springbok are mainly active around dawn and dusk. Activity is influenced by weather; springbok can feed at night in hot weather, and at midday in colder months. They rest in the shade of trees or bushes, and often bed down in the open when weather is cooler.
The social structure of the springbok is similar to that of Thomson's gazelle. Mixed-sex herds or harems have a roughly 3:1 sex ratio; bachelor individuals are also observed. In the mating season, males generally form herds and wander in search of mates. Females live with their offspring in herds that very rarely include dominant males. Territorial males round up female herds that enter their territories and keep out the bachelors; mothers and juveniles may gather in nursery herds separate from harem and bachelor herds. After weaning, female juveniles stay with their mothers until the birth of their next calves, while males join bachelor groups.
A study of vigilance behaviour of herds revealed that individuals on the borders of herds tend to be more cautious, and vigilance decreases with group size. Group size and distance from roads and bushes were found to have major influence on vigilance, more among the grazing springbok than among their browsing counterparts. Adults were found to be more vigilant than juveniles, and males more vigilant than females. Springbok passing through bushes tend to be more vulnerable to predator attacks as they cannot be easily alerted, and predators usually conceal themselves in bushes. Another study calculated that the time spent in vigilance by springbok on the edges of herds is roughly double that spent by those in the centre and the open. Springbok were found to be more cautious in the late morning than at dawn or in the afternoon, and more at night than in the daytime. Rates and methods of vigilance were found to vary with the aim of lowering risk from predators.
During the rut, males establish territories, ranging from , which they mark by urinating and depositing large piles of dung. Males in neighbouring territories frequently fight for access to females, which they do by twisting and levering at each other with their horns, interspersed with stabbing attacks. Females roam the territories of different males. Outside of the rut, mixed-sex herds can range from as few as three to as many as 180 individuals, while all-male bachelor herds are of typically no more than 50 individuals. Harem and nursery herds are much smaller, typically including no more than 10 individuals.
In earlier times, when large populations of springbok roamed the Kalahari Desert and Karoo, millions of migrating springbok formed herds hundreds of kilometres long that could take several days to pass a town. These mass treks, known as trekbokking in Afrikaans, took place during long periods of drought. Herds could efficiently retrace their paths to their territories after long migrations. Trekbokking is still observed occasionally in Botswana, though on a much smaller scale than earlier.
Springbok often go into bouts of repeated high leaps of up to into the air – a practice known as pronking (derived from the Afrikaans pronk, "to show off") or stotting. In pronking, the springbok performs multiple leaps into the air in a stiff-legged posture, with the back bowed and the white flap lifted. When the male shows off his strength to attract a mate, or to ward off predators, he starts off in a stiff-legged trot, leaping into the air with an arched back every few paces and lifting the flap along his back. Lifting the flap causes the long white hairs under the tail to stand up in a conspicuous fan shape, which in turn emits a strong scent of sweat. Although the exact cause of this behaviour is unknown, springbok exhibit this activity when they are nervous or otherwise excited. The most accepted theory for pronking is that it is a method to raise alarm against a potential predator or confuse it, or to get a better view of a concealed predator; it may also be used for display.
Springbok are very fast antelopes, clocked at . They generally tend to be ignored by carnivores unless they are breeding. Cheetahs, lions, leopards, spotted hyenas, wild dogs, caracals, crocodiles and pythons are major predators of the springbok. Southern African wildcats, black-backed jackals, Verreaux's Eagles, martial eagles, and tawny eagles target juveniles. Springbok are generally quiet animals, though they may make occasional low-pitched bellows as a greeting and high-pitched snorts when alarmed.
Parasites
A 2012 study on the effects of rainfall patterns and parasite infections on the body of the springbok in Etosha National Park observed that males and juveniles were in better health toward the end of the rainy season. The health of females was more affected by parasites than by rainfall; parasite counts in females peaked prior to and immediately after parturition. Studies show that springbok host helminths (Haemonchus, Longistrongylus and Trichostrongylus species), ixodid ticks (Rhipicephalus species), and lice (Damalinia and Linognathus species). Eimeria species mainly affect juveniles.
Diet
Springbok are primarily browsers and may switch to grazing occasionally; they feed on shrubs and young succulents (such as Lampranthus species) before they lignify. They prefer grasses such as Themeda triandra. Springbok can meet their water needs from the food they eat, and are able to survive without drinking water through the dry season. In extreme cases, they do not drink any water over the course of their lives. Springbok may accomplish this by selecting flowers, seeds, and leaves of shrubs before dawn, when the food items are most succulent. In places such as Etosha National Park, springbok seek out water bodies where they are available. Springbok gather in the wet season and disperse during the dry season, unlike other African mammals.
Reproduction
Springbok mate year-round, though females are more likely to enter oestrus during the rainy season, when food is more plentiful. Females are able to conceive as early as six to seven months of age, whereas males do not attain sexual maturity until two years; the rut lasts 5 to 21 days. When a female approaches a rutting male, the male holds his head and tail level with the ground, lowers his horns, and makes a loud grunting noise to attract her. The male then urinates and sniffs the female's perineum. If the female is receptive, she urinates as well, and the male makes a flehmen gesture and taps his leg until the female leaves or permits him to mate. Copulation consists of a single pelvic thrust.
Gestation lasts five to six months, after which a single calf (or rarely twins) is born. Most births take place in the spring (October to November), prior to the onset of the rainy season. The infant weighs . The female keeps her calf hidden in cover while she is away. Mother and calf rejoin the herd about three to four weeks after parturition; the young are weaned at five or six months. When the mother gives birth again, the previous offspring, now 6 to 12 months old, deserts her to join herds of adult springbok. Thus, a female can calve twice a year, and even thrice if one calf dies. Springbok live for up to 10 years in the wild.
Distribution and habitat
Springbok inhabit the dry areas of south and southwestern Africa. Their range extends from northwestern South Africa through the Kalahari Desert into Namibia and Botswana. The Transvaal marks the eastern limit of the range, from where it extends westward to the Atlantic and northward to southern Angola and Botswana. In Botswana, they mostly occur in the Kalahari Desert in the southwestern and central parts of the country. They are widespread across Namibia and the vast grasslands of the Free State and the shrublands of Karoo in South Africa; however, they are confined to the Namib Desert in Angola.
The historic range of the springbok stretched across the dry grasslands, bushlands, and shrublands of south-western and southern Africa; springbok migrated sporadically in southern parts of the range. These migrations are rarely seen nowadays, but seasonal congregations can still be observed in preferred areas of short vegetation, such as the Kalahari Desert.
Threats and conservation
The springbok has been classified as least concern on the IUCN Red List. No major threats to the long-term survival of the species are known. The springbok is one of the few antelope species with a positive population trend.
Springbok occur in several protected areas across their range: Makgadikgadi and Nxai National Park (Botswana); Kgalagadi Transfrontier Park between Botswana and South Africa; Etosha National Park and Namib-Naukluft Park (Namibia); Mokala and Karoo National Parks and a number of provincial reserves in South Africa. In 1999, Rod East of the IUCN SSC Antelope Specialist Group estimated the springbok population in South Africa at more than 670,000, noting that it might be an underestimate. However, estimates for Namibia, Angola, Botswana, Transvaal, Karoo, and the Free State (which gave a total population estimate of nearly 2,000,000 – 2,500,000 animals in southern Africa), were in complete disagreement with East's estimate. Springbok are under active management in several private lands. Small populations have been introduced into private lands and provincial areas of KwaZulu-Natal.
Relationship with humans
Springbok are hunted as game throughout Namibia, Botswana, and South Africa because of their attractive coats; they are common hunting targets due to their large numbers and the ease with which they can be supported on farmlands. The export of springbok skins, mainly from Namibia and South Africa, is a booming industry; these skins serve as taxidermy models. The meat is a prized fare, and is readily available in South African supermarkets. As of 2011, the springbok, the gemsbok, and the greater kudu collectively account for around two-thirds of the game meat production from Namibian farmlands; nearly of the springbok meat is exported as mechanically deboned meat to overseas markets.
The latissimus dorsi muscle of the springbok comprises 1.1–1.3% ash, 1.3–3.5% fat, 72–75% moisture and 18–22% protein. Stearic acid is the main fatty acid, accounting for 24–27% of the fatty acids. The cholesterol content varies from per of meat. The pH of the meat increases if the springbok is under stress or cropping is done improperly; consequently, the quality deteriorates and the colour darkens. The meat might be adversely affected if the animal is killed by shooting. The meat may be consumed raw or used in prepared dishes. Biltong can be prepared by preserving the raw meat with vinegar, spices, and table salt, without fermentation, followed by drying. Springbok meat may also be used in preparing salami; a study found that the flavour of this salami is better than mutton salami, and feels oilier than salami of beef, horse meat, or mutton.
The springbok has been a national symbol of South Africa since the era of white minority rule in the 20th century. It was adopted as a nickname or mascot by several South African sports teams, most famously by the national rugby union team. The winged springbok also served as the logo of South African Airways from 1934 to 1997. The springbok is the national animal of South Africa. Even after the end of apartheid, Nelson Mandela intervened to keep the springbok name for the national rugby team as a gesture of reconciliation with rugby fans, the majority of whom were white. The springbok is featured on the reverse of the South African Krugerrand coin.
The cap badge of The Royal Canadian Dragoons has featured a springbok since 1913, a reference to the unit's involvement in the Second Boer War.
| Biology and health sciences | Bovidae | Animals |
28603 | https://en.wikipedia.org/wiki/Star%20cluster | Star cluster | Star clusters are large groups of stars held together by self-gravitation. Two main types of star clusters can be distinguished. Globular clusters are tight groups of ten thousand to millions of old stars which are gravitationally bound. Open clusters are more loosely clustered groups of stars, generally containing fewer than a few hundred members, that are often very young. As they move through the galaxy, over time, open clusters become disrupted by the gravitational influence of giant molecular clouds. Even though they are no longer gravitationally bound, they will continue to move in broadly the same direction through space and are then known as stellar associations, sometimes referred to as moving groups.
Star clusters visible to the naked eye include the Pleiades, Hyades, and 47 Tucanae.
Open cluster
Open clusters are very different from globular clusters. Unlike the spherically distributed globulars, they are confined to the galactic plane, and are almost always found within spiral arms. They are generally young objects, up to a few tens of millions of years old, with a few rare exceptions as old as a few billion years, such as Messier 67 (the closest and most observed old open cluster). They form in H II regions such as the Orion Nebula.
Open clusters typically have a few hundred members and are located in an area up to 30 light-years across. Being much less densely populated than globular clusters, they are much less tightly gravitationally bound, and over time, are disrupted by the gravity of giant molecular clouds and other clusters. Close encounters between cluster members can also result in the ejection of stars, a process known as "evaporation".
The most prominent open clusters are the Pleiades and Hyades in Taurus. The Double Cluster of h+Chi Persei can also be prominent under dark skies. Open clusters are often dominated by hot young blue stars, because although such stars are short-lived in stellar terms, only lasting a few tens of millions of years, open clusters tend to have dispersed before these stars die.
A subset of open clusters constitute a binary or aggregate cluster. New research indicates Messier 25 may constitute a ternary star cluster together with NGC 6716 and Collinder 394.
Establishing precise distances to open clusters enables the calibration of the period-luminosity relationship shown by Cepheid variable stars, which are then used as standard candles. Cepheids are luminous and can be used to establish both the distances to remote galaxies and the expansion rate of the Universe (the Hubble constant). Indeed, the open cluster NGC 7790 hosts three classical Cepheids which are critical for such efforts.
Embedded cluster
Embedded clusters are groups of very young stars that are partially or fully encased in interstellar dust or gas which is often impervious to optical observations. Embedded clusters form in molecular clouds, when the clouds begin to collapse and form stars. There is often ongoing star formation in these clusters, so embedded clusters may be home to various types of young stellar objects, including protostars and pre-main-sequence stars. An example of an embedded cluster is the Trapezium Cluster in the Orion Nebula; another lies in the core region (L1688) of the ρ Ophiuchi cloud.
The embedded cluster phase may last for several million years, after which gas in the cloud is depleted by star formation or dispersed through radiation pressure, stellar winds and outflows, or supernova explosions. In general less than 30% of cloud mass is converted to stars before the cloud is dispersed, but this fraction may be higher in particularly dense parts of the cloud. With the loss of mass in the cloud, the energy of the system is altered, often leading to the disruption of a star cluster. Most young embedded clusters disperse shortly after the end of star formation.
The open clusters found in the Galaxy are former embedded clusters that were able to survive early cluster evolution. However, nearly all freely floating stars, including the Sun, were originally born into embedded clusters that disintegrated.
Globular cluster
Globular clusters are roughly spherical groupings of from 10 thousand to several million stars packed into regions of from 10 to 30 light-years across. They commonly consist of very old Population II stars – just a few hundred million years younger than the universe itself – which are mostly yellow and red, with masses less than two solar masses. Such stars predominate within clusters because hotter and more massive stars have exploded as supernovae, or evolved through planetary nebula phases to end as white dwarfs. Yet a few rare blue stars exist in globulars, thought to be formed by stellar mergers in their dense inner regions; these stars are known as blue stragglers.
In the Milky Way galaxy, globular clusters are distributed roughly spherically in the galactic halo, around the Galactic Center, orbiting the center in highly elliptical orbits. In 1917, the astronomer Harlow Shapley made the first respectable estimate of the Sun's distance from the Galactic Center, based on the distribution of globular clusters.
Until the mid-1990s, globular clusters were the cause of a great mystery in astronomy, as theories of stellar evolution gave ages for the oldest members of globular clusters that were greater than the estimated age of the universe. However, greatly improved distance measurements to globular clusters using the Hipparcos satellite and increasingly accurate measurements of the Hubble constant resolved the paradox, giving an age for the universe of about 13 billion years and an age for the oldest stars of a few hundred million years less.
Our Galaxy has about 150 globular clusters, some of which may be the captured cores of small galaxies whose outer stars were previously stripped away by the tides of the Milky Way, as seems to be the case for the globular cluster M79. Some galaxies are much richer in globulars than the Milky Way: the giant elliptical galaxy M87 contains over a thousand.
A few of the brightest globular clusters are visible to the naked eye; the brightest, Omega Centauri, was observed in antiquity and catalogued as a star, before the telescopic age. The brightest globular cluster in the northern hemisphere is M13 in the constellation of Hercules.
Super star cluster
Super star clusters are very large regions of recent star formation, and are thought to be the precursors of globular clusters. Examples include Westerlund 1 in the Milky Way.
Intermediate forms
In 2005, astronomers discovered a new type of star cluster in the Andromeda Galaxy, which is, in several ways, very similar to globular clusters although less dense. No such clusters (also known as extended globular clusters) are known in the Milky Way. The three discovered in the Andromeda Galaxy are M31WFS C1, M31WFS C2, and M31WFS C3.
These new-found star clusters contain hundreds of thousands of stars, a similar number to globular clusters. The clusters also share other characteristics with globular clusters, e.g. the stellar populations and metallicity. What distinguishes them from the globular clusters is that they are much larger – several hundred light-years across – and hundreds of times less dense. The distances between the stars are thus much greater. The clusters have properties intermediate between globular clusters and dwarf spheroidal galaxies.
How these clusters are formed is not yet known, but their formation might well be related to that of globular clusters. Why M31 has such clusters, while the Milky Way does not, is not yet known. It is also unknown whether any other galaxy contains this kind of cluster, but it would be very unlikely that M31 is the sole galaxy with extended clusters.
Another type of cluster is the faint fuzzies, which so far have only been found in lenticular galaxies like NGC 1023 and NGC 3384. They are characterized by their large size compared to globular clusters and a ringlike distribution around the centres of their host galaxies. Like globular clusters, they seem to be old objects.
Astronomical significance
Star clusters are important in many areas of astronomy, because almost all the stars in old clusters were born at roughly the same time. Various properties of all the stars in a cluster are then a function only of mass, so stellar evolution theories rely on observations of open and globular clusters. This is primarily true for old globular clusters. In the case of young (age < 1 Gyr) and intermediate-age (1 Gyr < age < 5 Gyr) clusters, factors such as age, mass, and chemical composition may also play vital roles. Based on their ages, star clusters can reveal a lot of information about their host galaxies. For example, star clusters residing in the Magellanic Clouds can provide essential information about the formation of these dwarf galaxies, which in turn can help us understand many astrophysical processes happening in our own Milky Way Galaxy. These clusters, especially the young ones, can illuminate the star formation processes that may have occurred in the Milky Way.
Clusters are also a crucial step in determining the distance scale of the universe. A few of the nearest clusters are close enough for their distances to be measured using parallax. A Hertzsprung–Russell diagram can be plotted for these clusters with absolute values known on the luminosity axis. Then, when a similar diagram is plotted for a cluster whose distance is not known, the position of the main sequence can be compared to that of the first cluster and the distance estimated. This process is known as main-sequence fitting. Reddening and stellar populations must be accounted for when using this method.
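The arithmetic behind main-sequence fitting is essentially the distance modulus: if the target cluster's main sequence appears Δm magnitudes fainter than that of a calibrated reference cluster at the same colour, the target is farther by a factor of 10^(Δm/5). The sketch below is an illustration with made-up numbers, not a calculation from the article.

```python
# Illustrative sketch of main-sequence fitting. If the target cluster's main
# sequence lies delta_m magnitudes below the reference cluster's at the same
# colour, the target distance is d_ref * 10**(delta_m / 5).
# The reference distance and offset below are hypothetical example values.

def distance_from_ms_fit(d_ref_pc, delta_m):
    """Distance (pc) of a cluster from a reference distance and magnitude offset."""
    return d_ref_pc * 10 ** (delta_m / 5.0)

# Example: reference cluster at 46 pc (parallax-calibrated); the target
# cluster's main sequence sits 6.0 magnitudes fainter at the same colour.
print(round(distance_from_ms_fit(46.0, 6.0)))  # about 729 pc
```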
Nearly all stars in the Galactic field, including the Sun, were initially born in regions with embedded clusters that disintegrated. This means that properties of stars and planetary systems may have been affected by early clustered environments. This appears to be the case for our own Solar System, in which chemical abundances point to the effects of a supernova from a nearby star early in our Solar System's history.
Star cloud
Technically not star clusters, star clouds are large groups of many stars within a galaxy, spread over very many light-years of space. Often they contain star clusters within them. The stars appear closely packed, but are not usually part of any structure. Within the Milky Way, star clouds show through gaps between dust clouds of the Great Rift, allowing deeper views along our particular line of sight. Star clouds have also been identified in other nearby galaxies. Examples of star clouds include the Large Sagittarius Star Cloud, Small Sagittarius Star Cloud, Scutum Star Cloud, Cygnus Star Cloud, Norma Star Cloud, and NGC 206 in the Andromeda Galaxy.
Nomenclature
In 1979, the International Astronomical Union's 17th general assembly recommended that newly discovered star clusters, open or globular, within the Galaxy have designations following the convention "Chhmm±ddd", always beginning with the prefix C, where h, m, and d represent the approximate coordinates of the cluster centre in hours and minutes of right ascension, and degrees of declination, respectively, with leading zeros. The designation, once assigned, is not to change, even if subsequent measurements improve on the location of the cluster centre. The first of such designations were assigned by Gosta Lynga in 1982.
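As a rough illustration, a designation string of the form described above could be assembled as follows; how the right ascension and declination are rounded or truncated is an assumption made here for illustration, not something specified in this text.

```python
# Sketch: assembling an IAU-style cluster designation "Chhmm+/-ddd" from
# approximate centre coordinates. The rounding/truncation convention used
# here is an assumption for illustration, not taken from the IAU text.

def cluster_designation(ra_hours, ra_minutes, dec_degrees):
    """Format approximate coordinates as 'Chhmm±ddd' with leading zeros."""
    sign = "+" if dec_degrees >= 0 else "-"
    return "C{:02d}{:02d}{}{:03d}".format(
        int(ra_hours), int(ra_minutes), sign, abs(int(dec_degrees))
    )

# Hypothetical cluster centred near RA 05h 35m, Dec -05 degrees
print(cluster_designation(5, 35, -5))  # C0535-005
```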
| Physical sciences | Stellar astronomy | null |
28625 | https://en.wikipedia.org/wiki/Supervolcano | Supervolcano | A supervolcano is a volcano that has had an eruption with a volcanic explosivity index (VEI) of 8, the largest recorded value on the index. This means the volume of deposits for such an eruption is greater than .
Supervolcanoes occur when magma in the mantle rises into the crust but is unable to break through it. Pressure builds in a large and growing magma pool until the crust is unable to contain the pressure and ruptures. This can occur at hotspots (for example, Yellowstone Caldera) or at subduction zones (for example, Toba).
Large-volume supervolcanic eruptions are also often associated with large igneous provinces, which can cover huge areas with lava and volcanic ash. These can cause long-lasting climate change (such as the triggering of a small ice age) and threaten species with extinction. The Oruanui eruption of New Zealand's Taupō Volcano (about 25,600 years ago) was the world's most recent VEI-8 eruption.
Terminology
The term "supervolcano" was first used in a volcanic context in 1949.
Its origins lie in an early 20th-century scientific debate about the geological history and features of the Three Sisters volcanic region of Oregon in the United States. In 1925, Edwin T. Hodge suggested that a very large volcano, which he named Mount Multnomah, had existed in that region. He believed that several peaks in the Three Sisters area were remnants of Mount Multnomah after it had been largely destroyed by violent volcanic explosions, similarly to Mount Mazama. In his 1948 book The Ancient Volcanoes of Oregon, volcanologist Howel Williams ignored the possible existence of Mount Multnomah, but in 1949 another volcanologist, F. M. Byers Jr., reviewed the book, and in the review, Byers refers to Mount Multnomah as a "supervolcano".
More than fifty years after Byers' review was published, the term supervolcano was popularised by the BBC popular science television program Horizon in 2000, referring to eruptions that produce extremely large amounts of ejecta.
The term megacaldera is sometimes used for caldera supervolcanoes, such as the Blake River Megacaldera Complex in the Abitibi greenstone belt of Ontario and Quebec, Canada.
Though there is no well-defined minimum explosive size for a "supervolcano", there are at least two types of volcanic eruptions that have been identified as supervolcanoes: large igneous provinces and massive eruptions.
Large igneous provinces
Large igneous provinces, such as Iceland, the Siberian Traps, Deccan Traps, and the Ontong Java Plateau, are extensive regions of basalts on a continental scale resulting from flood basalt eruptions. When created, these regions often occupy several thousand square kilometres and have volumes on the order of millions of cubic kilometers. In most cases, the lavas are normally laid down over several million years. They release large amounts of gases.
The Réunion hotspot produced the Deccan Traps about 66 million years ago, coincident with the Cretaceous–Paleogene extinction event. The scientific consensus is that an asteroid impact was the cause of the extinction event, but the volcanic activity may have caused environmental stresses on extant species up to the Cretaceous–Paleogene boundary. Additionally, the largest flood basalt event (the Siberian Traps) occurred around 250 million years ago and was coincident with the largest mass extinction in history, the Permian–Triassic extinction event, although it is unknown whether it was solely responsible for the extinction event.
Such outpourings are not explosive, though lava fountains may occur. Many volcanologists consider Iceland to be a large igneous province that is currently being formed. The last major outpouring occurred in 1783–84 from the Laki fissure, which is approximately 25 km (16 mi) long. An estimated 14 km3 (3.4 cu mi) of basaltic lava was poured out during the eruption (VEI 4).
The Ontong Java Plateau has an area of about 2,000,000 km2 (770,000 sq mi), and the province was at least 50% larger before the Manihiki and Hikurangi Plateaus broke away.
Massive explosive eruptions
Volcanic eruptions are classified using the volcanic explosivity index. It is a logarithmic scale, and an increase of one in VEI number is equivalent to a tenfold increase in volume of erupted material. VEI 7 or VEI 8 eruptions are so powerful that they often form circular calderas rather than cones because the downward withdrawal of magma causes the overlying rock mass to collapse into the empty magma chamber beneath it.
Known super eruptions
Based on incomplete statistics, at least 60 VEI 8 eruptions have been identified.
Media portrayal
Nova featured an episode "Mystery of the Megavolcano" in September 2006 examining such eruptions in the last 100,000 years.
Supervolcano is the title of a British-Canadian television disaster film, first released in 2005. It tells a fictional story of a supereruption at Yellowstone.
In the 2009 disaster film 2012, a supereruption of Yellowstone is one of the events that contributes to a global cataclysm.
Gallery
| Physical sciences | Volcanology | Earth science |
28648 | https://en.wikipedia.org/wiki/String-searching%20algorithm | String-searching algorithm | A string-searching algorithm, sometimes called string-matching algorithm, is an algorithm that searches a body of text for portions that match a given pattern.
A basic example of string searching is when the pattern and the searched text are arrays of elements of an alphabet (finite set) Σ. Σ may be a human language alphabet, for example the letters A through Z, while other applications may use a binary alphabet (Σ = {0,1}) or a DNA alphabet (Σ = {A,C,G,T}) in bioinformatics.
In practice, the choice of a feasible string-search algorithm may be affected by the string encoding. In particular, if a variable-width encoding is in use, then it may be slower to find the Nth character, perhaps requiring time proportional to N. This may significantly slow some search algorithms. One of many possible solutions is to search for the sequence of code units instead, but doing so may produce false matches unless the encoding is specifically designed to avoid it.
Overview
The most basic case of string searching involves one (often very long) string, sometimes called the haystack, and one (often very short) string, sometimes called the needle. The goal is to find one or more occurrences of the needle within the haystack. For example, one might search for to within:
Some books are to be tasted, others to be swallowed, and some few to be chewed and digested.
One might request the first occurrence of "to", which is the fourth word; or all occurrences, of which there are 3; or the last, which is the fifth word from the end.
Very commonly, however, various constraints are added. For example, one might want to match the "needle" only where it consists of one (or more) complete words—perhaps defined as not having other letters immediately adjacent on either side. In that case a search for "hew" or "low" should fail for the example sentence above, even though those literal strings do occur.
Another common example involves "normalization". For many purposes, a search for a phrase such as "to be" should succeed even in places where there is something else intervening between the "to" and the "be":
More than one space
Other "whitespace" characters such as tabs, non-breaking spaces, line-breaks, etc.
Less commonly, a hyphen or soft hyphen
In structured texts, tags or even arbitrarily large but "parenthetical" things such as footnotes, list-numbers or other markers, embedded images, and so on.
Many symbol systems include characters that are synonymous (at least for some purposes):
Latin-based alphabets distinguish lower-case from upper-case, but for many purposes string search is expected to ignore the distinction.
Many languages include ligatures, where one composite character is equivalent to two or more other characters.
Many writing systems involve diacritical marks such as accents or vowel points, which may vary in their usage, or be of varying importance in matching.
DNA sequences can involve non-coding segments which may be ignored for some purposes, or polymorphisms that lead to no change in the encoded proteins, which may not count as a true difference for some other purposes.
Some languages have rules where a different character or form of character must be used at the start, middle, or end of words.
Finally, for strings that represent natural language, aspects of the language itself become involved. For example, one might wish to find all occurrences of a "word" despite it having alternate spellings, prefixes or suffixes, etc.
Another more complex type of search is regular expression searching, where the user constructs a pattern of characters or other symbols, and any match to the pattern should fulfill the search. For example, to catch both the American English word "color" and the British equivalent "colour", instead of searching for two different literal strings, one might use a regular expression such as:
colou?r
where the "?" conventionally makes the preceding character ("u") optional.
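As an illustration, a minimal sketch using Python's standard re module (the test strings below are invented for the example):
```python
import re

# The "?" makes the preceding "u" optional, so both spellings match.
pattern = re.compile(r"colou?r")

for text in ["The color red", "The colour red", "The collar red"]:
    match = pattern.search(text)
    print(text, "->", match.group() if match else "no match")
# Prints "color", "colour", and "no match" respectively.
```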
This article mainly discusses algorithms for the simpler kinds of string searching.
A similar problem introduced in the field of bioinformatics and genomics is the maximal exact matching (MEM). Given two strings, MEMs are common substrings that cannot be extended left or right without causing a mismatch.
Examples of search algorithms
Naive string search
A simple and inefficient way to see where one string occurs inside another is to check at each index, one by one. First, we see if there is a copy of the needle starting at the first character of the haystack; if not, we look to see if there's a copy of the needle starting at the second character of the haystack, and so forth. In the normal case, we only have to look at one or two characters for each wrong position to see that it is a wrong position, so in the average case, this takes O(n + m) steps, where n is the length of the haystack and m is the length of the needle; but in the worst case, searching for a string like "aaaab" in a string like "aaaaaaaaab", it takes O(nm) steps.
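A direct transcription of this brute-force idea, as a minimal Python sketch (the function name and test strings are invented for the example):
```python
def naive_search(haystack: str, needle: str) -> list[int]:
    """Return the start index of every occurrence of needle in haystack."""
    matches = []
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):           # every candidate start position
        if haystack[i:i + m] == needle:  # compare up to m characters
            matches.append(i)
    return matches

print(naive_search("some books are to be tasted", "to"))  # [15]
print(naive_search("aaaaaaaaab", "aaaab"))                # [5] -- worst-case O(nm) behaviour
```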
Finite-state-automaton-based search
In this approach, backtracking is avoided by constructing a deterministic finite automaton (DFA) that recognizes a stored search string. These are expensive to construct—they are usually created using the powerset construction—but are very quick to use. For example, a DFA can be constructed to recognize the word "MOMMY". This approach is frequently generalized in practice to search for arbitrary regular expressions.
Stubs
Knuth–Morris–Pratt computes a DFA that recognizes inputs with the string to search for as a suffix; Boyer–Moore starts searching from the end of the needle, so it can usually jump ahead a whole needle-length at each step. Baeza–Yates keeps track of whether the previous j characters were a prefix of the search string, and is therefore adaptable to fuzzy string searching. The bitap algorithm is an application of Baeza–Yates' approach.
Index methods
Faster search algorithms preprocess the text. After building a substring index, for example a suffix tree or suffix array, the occurrences of a pattern can be found quickly. As an example, a suffix tree can be built in O(n) time, and all z occurrences of a pattern can be found in O(m) time, under the assumption that the alphabet has a constant size and all inner nodes in the suffix tree know what leaves are underneath them. The latter can be accomplished by running a DFS algorithm from the root of the suffix tree.
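As a rough sketch of the index idea, the following Python example builds a suffix array naively (the simple quadratic construction, not the linear-time construction mentioned above) and answers queries by binary search; the helper names are invented for the example:
```python
from bisect import bisect_left, bisect_right

def build_suffix_array(text: str) -> list[int]:
    # Naive construction: sort the suffix start positions by the suffix text.
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text: str, sa: list[int], pattern: str) -> list[int]:
    # Binary-search the sorted suffixes for those that start with the pattern.
    # Requires Python 3.10+ for the key= argument to bisect.
    key = lambda i: text[i:i + len(pattern)]
    lo = bisect_left(sa, pattern, key=key)
    hi = bisect_right(sa, pattern, key=key)
    return sorted(sa[lo:hi])

text = "abracadabra"
sa = build_suffix_array(text)
print(find_occurrences(text, sa, "abra"))  # [0, 7]
```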
Other variants
Some search methods, for instance trigram search, are intended to find a "closeness" score between the search string and the text rather than a "match/non-match". These are sometimes called "fuzzy" searches.
Classification of search algorithms
Classification by a number of patterns
The various algorithms can be classified by the number of patterns each uses.
Single-pattern algorithms
In the following compilation, m is the length of the pattern, n the length of the searchable text, and k = |Σ| is the size of the alphabet.
1. Asymptotic times are expressed using O, Ω, and Θ notation.
2. Used to implement the memmem and strstr search functions in the glibc and musl C standard libraries.
3. Can be extended to handle approximate string matching and (potentially-infinite) sets of patterns represented as regular languages.
The Boyer–Moore string-search algorithm has been the standard benchmark for the practical string-search literature.
Algorithms using a finite set of patterns
In the following compilation, M is the length of the longest pattern, m their total length, n the length of the searchable text, o the number of occurrences.
Algorithms using an infinite number of patterns
Naturally, the patterns cannot be enumerated finitely in this case. They are usually represented by a regular grammar or regular expression.
Classification by the use of preprocessing programs
Other classification approaches are possible. One of the most common uses preprocessing as the main criterion.
Classification by matching strategies
Another one classifies the algorithms by their matching strategy:
Match the prefix first (Knuth–Morris–Pratt, Shift-And, Aho–Corasick)
Match the suffix first (Boyer–Moore and variants, Commentz-Walter)
Match the best factor first (BNDM, BOM, Set-BOM)
Other strategy (Naïve, Rabin–Karp, Vectorized)
| Mathematics | Algorithms | null |
28650 | https://en.wikipedia.org/wiki/Stoichiometry | Stoichiometry | Stoichiometry is the relationship between the masses of reactants and products before, during, and following chemical reactions.
Stoichiometry is founded on the law of conservation of mass where the total mass of the reactants equals the total mass of the products, leading to the insight that the relations between quantities of reactants and products typically form a ratio of positive integers. This means that if the amounts of the separate reactants are known, then the amount of the product can be calculated. Conversely, if one reactant has a known quantity and the quantity of the products can be empirically determined, then the amount of the other reactants can also be calculated.
This is illustrated by the combustion of methane, for which the balanced equation is:
CH4 + 2 O2 → CO2 + 2 H2O
Here, one molecule of methane reacts with two molecules of oxygen gas to yield one molecule of carbon dioxide and two molecules of water. This particular chemical equation is an example of complete combustion. Stoichiometry measures these quantitative relationships, and is used to determine the amount of products and reactants that are produced or needed in a given reaction. Describing the quantitative relationships among substances as they participate in chemical reactions is known as reaction stoichiometry. In the example above, reaction stoichiometry measures the relationship between the quantities of methane and oxygen that react to form carbon dioxide and water.
Because of the well known relationship of moles to atomic weights, the ratios that are arrived at by stoichiometry can be used to determine quantities by weight in a reaction described by a balanced equation. This is called composition stoichiometry.
Gas stoichiometry deals with reactions involving gases, where the gases are at a known temperature, pressure, and volume and can be assumed to be ideal gases. For gases, the volume ratio is ideally the same by the ideal gas law, but the mass ratio of a single reaction has to be calculated from the molecular masses of the reactants and products. In practice, because of the existence of isotopes, molar masses are used instead in calculating the mass ratio.
Etymology
The term stoichiometry was first used by Jeremias Benjamin Richter in 1792 when the first volume of Richter's Anfangsgründe der Stöchyometrie oder Meßkunst chymischer Elemente (Fundamentals of Stoichiometry, or the Art of Measuring the Chemical Elements) was published. The term is derived from the Ancient Greek words stoicheion ("element") and metron ("measure").
L. Darmstaedter and Ralph E. Oesper have written a useful account on this.
Definition
A stoichiometric amount or stoichiometric ratio of a reagent is the optimum amount or ratio where, assuming that the reaction proceeds to completion:
All of the reagent is consumed
There is no deficiency of the reagent
There is no excess of the reagent.
Stoichiometry rests upon a few basic laws: the law of conservation of mass, the law of definite proportions (i.e., the law of constant composition), the law of multiple proportions, and the law of reciprocal proportions. In general, chemical reactions combine in definite ratios of chemicals. Since chemical reactions can neither create nor destroy matter, nor transmute one element into another, the amount of each element must be the same throughout the overall reaction. For example, the number of atoms of a given element X on the reactant side must equal the number of atoms of that element on the product side, whether or not all of those atoms are actually involved in a reaction.
Chemical reactions, as macroscopic unit operations, consist of simply a very large number of elementary reactions, where a single molecule reacts with another molecule. As the reacting molecules (or moieties) consist of a definite set of atoms in an integer ratio, the ratio between reactants in a complete reaction is also in integer ratio. A reaction may consume more than one molecule, and the stoichiometric number counts this number, defined as positive for products (added) and negative for reactants (removed). The unsigned coefficients are generally referred to as the stoichiometric coefficients.
Each element has an atomic mass, and considering molecules as collections of atoms, compounds have a definite molecular mass, which when expressed in daltons is numerically equal to the molar mass in g/mol. By definition, the atomic mass of carbon-12 is 12 Da, giving a molar mass of 12 g/mol. The number of molecules per mole in a substance is given by the Avogadro constant, exactly 6.02214076 × 10^23 mol−1 since the 2019 revision of the SI. Thus, to calculate the stoichiometry by mass, the number of molecules required for each reactant is expressed in moles and multiplied by the molar mass of each to give the mass of each reactant per mole of reaction. The mass ratios can be calculated by dividing each by the total in the whole reaction.
Elements in their natural state are mixtures of isotopes of differing mass; thus, atomic masses and thus molar masses are not exactly integers. For instance, instead of an exact 14:3 proportion, 17.04 g of ammonia consists of 14.01 g of nitrogen and 3 × 1.01 g of hydrogen, because natural nitrogen includes a small amount of nitrogen-15, and natural hydrogen includes hydrogen-2 (deuterium).
A stoichiometric reactant is a reactant that is consumed in a reaction, as opposed to a catalytic reactant, which is not consumed in the overall reaction because it reacts in one step and is regenerated in another step.
Converting grams to moles
Stoichiometry is not only used to balance chemical equations but also used in conversions, i.e., converting from grams to moles using molar mass as the conversion factor, or from grams to milliliters using density. For example, to find the amount of NaCl (sodium chloride) in 2.00 g, one would do the following:
2.00 g NaCl × (1 mol NaCl / 58.44 g NaCl) = 0.0342 mol NaCl
In the above example, when written out in fraction form, the units of grams form a multiplicative identity, which is equivalent to one (g/g = 1), with the resulting amount in moles (the unit that was needed), as shown in the equation above.
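The same conversion as a minimal Python sketch (the molar masses are rounded textbook values and the function name is invented for the example):
```python
# Rounded molar masses in g/mol
MOLAR_MASS = {"NaCl": 58.44, "H2O": 18.02, "CO2": 44.01}

def grams_to_moles(mass_g: float, formula: str) -> float:
    """Divide the mass by the molar mass, so the gram units cancel (g/g = 1)."""
    return mass_g / MOLAR_MASS[formula]

print(round(grams_to_moles(2.00, "NaCl"), 4))  # ~0.0342 mol
```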
Molar proportion
Stoichiometry is often used to balance chemical equations (reaction stoichiometry). For example, the two diatomic gases, hydrogen and oxygen, can combine to form a liquid, water, in an exothermic reaction, as described by the following equation:
2 H2 + O2 → 2 H2O
Reaction stoichiometry describes the 2:1:2 ratio of hydrogen, oxygen, and water molecules in the above equation.
The molar ratio allows for conversion between moles of one substance and moles of another. For example, in the reaction
2 CH3OH + 3 O2 → 2 CO2 + 4 H2O
the amount of water that will be produced by the combustion of 0.27 moles of CH3OH is obtained using the molar ratio between CH3OH and H2O of 2 to 4:
0.27 mol CH3OH × (4 mol H2O / 2 mol CH3OH) = 0.54 mol H2O
The term stoichiometry is also often used for the molar proportions of elements in stoichiometric compounds (composition stoichiometry). For example, the stoichiometry of hydrogen and oxygen in H2O is 2:1. In stoichiometric compounds, the molar proportions are whole numbers.
Determining amount of product
Stoichiometry can also be used to find the quantity of a product yielded by a reaction. If a piece of solid copper (Cu) were added to an aqueous solution of silver nitrate (AgNO3), the silver (Ag) would be replaced in a single displacement reaction forming aqueous copper(II) nitrate (Cu(NO3)2) and solid silver. How much silver is produced if 16.00 grams of Cu is added to the solution of excess silver nitrate?
The following steps would be used:
Write and balance the equation
Mass to moles: Convert grams of Cu to moles of Cu
Mole ratio: Convert moles of Cu to moles of Ag produced
Mole to mass: Convert moles of Ag to grams of Ag produced
The complete balanced equation would be:
Cu + 2 AgNO3 → Cu(NO3)2 + 2 Ag
For the mass to mole step, the mass of copper (16.00 g) would be converted to moles of copper by dividing the mass of copper by its molar mass of 63.55 g/mol:
16.00 g Cu × (1 mol Cu / 63.55 g Cu) = 0.2518 mol Cu
Now that the amount of Cu in moles (0.2518) is found, we can set up the mole ratio. This is found by looking at the coefficients in the balanced equation: Cu and Ag are in a 1:2 ratio.
0.2518 mol Cu × (2 mol Ag / 1 mol Cu) = 0.5036 mol Ag
Now that the moles of Ag produced is known to be 0.5036 mol, we convert this amount to grams of Ag produced to come to the final answer:
0.5036 mol Ag × (107.87 g Ag / 1 mol Ag) = 54.32 g Ag
This set of calculations can be further condensed into a single step:
16.00 g Cu × (1 mol Cu / 63.55 g Cu) × (2 mol Ag / 1 mol Cu) × (107.87 g Ag / 1 mol Ag) = 54.32 g Ag
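The same chain of conversions as a minimal Python sketch (rounded molar masses; the function name is invented for the example):
```python
MOLAR_MASS = {"Cu": 63.55, "Ag": 107.87}  # g/mol, rounded

def grams_of_product(mass_reactant_g, reactant, product, mole_ratio):
    """mass -> moles of reactant -> moles of product (via mole ratio) -> mass."""
    moles_reactant = mass_reactant_g / MOLAR_MASS[reactant]
    moles_product = moles_reactant * mole_ratio
    return moles_product * MOLAR_MASS[product]

# Cu + 2 AgNO3 -> Cu(NO3)2 + 2 Ag: 2 mol Ag are produced per 1 mol Cu
print(round(grams_of_product(16.00, "Cu", "Ag", 2), 2))  # ~54.32 g Ag
```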
Further examples
For propane (C3H8) reacting with oxygen gas (O2), the balanced chemical equation is:
C3H8 + 5 O2 → 3 CO2 + 4 H2O
The mass of water formed if 120 g of propane (C3H8) is burned in excess oxygen is then
120 g C3H8 × (1 mol C3H8 / 44.10 g C3H8) × (4 mol H2O / 1 mol C3H8) × (18.02 g H2O / 1 mol H2O) ≈ 196 g H2O
Stoichiometric ratio
Stoichiometry is also used to find the right amount of one reactant to "completely" react with the other reactant in a chemical reaction – that is, the stoichiometric amounts that would result in no leftover reactants when the reaction takes place. An example is shown below using the thermite reaction,
Fe2O3 + 2 Al → Al2O3 + 2 Fe
This equation shows that 1 mole of iron(III) oxide (Fe2O3) and 2 moles of aluminum will produce 1 mole of aluminium oxide and 2 moles of iron. So, to completely react with 85.0 g of Fe2O3 (0.532 mol), 28.7 g (1.06 mol) of aluminium are needed.
Limiting reagent and percent yield
The limiting reagent is the reagent that limits the amount of product that can be formed and is completely consumed when the reaction is complete. An excess reactant is a reactant that is left over once the reaction has stopped due to the limiting reactant being exhausted.
Consider the equation of roasting lead(II) sulfide (PbS) in oxygen (O2) to produce lead(II) oxide (PbO) and sulfur dioxide (SO2):
2 PbS + 3 O2 → 2 PbO + 2 SO2
To determine the theoretical yield of lead(II) oxide if 200.0 g of lead(II) sulfide and 200.0 g of oxygen are heated in an open container:
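The two candidate yields can be worked out as follows (a sketch using rounded molar masses of about 239.3 g/mol for PbS, 32.00 g/mol for O2 and 223.2 g/mol for PbO):
From PbS: 200.0 g PbS × (1 mol PbS / 239.3 g PbS) × (2 mol PbO / 2 mol PbS) × (223.2 g PbO / 1 mol PbO) ≈ 186.6 g PbO
From O2: 200.0 g O2 × (1 mol O2 / 32.00 g O2) × (2 mol PbO / 3 mol O2) × (223.2 g PbO / 1 mol PbO) ≈ 930.0 g PbO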
Because a lesser amount of PbO is produced for the 200.0 g of PbS, it is clear that PbS is the limiting reagent.
In reality, the actual yield is not the same as the stoichiometrically-calculated theoretical yield. Percent yield, then, is expressed in the following equation:
percent yield = (actual yield / theoretical yield) × 100%
If 170.0 g of lead(II) oxide is obtained, then the percent yield would be calculated as follows:
(170.0 g PbO / 186.6 g PbO) × 100% ≈ 91%
Example
Consider the following reaction, in which iron(III) chloride reacts with hydrogen sulfide to produce iron(III) sulfide and hydrogen chloride:
2 FeCl3 + 3 H2S → Fe2S3 + 6 HCl
The stoichiometric masses for this reaction are:
324.41 g FeCl3, 102.25 g H2S, 207.89 g Fe2S3, 218.77 g HCl
Suppose 90.0 g of FeCl3 reacts with 52.0 g of H2S. To find the limiting reagent and the mass of HCl produced by the reaction, we change the above amounts by a factor of 90/324.41 and obtain the following amounts:
90.00 g FeCl3, 28.37 g H2S, 57.67 g Fe2S3, 60.69 g HCl
The limiting reactant (or reagent) is FeCl3, since all 90.00 g of it is used up while only 28.37 g of H2S are consumed. Thus, 52.0 − 28.4 = 23.6 g of H2S is left in excess. The mass of HCl produced is 60.7 g.
By looking at the stoichiometry of the reaction, one might have guessed FeCl3 being the limiting reactant; three times more FeCl3 is used compared to H2S (324 g vs 102 g).
Different stoichiometries in competing reactions
Often, more than one reaction is possible given the same starting materials. The reactions may differ in their stoichiometry. For example, the methylation of benzene (C6H6), through a Friedel–Crafts reaction using AlCl3 as a catalyst, may produce singly methylated (C6H5CH3), doubly methylated (C6H4(CH3)2), or still more highly methylated (C6H6−n(CH3)n) products, as shown in the following example:
C6H6 + CH3Cl → C6H5CH3 + HCl
C6H6 + 2 CH3Cl → C6H4(CH3)2 + 2 HCl
C6H6 + n CH3Cl → C6H6−n(CH3)n + n HCl
In this example, which reaction takes place is controlled in part by the relative concentrations of the reactants.
Stoichiometric coefficient and stoichiometric number
In lay terms, the stoichiometric coefficient of any given component is the number of molecules and/or formula units that participate in the reaction as written. A related concept is the stoichiometric number (using IUPAC nomenclature), wherein the stoichiometric coefficient is multiplied by +1 for all products and by −1 for all reactants.
For example, in the reaction CH4 + 2 O2 → CO2 + 2 H2O, the stoichiometric number of CH4 is −1, the stoichiometric number of O2 is −2, for CO2 it would be +1 and for H2O it is +2.
In more technically precise terms, the stoichiometric number in a chemical reaction system of the i-th component is defined as
ν_i = dN_i / dξ
or
dN_i = ν_i dξ
where N_i is the number of molecules of i, and ξ is the progress variable or extent of reaction.
The stoichiometric number represents the degree to which a chemical species participates in a reaction. The convention is to assign negative numbers to reactants (which are consumed) and positive ones to products, consistent with the convention that increasing the extent of reaction will correspond to shifting the composition from reactants towards products. However, any reaction may be viewed as going in the reverse direction, and in that point of view, the extent of reaction ξ would change in the negative direction in order to lower the system's Gibbs free energy. Whether a reaction actually will go in the arbitrarily selected forward direction or not depends on the amounts of the substances present at any given time, which determines the kinetics and thermodynamics, i.e., whether equilibrium lies to the right or the left of the initial state.
In reaction mechanisms, stoichiometric coefficients for each step are always integers, since elementary reactions always involve whole molecules. If one uses a composite representation of an overall reaction, some may be rational fractions. There are often chemical species present that do not participate in a reaction; their stoichiometric coefficients are therefore zero. Any chemical species that is regenerated, such as a catalyst, also has a stoichiometric coefficient of zero.
The simplest possible case is an isomerization
A → B
in which ν_B = 1, since one molecule of B is produced each time the reaction occurs, while ν_A = −1, since one molecule of A is necessarily consumed. In any chemical reaction, not only is the total mass conserved but also the numbers of atoms of each kind are conserved, and this imposes corresponding constraints on possible values for the stoichiometric coefficients.
There are usually multiple reactions proceeding simultaneously in any natural reaction system, including those in biology. Since any chemical component can participate in several reactions simultaneously, the stoichiometric number of the i-th component in the k-th reaction is defined as
ν_ik = ∂N_i / ∂ξ_k
so that the total (differential) change in the amount of the i-th component is
dN_i = Σ_k ν_ik dξ_k.
Extents of reaction provide the clearest and most explicit way of representing compositional change, although they are not yet widely used.
With complex reaction systems, it is often useful to consider both the representation of a reaction system in terms of the amounts of the chemicals present (state variables), and the representation in terms of the actual compositional degrees of freedom, as expressed by the extents of reaction ξ_k. The transformation from a vector expressing the extents to a vector expressing the amounts uses a rectangular matrix whose elements are the stoichiometric numbers ν_ik.
The maximum and minimum for any ξ_k occur whenever the first of the reactants is depleted for the forward reaction, or the first of the "products" is depleted if the reaction is viewed as being pushed in the reverse direction. This is a purely kinematic restriction on the reaction simplex, a hyperplane in composition space, or N‑space, whose dimensionality equals the number of linearly-independent chemical reactions. This is necessarily less than the number of chemical components, since each reaction manifests a relation between at least two chemicals. The accessible region of the hyperplane depends on the amounts of each chemical species actually present, a contingent fact. Different such amounts can even generate different hyperplanes, all sharing the same algebraic stoichiometry.
In accord with the principles of chemical kinetics and thermodynamic equilibrium, every chemical reaction is reversible, at least to some degree, so that each equilibrium point must be an interior point of the simplex. As a consequence, extrema for the ξs will not occur unless an experimental system is prepared with zero initial amounts of some products.
The number of physically-independent reactions can be even greater than the number of chemical components, and depends on the various reaction mechanisms. For example, there may be two (or more) reaction paths for the isomerism above. The reaction may occur by itself, but faster and with different intermediates, in the presence of a catalyst.
The (dimensionless) "units" may be taken to be molecules or moles. Moles are most commonly used, but it is more suggestive to picture incremental chemical reactions in terms of molecules. The Ns and ξs are reduced to molar units by dividing by the Avogadro constant. While dimensional mass units may be used, the comments about integers are then no longer applicable.
Stoichiometry matrix
In complex reactions, stoichiometries are often represented in a more compact form called the stoichiometry matrix. The stoichiometry matrix is denoted by the symbol N.
If a reaction network has n reactions and m participating molecular species, then the stoichiometry matrix will have correspondingly m rows and n columns.
For example, consider the system of reactions shown below:
This system comprises four reactions and five different molecular species. The stoichiometry matrix for this system can be written as:
where the rows correspond to the individual molecular species and the columns to the reactions. The process of converting a reaction scheme into a stoichiometry matrix can be a lossy transformation: for example, the stoichiometries in the second reaction simplify when included in the matrix. This means that it is not always possible to recover the original reaction scheme from a stoichiometry matrix.
Often the stoichiometry matrix is combined with the rate vector, v, and the species vector, x, to form a compact equation, the biochemical systems equation, describing the rates of change of the molecular species:
dx/dt = N v
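As a rough sketch of how such a matrix is used in practice, the following Python example evaluates dx/dt = N v for a small reaction network invented for illustration (it is not the article's original example):
```python
import numpy as np

# Hypothetical network of 3 species (A, B, C) and 2 reactions:
#   r1: A -> B       (A: -1, B: +1, C:  0)
#   r2: B -> 2 C     (A:  0, B: -1, C: +2)
# Rows are species, columns are reactions.
N = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  2]])

v = np.array([0.5, 0.2])  # assumed reaction rates for r1 and r2

dxdt = N @ v              # biochemical systems equation: dx/dt = N v
print(dxdt)               # [-0.5  0.3  0.4] -> rates of change of A, B, C
```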
Gas stoichiometry
Gas stoichiometry is the quantitative relationship (ratio) between reactants and products in a chemical reaction with reactions that produce gases. Gas stoichiometry applies when the gases produced are assumed to be ideal, and the temperature, pressure, and volume of the gases are all known. The ideal gas law is used for these calculations. Often, but not always, the standard temperature and pressure (STP) are taken as 0 °C and 1 bar and used as the conditions for gas stoichiometric calculations.
Gas stoichiometry calculations solve for the unknown volume or mass of a gaseous product or reactant. For example, if we wanted to calculate the volume of gaseous NO2 produced from the combustion of 100 g of NH3, by the reaction:
4 NH3 (g) + 7 O2 (g) → 4 NO2 (g) + 6 H2O (l)
we would carry out the following calculations:
100 g NH3 × (1 mol NH3 / 17.034 g NH3) = 5.871 mol NH3
There is a 1:1 molar ratio of NH3 to NO2 in the above balanced combustion reaction, so 5.871 mol of NO2 will be formed. We will employ the ideal gas law to solve for the volume at 0 °C (273.15 K) and 1 atmosphere using the gas law constant of R = 0.08206 L·atm·K−1·mol−1:
V = nRT/P = (5.871 mol × 0.08206 L·atm·K−1·mol−1 × 273.15 K) / 1 atm ≈ 131.6 L
Gas stoichiometry often involves having to know the molar mass of a gas, given the density of that gas. The ideal gas law can be re-arranged to obtain a relation between the density and the molar mass of an ideal gas:
PV = nRT
and
n = m / M, with ρ = m / V
and thus:
M = ρRT / P
where:
P = absolute gas pressure
V = gas volume
n = amount (measured in moles)
R = universal ideal gas law constant
T = absolute gas temperature
ρ = gas density at T and P
m = mass of gas
M = molar mass of gas
Stoichiometric air-to-fuel ratios of common fuels
In the combustion reaction, oxygen reacts with the fuel, and the point where exactly all oxygen is consumed and all fuel burned is defined as the stoichiometric point. With more oxygen (overstoichiometric combustion), some of it stays unreacted. Likewise, if the combustion is incomplete due to lack of sufficient oxygen, fuel remains unreacted. (Unreacted fuel may also remain because of slow combustion or insufficient mixing of fuel and oxygen – this is not due to stoichiometry.) Different hydrocarbon fuels have different contents of carbon, hydrogen and other elements, thus their stoichiometry varies.
Oxygen makes up only 20.95% of the volume of air, and only 23.20% of its mass. Air–fuel ratios are therefore much higher than the equivalent oxygen–fuel ratios, due to the high proportion of inert gases in the air.
Gasoline engines can run at stoichiometric air-to-fuel ratio, because gasoline is quite volatile and is mixed (sprayed or carburetted) with the air prior to ignition. Diesel engines, in contrast, run lean, with more air available than simple stoichiometry would require. Diesel fuel is less volatile and is effectively burned as it is injected.
| Physical sciences | Reaction | Chemistry |
28682 | https://en.wikipedia.org/wiki/Streaming%20media | Streaming media | Streaming media refers to multimedia delivered through a network for playback using a media player. Media is transferred in a stream of packets from a server to a client and is rendered in real-time; this contrasts with file downloading, a process in which the end-user obtains an entire media file before consuming the content. Streaming is more commonly used for video-on-demand, streaming television, and music streaming services over the Internet.
While streaming is most commonly associated with multimedia from a remote server over the Internet, it also includes offline multimedia between devices on a local area network. For example, using DLNA and a home server, or in a personal area network between two devices using Bluetooth (which uses radio waves rather than IP). Online streaming was initially popularized by RealNetworks and Microsoft in the 1990s and has since grown to become the globally most popular method for consuming music and videos, with numerous competing subscription services being offered since the 2010s. Audio streaming to wireless speakers, often using Bluetooth, is another use that has become prevalent during that decade. Live streaming is the real-time delivery of content during production, much as live television broadcasts content via television channels.
Distinguishing the delivery method from the media distributed applies specifically to telecommunications networks, as most of the traditional media delivery systems are either inherently streaming (e.g., radio, television) or inherently non-streaming (e.g., books, videotapes, audio CDs). The term "streaming media" can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered "streaming text".
Etymology
The term "streaming" was first used for tape drives manufactured by Data Electronics Inc. that were meant to slowly ramp up and run for the entire track; slower ramp times lowered drive costs. "Streaming" was applied in the early 1990s as a better description for video on demand and later live video on IP networks. It was first done by Starlight Networks for video streaming and Real Networks for audio streaming. Such video had previously been referred to by the misnomer "store and forward video."
Precursors
Beginning in 1881, Théâtrophone enabled subscribers to listen to opera and theatre performances over telephone lines. This operated until 1932. The concept of media streaming eventually came to America.
In the early 1920s, George Owen Squier was granted patents for a system for the transmission and distribution of signals over electrical lines, which was the technical basis for what later became Muzak, a technology for streaming continuous music to commercial customers without the use of radio.
The Telephone Music Service, a live jukebox service, began in 1929 and continued until 1997. The clientele eventually included 120 bars and restaurants in the Pittsburgh area. A tavern customer would deposit money in the jukebox, use a telephone on top of the jukebox, and ask the operator to play a song. The operator would find the record in the studio library of more than 100,000 records, put it on a turntable, and the music would be piped over the telephone line to play in the tavern. The music media began as 78s, 33s and 45s, played on the six turntables they monitored. CDs and tapes were incorporated in later years.
The business had a succession of owners, notably Bill Purse, his daughter Helen Reutzel, and finally Dotti White. The revenue stream for each quarter was split between 60% for the music service and 40% for the tavern owner. This business model eventually became unsustainable due to city permits and the cost of setting up these telephone lines.
History
Early development
Attempts to display media on computers date back to the earliest days of computing in the mid-20th century. However, little progress was made for several decades, primarily due to the high cost and limited capabilities of computer hardware. From the late 1980s through the 1990s, consumer-grade personal computers became powerful enough to display various media. The primary technical issues related to streaming were having enough CPU and bus bandwidth to support the required data rates and achieving the real-time computing performance required to prevent buffer underruns and enable smooth streaming of the content. However, computer networks were still limited in the mid-1990s, and audio and video media were usually delivered over non-streaming channels, such as playback from a local hard disk drive or CD-ROMs on the end user's computer.
Terminology in the 1970s was at best confusing for applications such as telemetered aircraft or missile test data. By then PCM [Pulse Code Modulation] was the dominant transmission type. This PCM transmission was bit-serial and not packetized so the 'streaming' terminology was often a confusion factor. In 1969 Grumman acquired one of the first telemetry ground stations [Automated Telemetry Station, 'ATS'] which had the capability for reconstructing serial telemetered data which had been recorded on digital computer peripheral tapes. Computer peripheral tapes were inherently recorded in blocks. Reconstruction was required for continuous display purposes without time-base distortion. The Navy implemented similar capability in DoD for the first time in 1973. These implementations are the only known examples of true 'streaming' in the sense of reconstructing distortion-free serial data from packetized or blocked recordings. 'Real-time' terminology has also been confusing in streaming context. The most accepted definition of 'real-time' requires that all associated processing or formatting of the data must take place prior to availability of the next sample of each measurement. In the 1970s the most powerful mainframe computers were not fast enough for this task at significant overall data rates in the range of 50,000 samples per second. For that reason both the Grumman ATS and the Navy Real-time Telemetry Processing System [RTPS] employed unique special purpose digital computers dedicated to real-time processing of raw data samples.
In 1990, the first commercial Ethernet switch was introduced by Kalpana, which enabled the more powerful computer networks that led to the first streaming video solutions used by schools and corporations.
Practical streaming media was only made possible with advances in data compression due to the impractically high bandwidth requirements of uncompressed media. Raw digital audio encoded with pulse-code modulation (PCM) requires a bandwidth of 1.4 Mbit/s for uncompressed CD audio, while raw digital video requires a bandwidth of 168 Mbit/s for SD video and over 1,000 Mbit/s for FHD video.
Late 1990s to early 2000s
During the late 1990s and early 2000s, users had increased access to computer networks, especially the Internet. During the early 2000s, users had access to increased network bandwidth, especially in the last mile. These technological improvements facilitated the streaming of audio and video content to computer users in their homes and workplaces. There was also an increasing use of standard protocols and formats, such as TCP/IP, HTTP, and HTML, as the Internet became increasingly commercialized, which led to an infusion of investment into the sector.
The band Severe Tire Damage was the first group to perform live on the Internet. On 24 June 1993, the band was playing a gig at Xerox PARC, while elsewhere in the building, scientists were discussing new technology (the Mbone) for broadcasting on the Internet using multicasting. As proof of PARC's technology, the band's performance was broadcast and could be seen live in Australia and elsewhere. In a March 2017 interview, band member Russ Haines stated that the band had used approximately "half of the total bandwidth of the internet" to stream the performance, which was a small, low-resolution video, updated eight to twelve times per second, with audio quality that was, "at best, a bad telephone connection." In October 1994, a school music festival was webcast from the Michael Fowler Centre in Wellington, New Zealand. The technician who arranged the webcast, local council employee Richard Naylor, later commented: "We had 16 viewers in 12 countries."
RealNetworks pioneered the broadcast of a baseball game between the New York Yankees and the Seattle Mariners over the Internet in 1995. The first symphonic concert on the Internet—a collaboration between the Seattle Symphony and guest musicians Slash, Matt Cameron, and Barrett Martin—took place at the Paramount Theater in Seattle, Washington, on 10 November 1995.
In 1996, Marc Scarpa produced the first large-scale, online, live broadcast, the Adam Yauch–led Tibetan Freedom Concert, an event that would define the format of social change broadcasts. Scarpa continued to pioneer in the streaming media world with projects such as Woodstock '99, Townhall with President Clinton, and more recently Covered CA's campaign "Tell a Friend Get Covered", which was livestreamed on YouTube.
Business developments
Xing Technology was founded in 1989 and developed a JPEG streaming product called "StreamWorks". Another streaming product appeared in late 1992 and was named StarWorks. StarWorks enabled on-demand MPEG-1 full-motion videos to be randomly accessed on corporate Ethernet networks. Starworks was from Starlight Networks, which also pioneered live video streaming on Ethernet and via Internet Protocol over satellites with Hughes Network Systems. Other early companies that created streaming media technology include Progressive Networks and Protocomm prior to widespread World Wide Web usage. After the Netscape IPO in 1995 (and the release of Windows 95 with built-in TCP/IP support), usage of the Internet expanded, and many companies "went public", including Progressive Networks (which was renamed "RealNetworks", and listed on Nasdaq as "RNWK"). As the web became even more popular in the late 90s, streaming video on the internet blossomed from startups such as Vivo Software (later acquired by RealNetworks), VDOnet (acquired by RealNetworks), Precept (acquired by Cisco), and Xing (acquired by RealNetworks).
Microsoft developed a media player known as ActiveMovie in 1995 that supported streaming media and included a proprietary streaming format, which was the precursor to the streaming feature later in Windows Media Player 6.4 in 1999. In June 1999, Apple also introduced a streaming media format in its QuickTime 4 application. It was later also widely adopted on websites, along with RealPlayer and Windows Media streaming formats. The competing formats on websites required each user to download the respective applications for streaming, which resulted in many users having to have all three applications on their computer for general compatibility.
In 2000, Industryview.com launched its "world's largest streaming video archive" website to help businesses promote themselves. Webcasting became an emerging tool for business marketing and advertising that combined the immersive nature of television with the interactivity of the Web. The ability to collect data and feedback from potential customers caused this technology to gain momentum quickly.
Around 2002, the interest in a single, unified, streaming format and the widespread adoption of Adobe Flash prompted the development of a video streaming format through Flash, which was the format used in Flash-based players on video hosting sites. The first popular video streaming site, YouTube, was founded by Steve Chen, Chad Hurley, and Jawed Karim in 2005. It initially used a Flash-based player, which played MPEG-4 AVC video and AAC audio, but now defaults to HTML video. Increasing consumer demand for live streaming prompted YouTube to implement a new live streaming service for users. The company currently also offers a (secure) link that returns the available connection speed of the user.
The Recording Industry Association of America (RIAA) revealed through its 2015 earnings report that streaming services were responsible for 34.3 percent of the music industry's total revenue for the year, growing 29 percent from the previous year and becoming the largest source of income, pulling in around $2.4 billion. US streaming revenue grew 57 percent to $1.6 billion in the first half of 2016 and accounted for almost half of industry sales.
Streaming wars
The term streaming wars was coined to describe the new era (starting in the late 2010s) of competition between video streaming services such as Netflix, Amazon Prime Video, Hulu, Max, Disney+, Paramount+, Apple TV+, Peacock, and many more.
The competition among online platforms has driven them to find ways to differentiate themselves from the rest. A key differentiator is offering exclusive content, often self-produced and created for a specific market segment. When Netflix launched its streaming service in 2007, it became one of the dominant streaming platforms. This changed when Disney+ came out offering exclusive content that was not available on any other platform. Disney+ took advantage of owning popular movies and shows like Frozen and Moana, drawing in more subscribers and making it a major competitor to Netflix. Research suggests that this approach to streaming competition can be disadvantageous for consumers by increasing spending across platforms, and for the industry as a whole by dilution of the subscriber base. Once specific content is made available on a streaming service, piracy searches for the same content decrease; competition or legal availability across multiple platforms appears to deter online piracy. Exclusive content produced for subscription services such as Netflix tends to have a higher production budget than content produced exclusively for pay-per-view services, such as Amazon Prime Video.
This competition increased during the first two years of the COVID-19 pandemic as more people stayed home and watched TV. "The COVID-19 pandemic has led to a seismic shift in the film & TV industry in terms of how films are made, distributed, and screened. Many industries have been hit by the economic effects of the pandemic" (Totaro Donato). In August 2022, a CNN headline declared that "The streaming wars are over" as pandemic-era restrictions had largely ended and audience growth had stalled. This led services to focus on profit over market share by cutting production budgets, cracking down on password sharing, and introducing ad-supported tiers. A December 2022 article in The Verge echoed this, declaring an end to the "golden age of the streaming wars".
In September 2023, several streaming services formed a trade association named the Streaming Innovation Alliance (SIA), spearheaded by Charles Rivkin of the Motion Picture Association (MPA). Former U.S. representative Fred Upton and former Federal Communications Commission (FCC) acting chair Mignon Clyburn serve as senior advisors. Founding members include AfroLandTV, America Nu Network, BET+, Discovery+, Disney+, Disney+ Hotstar, ESPN+, For Us By Us Network, Hulu, Max, the MPA, MotorTrend+, Netflix, Paramount+, Peacock, Pluto TV, Star+, Telemundo, TelevisaUnivision, Vault TV, and Vix. Notably absent were Apple, Amazon, Roku, and Tubi.
Use by the general public
Advances in computer networking, combined with powerful home computers and operating systems, have made streaming media affordable and easy for the public. Stand-alone Internet radio devices emerged to offer listeners a non-technical option for listening to audio streams. These audio-streaming services became increasingly popular; music streaming reached 4 trillion streams globally in 2023—a significant increase from 2022—jumping 34% over the year.
In general, multimedia content is data-intensive, so media storage and transmission costs are still significant. Media is generally compressed for transport and storage. Increasing consumer demand for streaming high-definition (HD) content has led the industry to develop technologies such as WirelessHD and G.hn, which are optimized for streaming HD content. Many developers have introduced HD streaming apps that work on smaller devices, such as tablets and smartphones, for everyday purposes.
A media stream can be streamed either live or on demand. Live streams are generally provided by a method called true streaming. True streaming sends the information straight to the computer or device without saving it to a local file. On-demand streaming is provided by a method called progressive download. Progressive download saves the received information to a local file and then plays it from that location. On-demand streams are often saved to files for extended periods of time, while live streams are available only at one time (e.g., during a football game).
Streaming media is increasingly being coupled with the use of social media. For example, sites such as YouTube encourage social interaction in webcasts through features such as live chat, online surveys, user posting of comments online, and more. Furthermore, streaming media is increasingly being used for social business and e-learning.
The Horowitz Research State of Pay TV, OTT, and SVOD 2017 report said that 70 percent of those viewing content did so through a streaming service and that 40 percent of TV viewing was done this way, twice the number from five years earlier. Millennials, the report said, streamed 60 percent of the content.
Transition from DVD
One of the movie streaming industry's largest impacts was on the DVD industry, which drastically dropped in popularity and profitability with the mass popularization of online content. The rise of media streaming caused the downfall of many DVD rental companies, such as Blockbuster. In July 2015, The New York Times published an article about Netflix's DVD services. It stated that Netflix was continuing its DVD services with 5.3 million subscribers, which was a significant drop from the previous year. On the other hand, their streaming service had 65 million members. The shift to streaming platforms also led to the decline of DVD rental services. In July 2024, NBC News reported that Redbox, a DVD rental service that had operated for 22 years, would shut down due to the rapid rise of streaming platforms. As rental services had been declining steadily since 2010, the business had to file for bankruptcy, with 99% of households now subscribing to streaming services. Further reflecting the shift away from physical media, Best Buy has ceased selling DVDs.
Napster
Music streaming is one of the most popular ways in which consumers interact with streaming media. In the age of digitization, the private consumption of music has transformed into a public good, largely due to one player in the market: Napster.
Napster, a peer-to-peer (P2P) file-sharing network where users could upload and download MP3 files freely, broke all music industry conventions when it launched in early 1999 in Hull, Massachusetts. The platform was developed by Shawn and John Fanning as well as Sean Parker. In an interview from 2009, Shawn Fanning explained that Napster "was something that came to me as a result of seeing a sort of unmet need and the passion people had for being able to find all this music, particularly a lot of the obscure stuff, which wouldn't be something you go to a record store and purchase, so it felt like a problem worth solving."
Not only did this development disrupt the music industry by making songs that previously required payment freely accessible to any Napster user, but it also demonstrated the power of P2P networks in turning any digital file into a public, shareable good. For the brief period of time that Napster existed, mp3 files fundamentally changed as a type of good. Songs were no longer financially excludable, barring access to a computer with internet access, and they were non-rival, meaning that if one person downloaded a song, it did not prevent another user from doing the same. Napster, like most other providers of public goods, faced the free-rider problem. Every user benefits when an individual uploads an mp3 file, but there is no requirement or mechanism that forces all users to share their music. Generally, the platform encouraged sharing; users who downloaded files from others often had their own files available for upload as well. However, not everyone chose to share their files, and there was no built-in incentive that compelled users to do so.
This structure revolutionized the consumer's perception of ownership over digital goods; it made music freely replicable. Napster quickly garnered millions of users, growing faster than any other business in history. At the peak of its existence, Napster boasted about 80 million users globally. The site gained so much traffic that many college campuses had to block access to Napster because it created network congestion from so many students sharing music files.
The advent of Napster sparked the creation of numerous other P2P sites, including LimeWire (2000), BitTorrent (2001), and the Pirate Bay (2003). The reign of P2P networks was short-lived. The first to fall was Napster in 2001. Numerous lawsuits were filed against Napster by various record labels, all of which were subsidiaries of Universal Music Group, Sony Music Entertainment, Warner Music Group, or EMI. In addition to this, the Recording Industry Association of America (RIAA) also filed a lawsuit against Napster on the grounds of unauthorized distribution of copyrighted material, which ultimately led Napster to shut down in 2001. In an interview with the New York Times, Gary Stiffelman, who represents Eminem, Aerosmith, and TLC, explained, "I'm not an opponent of artists' music being included in these services, I'm just an opponent of their revenue not being shared."
The fight for intellectual property rights: A&M Records, Inc. v. Napster, Inc.
The lawsuit A&M Records, Inc. v. Napster, Inc. fundamentally changed the way consumers interact with music streaming. It was argued on 2 October 2000 and was decided on 12 February 2001. The Court of Appeals for the Ninth Circuit ruled that a P2P file-sharing service could be held liable for contributory and vicarious infringement of copyright, serving as a landmark decision for intellectual property law.
The first issue that the Court addressed was fair use, which says that otherwise infringing activities are permissible so long as they are for purposes "such as criticism, comment, news reporting, teaching [...] scholarship, or research." Judge Beezer, the judge for this case, noted that Napster claimed that its services fit "three specific alleged fair uses: sampling, where users make temporary copies of a work before purchasing; space-shifting, where users access a sound recording through the Napster system that they already own in audio CD format; and permissive distribution of recordings by both new and established artists." Judge Beezer found that Napster did not fit these criteria, instead enabling their users to repeatedly copy music, which would affect the market value of the copyrighted good.
The second claim by the plaintiffs was that Napster was actively contributing to copyright infringement since it had knowledge of widespread file sharing on its platform. Since Napster took no action to reduce infringement and financially benefited from repeated use, the court ruled against the P2P site. The court found that "as much as eighty-seven percent of the files available on Napster may be copyrighted and more than seventy percent may be owned or administered by plaintiffs."
The injunction ordered against Napster ended the brief period in which music streaming was a public good – non-rival and non-excludable in nature. Other P2P networks had some success at sharing MP3s, though they all met a similar fate in court. The ruling set the precedent that copyrighted digital content cannot be freely replicated and shared unless given consent by the owner, thereby strengthening the property rights of artists and record labels alike.
Music streaming platforms
Although music streaming is no longer a freely replicable public good, streaming platforms such as Spotify, Deezer, Apple Music, SoundCloud, YouTube Music, and Amazon Music have shifted music streaming to a club-type good. While some platforms, most notably Spotify, give customers access to a freemium service that enables the use of limited features for exposure to advertisements, most companies operate under a premium subscription model. Under such circumstances, music streaming is financially excludable, requiring that customers pay a monthly fee for access to a music library, but non-rival, since one customer's use does not impair another's.
A 2021 article in The New York Times states that "streaming saved music" because it provided steady monthly revenue. Spotify, in particular, offers a free tier alongside a paid premium tier that removes advertisements. This allows people to stream music anywhere from their devices, without having to rely on CDs anymore.
There is competition between services, similar to but smaller in scale than the streaming wars for video media. Spotify has over 207 million users in 78 countries, Apple Music has about 60 million, and SoundCloud has 175 million. All platforms provide varying degrees of accessibility. Apple Music and Prime Music only offer their services for paid subscribers, whereas Spotify and SoundCloud offer freemium and premium services. Napster, owned by Rhapsody since 2011, has resurfaced as a music streaming platform offering subscription-based services to over 4.5 million users.
The music industry's response to music streaming was initially negative. Along with music piracy, streaming services disrupted the market and contributed to the fall in US revenue from $14.6 billion in 1999 to $6.3 billion in 2009. CDs and single-track downloads were not selling because content was freely available on the Internet. By 2018, however, music streaming revenue exceeded that of traditional revenue streams (e.g. record sales, album sales, downloads). Streaming revenue is now one of the largest driving forces behind the growth in the music industry.
COVID-19 pandemic
By August 2020, the COVID-19 pandemic had streaming services busier than ever. The pandemic contributed to a surge in subscriptions: in the UK alone, 12 million people joined a streaming service that they had not previously used, and global subscriptions passed 1 billion. In the first three months of 2020, nearly 15.7 million people signed up for Netflix; with people stuck at home and facing lockdowns, Netflix provided a much-needed distraction.
An impact analysis of 2020 data by the International Confederation of Societies of Authors and Composers (CISAC) indicated that remuneration from digital streaming of music increased, with a strong rise in digital royalty collection (up 16.6% to EUR 2.4 billion), but that this did not compensate for authors' overall loss of income from concerts, public performance and broadcast. The International Federation of the Phonographic Industry (IFPI) compiled the music industry's initiatives around the world related to COVID-19. In its State of the Industry report, it recorded that the global recorded music market grew by 7.4% in 2020, the sixth consecutive year of growth. This growth was driven by streaming, mostly by paid subscription streaming revenues, which increased by 18.5%, fueled by 443 million users of subscription accounts by the end of 2020.
The COVID-19 pandemic has also driven an increase in misinformation and disinformation, particularly on streaming platforms like YouTube and podcasts.
Local/home streaming
Streaming also refers to the offline streaming of multimedia at home. This is made possible by technologies such as DLNA, which allow devices on the same local network to connect to each other and share media. Such capabilities are heightened using network-attached storage (NAS) devices at home, or using specialized software like Plex Media Server, Jellyfin or TwonkyMedia.
Technologies
Bandwidth
A broadband speed of 2 Mbit/s or more is recommended for streaming standard-definition video, for example to a Roku, Apple TV, Google TV or a Sony TV Blu-ray Disc Player; 5 Mbit/s is recommended for high-definition content and 9 Mbit/s for ultra-high-definition content. Streaming media storage size is calculated from the streaming bandwidth and the length of the media using the following formula (for a single user and file): storage size in megabytes equals length (in seconds) × bit rate (in bit/s) / (8 × 1024 × 1024). For example, one hour of digital video encoded at 300 kbit/s (a typical broadband video bit rate in 2005) requires around 128 MB of storage: (3,600 s × 300,000 bit/s) / (8 × 1024 × 1024) ≈ 128 MB.
If the file is stored on a server for on-demand streaming and this stream is viewed by 1,000 people at the same time using a Unicast protocol, the requirement is 300 kbit/s × 1,000 = 300,000 kbit/s = 300 Mbit/s of bandwidth. This is equivalent to around 135 GB per hour. Using a multicast protocol the server sends out only a single stream that is common to all users. Therefore, such a stream would only use 300 kbit/s of server bandwidth.
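To make the arithmetic above concrete, the short Python sketch below reproduces both the storage and the bandwidth estimates. It is purely illustrative; the bit rate, audience size, and duration are simply the figures quoted in this section.

def storage_megabytes(length_seconds, bitrate_bps):
    # Storage size in MB = length (s) x bit rate (bit/s) / (8 x 1024 x 1024)
    return length_seconds * bitrate_bps / (8 * 1024 * 1024)

print(round(storage_megabytes(3600, 300_000)))       # ~129, i.e. around 128 MB as above

stream_bps = 300_000
viewers = 1000
unicast_bps = stream_bps * viewers                   # one copy of the stream per viewer
multicast_bps = stream_bps                           # one shared copy for all viewers
print(unicast_bps / 1_000_000, "Mbit/s unicast")     # 300.0 Mbit/s
print(unicast_bps * 3600 / 8 / 1e9, "GB per hour")   # 135.0 GB transferred per hour
print(multicast_bps / 1000, "kbit/s multicast")      # 300.0 kbit/s of server bandwidth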
In 2018 video was more than 60% of data traffic worldwide and accounted for 80% of growth in data usage.
Protocols
Video and audio streams are compressed to make the file size smaller. Audio coding formats include MP3, Vorbis, AAC and Opus. Video coding formats include H.264, HEVC, VP8 and VP9. Encoded audio and video streams are assembled in a container bitstream such as MP4, FLV, WebM, ASF or ISMA. The bitstream is delivered from a streaming server to a streaming client (e.g., the computer user with their Internet-connected laptop) using a transport protocol, such as Adobe's RTMP or RTP.
In the 2010s, technologies such as Apple's HLS, Microsoft's Smooth Streaming, Adobe's HDS and non-proprietary formats such as MPEG-DASH emerged to enable adaptive bitrate streaming over HTTP as an alternative to using proprietary transport protocols. Often, a streaming transport protocol is used to send video from an event venue to a cloud transcoding service and content delivery network, which then uses HTTP-based transport protocols to distribute the video to individual homes and users. The streaming client (the end user) may interact with the streaming server using a control protocol, such as MMS or RTSP.
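The core idea of adaptive bitrate streaming can be sketched in a few lines of Python. This is only a toy selection rule, not any particular HLS or MPEG-DASH implementation; the rendition ladder and the 0.8 safety margin are invented for illustration.

RENDITIONS_KBPS = [400, 1200, 2500, 5000]   # hypothetical encodings of the same content

def pick_rendition(measured_throughput_kbps, margin=0.8):
    # Before fetching the next segment, choose the highest rendition whose
    # bit rate fits within the recently measured throughput, minus a margin.
    usable = measured_throughput_kbps * margin
    fitting = [r for r in RENDITIONS_KBPS if r <= usable]
    return fitting[-1] if fitting else RENDITIONS_KBPS[0]

print(pick_rendition(3500))   # 2500: the highest rendition under 0.8 * 3500 kbit/s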
The quality of the interaction between servers and users depends on the workload of the streaming service; as more users attempt to access a service, quality may be affected by resource constraints in the service. One method of addressing this is to deploy clusters of streaming servers: regional servers are spread across the network and managed by a single central server that holds copies of all the media files as well as the IP addresses of the regional servers. This central server uses load balancing and scheduling algorithms to redirect users to nearby regional servers capable of accommodating them. The approach also allows the central server to provide streaming data both to users and to regional servers, using FFmpeg libraries if required, which demands that the central server have powerful data processing and immense storage capabilities. In return, workloads on the streaming backbone network are balanced and alleviated, allowing for optimal streaming quality.
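The redirection step can be illustrated with a minimal Python sketch. The server names, latencies, loads, and capacities are all invented, and a real scheduler would weigh many more factors.

REGIONAL_SERVERS = {
    # name: (latency to this user in ms, current streams, capacity)
    "region-a": (25, 480, 500),
    "region-b": (60, 120, 500),
    "region-c": (110, 300, 500),
}

def choose_server(servers):
    # Redirect the user to the lowest-latency regional server that still has
    # spare capacity; fall back to the central server if none does.
    candidates = [(lat, name) for name, (lat, load, cap) in servers.items() if load < cap]
    return min(candidates)[1] if candidates else "central"

print(choose_server(REGIONAL_SERVERS))   # "region-a" in this made-up example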
Designing a network protocol to support streaming media raises many problems. Datagram protocols, such as the User Datagram Protocol (UDP), send the media stream as a series of small packets. This is simple and efficient; however, there is no mechanism within the protocol to guarantee delivery. It is up to the receiving application to detect loss or corruption and recover data using error correction techniques. If data is lost, the stream may suffer a dropout. The Real-Time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP) and the Real-time Transport Control Protocol (RTCP) were specifically designed to stream media over networks. RTSP runs over a variety of transport protocols, while the latter two are built on top of UDP.
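As a rough illustration of why the receiving application has to detect loss itself, the sketch below numbers each UDP datagram so the receiver can notice a gap, broadly the role sequence numbers play in RTP. The address, port, and payload handling are placeholders, not a real streaming protocol.

import socket

ADDR = ("127.0.0.1", 50000)   # placeholder address and port

def send_stream(chunks):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, chunk in enumerate(chunks):
        # Prefix each datagram with a 4-byte sequence number; UDP itself gives
        # no guarantee that the datagram will arrive, or arrive in order.
        sock.sendto(seq.to_bytes(4, "big") + chunk, ADDR)

def receive_one(sock, expected_seq):
    data, _ = sock.recvfrom(65535)
    seq = int.from_bytes(data[:4], "big")
    if seq != expected_seq:
        # The application, not the protocol, must notice and conceal the dropout.
        print(f"packets {expected_seq}..{seq - 1} lost")
    return seq + 1, data[4:]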
HTTP adaptive bitrate streaming is based on HTTP progressive download, but contrary to the previous approach, here the files are very small, so that they can be compared to the streaming of packets, much like the case of using RTSP and RTP. Reliable protocols, such as the Transmission Control Protocol (TCP), guarantee correct delivery of each bit in the media stream. It means, however, that when there is data loss on the network, the media stream stalls while the protocol handlers detect the loss and retransmit the missing data. Clients can minimize this effect by buffering data for display. While delay due to buffering is acceptable in video-on-demand scenarios, users of interactive applications such as video conferencing will experience a loss of fidelity if the delay caused by buffering exceeds 200 ms.
Unicast protocols send a separate copy of the media stream from the server to each recipient. Unicast is the norm for most Internet connections but does not scale well when many users want to view the same television program concurrently. Multicast protocols were developed to reduce server and network loads resulting from duplicate data streams that occur when many recipients receive unicast content streams independently. These protocols send a single stream from the source to a group of recipients. Depending on the network infrastructure and type, multicast transmission may or may not be feasible. One potential disadvantage of multicasting is the loss of video on demand functionality. Continuous streaming of radio or television material usually precludes the recipient's ability to control playback. However, this problem can be mitigated by elements such as caching servers, digital set-top boxes, and buffered media players.
IP multicast provides a means to send a single media stream to a group of recipients on a computer network. A connection management protocol, usually Internet Group Management Protocol, is used to manage the delivery of multicast streams to the groups of recipients on a LAN. One of the challenges in deploying IP multicast is that routers and firewalls between LANs must allow the passage of packets destined to multicast groups. If the organization that is serving the content has control over the network between server and recipients (i.e., educational, government, and corporate intranets), then routing protocols such as Protocol Independent Multicast can be used to deliver stream content to multiple local area network segments.
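A minimal Python receiver shows the group-membership step that IGMP manages. The group address and port are arbitrary illustration values, and whether the traffic actually arrives depends on the routers and firewalls in between, as noted above.

import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # arbitrary example multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group triggers IGMP membership reports on the local network,
# asking for the single shared copy of the stream to be delivered to this host.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, sender = sock.recvfrom(65535)   # one packet of the shared stream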
Peer-to-peer (P2P) protocols arrange for prerecorded streams to be sent between computers. This prevents the server and its network connections from becoming a bottleneck. However, it raises technical, performance, security, quality, and business issues.
Content delivery networks (CDNs) use intermediate servers to distribute the load. Internet-compatible unicast delivery is used between CDN nodes and streaming destinations.
Recording
Media that is livestreamed can be recorded through certain media players, such as VLC player, or through the use of a screen recorder. Live-streaming platforms such as Twitch may also incorporate a video on demand system that allows automatic recording of live broadcasts so that they can be watched later. YouTube also has recordings of live broadcasts, including television shows aired on major networks. These streams have the potential to be recorded by anyone who has access to them, whether legally or otherwise.
Recordings allow people to watch films they would not otherwise have access to, or to experience a music festival they could not get tickets for. Live streaming platforms have also created new ways for people to interact with content. Many celebrities began live streaming during COVID-19 through platforms such as Instagram, YouTube, and TikTok, offering an alternative form of entertainment when concerts were postponed. For example, Miley Cyrus hosted a series in which she live streamed and sang songs during the pandemic, joined from home by celebrity guests including Justin Bieber, Selena Gomez, and Demi Lovato. Phoebe Bridgers similarly sang for over 10,000 people while live streaming. Live streaming and recording also allow fans to communicate with these artists through chats and likes.
View recommendation
Most streaming services feature a recommender system that suggests viewing based on each user's view history in conjunction with all viewers' aggregated view histories. Rather than relying on subjective categorization of content by curators, the assumption is that, given the immense amount of data collected on viewing habits, the choices of those who are first to view content can be algorithmically extrapolated to the whole user base, with the probability that a user will choose and enjoy the recommended content estimated more accurately as more data is collected.
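A toy Python sketch of that extrapolation idea: recommend titles that most often co-occur with what a user has already watched across other viewers' histories. The titles and histories are invented, and production systems use far more sophisticated models.

from collections import Counter

HISTORIES = [            # invented per-user sets of watched titles
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "C", "D"},
    {"A", "D"},
]

def recommend(watched, histories, k=2):
    # Score each unseen title by how often it appears alongside the user's
    # titles in other viewers' histories, then return the top k.
    scores = Counter()
    for other in histories:
        if watched & other:
            scores.update(other - watched)
    return [title for title, _ in scores.most_common(k)]

print(recommend({"A"}, HISTORIES))   # ['B', 'C'] for this toy data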
Applications and marketing
Typical applications of streaming include long video lectures delivered online; such lectures can run to any length, and viewers can pause or replay them at arbitrary points. Streaming also enables new content marketing concepts. For example, the Berlin Philharmonic Orchestra sells Internet live streams of whole concerts, instead of several CDs or similar fixed media, through its Digital Concert Hall, using YouTube for trailers. These online concerts are also shown in cinemas around the world. A similar concept is used by the Metropolitan Opera in New York. There is also a livestream from the International Space Station. In video entertainment, video streaming platforms like Netflix, Hulu, and Disney+ are mainstream elements of the media industry.
Marketers have found many opportunities offered by streaming media and the platforms that offer them, especially in light of the significant increase in the use of streaming media during COVID lockdowns from 2020 onwards. While revenue and placement of traditional advertising continued to decrease, digital marketing increased by 15% in 2021, with digital media and search representing 65% of the expenditures.
A case study commissioned by the WIPO indicates that streaming services attract advertising budgets through the opportunities provided by interactivity and the use of user data, resulting in personalization on a mass scale combined with content marketing. Targeted marketing is expanding with the use of artificial intelligence, in particular programmatic advertising, a tool that helps advertisers set their campaign parameters and decide whether to buy advertising space online. One example of such advertising space acquisition is Real-Time Bidding (RTB).
Challenges
Copyright issues
For over-the-top media service (OTT) platforms, original content attracts additional subscribers. This presents copyright issues and the potential for international exploitation through streaming, widespread use of standards, and metadata in digital files. The WIPO has indicated several basic copyright issues arising for those pursuing work in the film and music industries in the era of streaming.
Streaming copyrighted content can involve making infringing copies of the works in question. The recording and distribution of streamed content is also an issue for many companies that rely on revenue based on views or attendance.
| Technology | Broadcasting | null |
28706 | https://en.wikipedia.org/wiki/SECAM | SECAM | SECAM, also written SÉCAM (Séquentiel de couleur à mémoire, French for color sequential with memory), is an analog color television system that was used in France, Russia and some other countries or territories of Europe and Africa. It was one of three major analog color television standards, the others being PAL and NTSC. Like PAL, a SECAM picture is also made up of 625 interlaced lines and is displayed at a rate of 25 frames per second (except SECAM-M). However, due to the way SECAM processes color information, it is not compatible with the PAL video format standard. SECAM video is composite video; the luminance (luma, monochrome image) and chrominance (chroma, color applied to the monochrome image) are transmitted together as one signal.
All the countries using SECAM have either converted to Digital Video Broadcasting (DVB), the new pan-European standard for digital television, or are currently in the process of conversion. SECAM remained a major standard into the 2000s.
History
Invention
Development of SECAM predates PAL, and began in 1956 by a team led by Henri de France working at Compagnie Française de Télévision (later bought by Thomson, now Technicolor). NTSC was considered undesirable in Europe because of its tint problem, requiring an additional control, which SECAM (and PAL) solved.
Some have argued that the primary motivation for the development of SECAM in France was to protect French television equipment manufacturers. However, incompatibility had started with the earlier unusual decision to adopt positive video modulation for 819-line French broadcast signals (only the UK's 405-line was similar; widely adopted 525- and 625-line systems used negative video).
The first proposed system was called SECAM I in 1961, followed by other studies to improve compatibility and image quality, but it was too soon for a wide introduction. A version of SECAM for the French 819-line television standard was devised and tested, but never introduced.
Following a pan-European agreement to introduce color TV only on 625-line broadcasts, France had to switch to that system, which happened in 1963 with the introduction of "la deuxième chaîne ORTF" (now France 2), the second national TV network.
Further improvements during 1963 and 1964 to the standard were called SECAM II and SECAM III, with the latter being presented at the 1965 CCIR General Assembly in Vienna, and adopted by France and the Soviet Union.
Soviet technicians were involved in a separate development of the standard, creating an incompatible variant called NIIR or SECAM IV, which was not deployed. The team was working in Moscow's Telecentrum. The NIIR designation comes from the name of the Nautchno-Issledovatelskiy Institut Radio (NIIR, rus. Научно-Исследовательский Институт Радио), a Soviet research institute involved in the studies. Two standards were developed: Non-linear NIIR, in which a process analogous to gamma correction is used, and Linear NIIR or SECAM IV that omits this process. SECAM IV was proposed by France and USSR at the 1966 Oslo CCIR conference and demonstrated in London.
Further improvements were SECAM III A, followed by SECAM III B, the system adopted for general use in 1967.
Implementation
Tested from 1963 on the second French national network "la deuxième chaîne ORTF" (now France 2), the SECAM standard was adopted in France and launched on 1 October 1967. A group of four suited men—a presenter (Georges Gorse, Minister of Information) and three contributors to the system's development—were shown standing in a studio. Following a count from 10, at 2:15 pm the black-and-white image switched to color; the presenter then declared "Et voici la couleur !" (fr: And here is color!) In the same year of 1967, CLT of Lebanon became the third television station in the world, after France 2 in France and the Soviet Central Television in the Soviet Union, to broadcast in color utilizing the French SECAM technology.
The first color television sets cost 5000 francs. Color TV was not very popular initially; only about 1500 people watched the inaugural program in color. A year later in 1968, only 200,000 sets had been sold of an expected million. This pattern was similar to the earlier slow build-up of color television popularity in the US.
In March 1969, East Germany decided to adopt SECAM III B. The adoption of SECAM in Eastern Europe has been attributed to Cold War political machinations. According to this explanation, East German political authorities were well aware of West German television's popularity and adopted SECAM rather than the PAL encoding used in West Germany. This did not hinder mutual reception in black and white, because the underlying TV standards remained essentially the same in both parts of Germany. However, East Germans responded by buying PAL decoders for their SECAM sets. Eventually, the government in East Berlin stopped paying attention to so-called "Republikflucht via Fernsehen", or "defection via television". Later East German–produced TV sets, such as the RFT Chromat, even included a dual standard PAL/SECAM decoder as an option.
Another explanation for the Eastern European adoption of SECAM, led by the Soviet Union, is that the Russians had extremely long distribution lines between broadcasting stations and transmitters. Long co-axial cables or microwave links can cause amplitude and phase variations, which do not affect SECAM signals.
Other countries, notably the United Kingdom and Italy, briefly experimented with SECAM before opting for PAL. SECAM was adopted by former French and Belgian colonies in Africa, as well as Greece, Cyprus, and Eastern Bloc countries (except for Romania) and some Middle Eastern countries.
European efforts during the 1980s and 1990s towards the creation of a unified analog standard, resulting in the MAC standards, still used the sequential color transmission idea of SECAM, with only one of the time-compressed U and V components being transmitted on a given line. The D2-MAC standard saw some brief real market deployment, particularly in northern European countries. To some extent, this idea is still present in the 4:2:0 digital sampling format, which is used by most digital video media available to the public. In this case, however, color resolution is halved in both the horizontal and vertical directions, thus yielding a more symmetrical behavior.
Decline
With the fall of communism and following a period when multi-standard TV sets became a commodity in the early 2000s, many Eastern European countries decided to switch to the West German-developed PAL system. Yet SECAM remained in use in Russia, Belarus and the French-speaking African countries. In the late 2000s, SECAM started a process of being phased out and replaced by DVB.
Unlike some other manufacturers, the company where SECAM was invented, Technicolor (known as Thomson until 2010), still sold television sets worldwide under different brands until the company sold its Trademark Licensing operations in 2022; this may be due in part to the legacy of SECAM. Thomson bought the company that developed PAL, Telefunken, and even co-owned the RCA brand – RCA being the creator of NTSC. Thomson also co-authored the ATSC standards which are used for American high-definition television.
Design
Just as with the other color standards adopted for broadcast usage over the world, SECAM is a standard that permits existing monochrome television receivers predating its introduction to continue to be operated as monochrome televisions. Because of this compatibility requirement, color standards added a second signal to the basic monochrome signal, which carries the color information. The color information is called chrominance, or C for short, while the black-and-white information is called the luminance, or Y for short. Monochrome television receivers only display luminance, while color receivers process both signals. The YDbDr color space is used to encode the Y (luminance) and Dr, Db (red and blue color difference signals that make up chrominance) components.
Additionally, for compatibility, it is required to use no more bandwidth than the monochrome signal alone; the color signal has to be somehow inserted into the monochrome signal, without disturbing it. This insertion is possible because the bandwidth of the monochrome TV signal is generally not fully utilized; the high-frequency portions of the signal, corresponding to fine details in the image, were often not recorded by contemporary video equipment, or not visible on consumer televisions anyway, especially after transmission. This section of the spectrum was thus used to carry color information, at the cost of reducing the possible resolution.
European monochrome standards were not compatible when SECAM was first being considered. France had introduced an 819-line system that used 14 MHz of bandwidth (System E), much more than the 5 MHz standard used in the UK (System A) or the 6 MHz in the US (System M). The closest thing to a standard in Europe at the time was the 8 MHz 625-line system (System D), which had originated in Germany and the Soviet Union and quickly became one of the most used systems. An effort to harmonize European broadcasts on the 625-line system started in the 1950s and was first implemented in Ireland in 1962 (System I).
SECAM thus had the added issue of having to be compatible both with their existing 819-line system as well as their future broadcasts on the 625-line system. As the latter used much less bandwidth, it was this standard that defined the amount of color information that could be carried. In the 8 MHz standard, the signal is split into two parts, the video signal, and the audio signal, each with its own carrier frequency. For any given channel, one carrier is located 1.25 MHz above the channel's listed frequency and indicates the location of the luminance portion of the signal. A second carrier is located 6 MHz above the luma carrier, indicating the center of the audio signal.
To add color to the signal, SECAM adds another carrier located 4.4336... MHz above the luma carrier. The chroma signal is centered on this carrier, overlapping the upper part of the luma frequency range. Because the information of most scan lines differ little from their immediate neighbors, both luma and chroma signals are close to being periodic on the horizontal scan frequency, and thus their power spectra tends to be concentrated on multiples of such frequency. The specific color carrier frequency of SECAM results from carefully choosing it so that the higher-powered harmonics of the modulated chroma and luma signals are apart from each other and from the sound carrier, thereby minimizing crosstalk between the three signals.
The color space perceived by humans is three-dimensional because of the nature of their retinas, which include specific detectors for red, green and blue light. So in addition to luminance, which is already carried by the existing monochrome signal, color requires sending two additional signals. The human retina is more sensitive to green light than to red (3:1) or blue (9:1) light. Because of this, the red (R) and blue (B) signals are usually chosen to be sent along with luma, but with comparably less resolution, to save bandwidth while impacting the perceived image quality the least. (Also, the green signal is on average more closely correlated with luma, making it a poor choice of signal to send separately.) To minimize crosstalk with luma and increase compatibility with existing monochrome TV sets, the R and B signals are usually sent as differences from luma (Y): R − Y and B − Y. This way, for an image that contains little color, its color difference signals tend to zero and its color-encoded signal converges to its equivalent monochrome signal.
Colorimetry
SECAM colorimetry was similar to PAL, as defined by the ITU in Recommendation BT.470. Yet the same document indicates that for SECAM sets existing at the time of its 1998 revision, the following parameters (similar to the original 1953 color NTSC specification) could be allowed:
The assumed display gamma was also defined as 2.8.
Luma (Y) is derived from the red, green, and blue (R', G', B') gamma pre-corrected primary signals: Y = 0.299R' + 0.587G' + 0.114B'.
Dr and Db are the red and blue color difference signals used to calculate chrominance: Dr = −1.902(R' − Y) and Db = 1.505(B' − Y).
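As a rough illustration, the conversion can be written out in Python. The luma weights and the Db/Dr scaling factors used here are the commonly quoted YDbDr values, included as an assumption rather than a transcription of the broadcast specification.

def rgb_to_ydbdr(r, g, b):
    # r, g, b are gamma pre-corrected primaries in the range 0..1.
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    db = 1.505 * (b - y)                    # blue color difference (assumed scaling)
    dr = -1.902 * (r - y)                   # red color difference (assumed scaling)
    return y, db, dr

print(rgb_to_ydbdr(0.5, 0.5, 0.5))   # a gray input: both color differences are zero
print(rgb_to_ydbdr(1.0, 0.0, 0.0))   # pure red: strong Dr component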
Comparison to PAL and NTSC
SECAM differs significantly from the other color systems by the way the color difference signals are carried. In NTSC and PAL, each line carries color difference signals encoded using quadrature amplitude modulation (QAM). To demodulate such a signal, knowledge of the phase of the carrier signal is needed. This information is sent along the video signal at the start of every scan line in the form of a short burst of the color carrier itself, called a "colorburst". A phase error during QAM demodulation produces crosstalk between the color difference signals. On NTSC this creates Hue and Saturation errors, manually corrected for with a "tint" control on the receiving TV set; while PAL only suffers from Saturation errors. SECAM is free of this problem.
SECAM uses frequency modulation (FM) to encode chrominance information on the color carrier, which does not require knowledge of the carrier phase to demodulate. However, the simple FM scheme used allows the transmission of only one signal, not the two required for color. To address this, SECAM broadcasts Dr and Db separately on alternating scan lines. To produce full color, the color information of one scan line is briefly stored in an analog delay line adjusted so that the signal exits the delay at the precise start of the next line. This allows the television to combine the color difference signal transmitted on one line with the one transmitted on the next, and thereby produce a full color gamut on every line. Because SECAM transmits only one chrominance component at a time, it is free of the color artifacts ("dot crawl") present in NTSC and PAL that result from the combined transmission of color difference signals.
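The line-alternating scheme and the delay line can be mimicked with a small Python sketch. The numeric values are invented, and a real receiver works on the analog FM signal rather than on numbers like these.

# One color difference component is transmitted per scan line; the receiver
# keeps the previous line's component (the role of the analog delay line)
# so that both Db and Dr are available on every line.
transmitted = [("Db", 0.10), ("Dr", -0.20), ("Db", 0.12), ("Dr", -0.18)]

stored = {}   # stands in for the one-line delay line
for line_no, (component, value) in enumerate(transmitted):
    stored[component] = value
    if len(stored) == 2:
        print(f"line {line_no}: Db={stored['Db']:+.2f} Dr={stored['Dr']:+.2f}")
    else:
        print(f"line {line_no}: waiting for the other component")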
This means that the vertical color resolution of a field is halved compared to NTSC. However, the color signals of all color TV systems of the time were encoded in a narrower band than their luma signals, so color information had lower horizontal resolution compared to luma in all systems. This matches the human retina, which has higher luminance resolution than color resolution. On SECAM, the loss of vertical color resolution makes the color resolution closer to uniform in both axes and has little visual effect. The idea of reducing the vertical color resolution comes from Henri de France, who observed that color information is approximately identical for two successive lines. Because the color information was designed to be a cheap, backwards compatible addition to the monochrome signal, the color signal has a lower bandwidth than the luminance signal, and hence lower horizontal resolution. Fortunately, the human visual system is similar in design: it perceives changes in luminance at a higher resolution than changes in chrominance, so this asymmetry has minimal visual impact. It was therefore also logical to reduce the vertical color resolution. A similar paradox applies to the vertical resolution in television in general: reducing the bandwidth of the video signal will preserve the vertical resolution, even if the image loses sharpness and is smudged in the horizontal direction. Hence, video could be sharper vertically than horizontally. Additionally, transmitting an image with too much vertical detail will cause annoying flicker on interlaced television screens, as small details will only appear on a single line (in one of the two interlaced fields), and hence be refreshed at half the frequency. (This is a consequence of interlaced scanning that is obviated by progressive scan.) Computer-generated text and inserts have to be carefully low-pass filtered to prevent this.
The color difference signals in SECAM are calculated in the YDbDr color space, which is a scaled version of the YUV color space. This encoding is better suited to the transmission of only one signal at a time. FM modulation of the color information allows SECAM to be completely free of the dot crawl problem commonly encountered with the other analog standards. SECAM transmissions are more robust over longer distances than NTSC or PAL. However, owing to their FM nature, the color signal remains present, although at reduced amplitude, even in monochrome portions of the image, thus being subject to stronger cross color even though color crawl of the PAL type does not exist. Though most of the pattern is removed from PAL and NTSC-encoded signals with a comb filter (designed to segregate the two signals where the luma spectrum may overlap into the spectral space used by the chroma) by modern displays, some can still be left in certain parts of the picture. Such parts are usually sharp edges on the picture, sudden color or brightness changes along the picture or certain repeating patterns, such as a checker board on clothing. FM SECAM is a continuous spectrum, so unlike PAL and NTSC even a perfect digital comb filter could not entirely separate SECAM colour and luminance signals.
Disadvantages
Unlike PAL or NTSC, analog SECAM programming cannot easily be edited in its native analog form. Because it uses frequency modulation, SECAM is not linear with respect to the input image (this is also what protects it against signal distortion), so electrically mixing two (synchronized) SECAM signals does not yield a valid SECAM signal, unlike with analog PAL or NTSC. For this reason, to mix two SECAM signals, they must be demodulated, the demodulated signals mixed, and the result remodulated. Hence, post-production is often done in PAL, or in component formats, with the result encoded or transcoded into SECAM at the point of transmission. Reducing the costs of running television stations is one reason for some countries' switchovers to PAL.
Most TVs currently sold in SECAM countries support both SECAM and PAL, and more recently composite video NTSC as well (though not usually broadcast NTSC, that is, they cannot accept a broadcast signal from an antenna). Although the older analog camcorders (VHS, VHS-C) were produced in SECAM versions, none of the 8 mm or Hi-band models (S-VHS, S-VHS-C, and Hi-8) recorded it directly. Camcorders and VCRs of these standards sold in SECAM countries are internally PAL. The result could be converted back to SECAM in some models; most people buying such expensive equipment would have a multistandard TV set and as such would not need a conversion. Digital camcorders or DVD players (with the exception of some early models) do not accept or output a SECAM analog signal. However, this is of dwindling importance: since 1980 most European domestic video equipment uses French-originated SCART connectors, allowing the transmission of RGB signals between devices. This eliminates the legacy of PAL, SECAM, and NTSC color sub carrier standards.
In general, modern professional equipment is now all-digital, and uses component-based digital interconnects such as CCIR 601 to eliminate the need for any analog processing prior to the final modulation of the analog signal for broadcast. However, large installed bases of analog professional equipment still exist, particularly in third world countries.
Varieties
Broadcast systems L, B/G, D/K, H, K, M
There are six varieties of SECAM, according to each of the broadcast system it was used with:
SECAM-L: Used only in France, Luxembourg (only RTL9 on channel 21 from Dudelange) and Télé Monte-Carlo transmitters in the south of France.
SECAM-B/G: Used in parts of the Middle East, former East Germany, Greece and Cyprus
SECAM-D/K: Used in the Commonwealth of Independent States and most parts of Central and Eastern Europe (this is simply SECAM used with the D and K monochrome TV transmission standards) although most Central and Eastern European countries have now migrated to other systems.
SECAM-H: Around 1983–1984 a new color identification standard ("Line SECAM or SECAM-H") was introduced in order to make more space available inside the signal for adding teletext information (originally according to the Antiope standard). Identification bursts were made per-line (like in PAL) rather than per-picture. Very old SECAM TV sets might not be able to display colour for today's broadcasts, although sets manufactured after the mid-1970s should be able to receive either variant.
SECAM-K: The standard used in France's overseas possessions (as well as African countries that were once ruled by France) was slightly different from the SECAM used in Metropolitan France. The SECAM standard used in Metropolitan France used the SECAM-L and a variant of the channel information for VHF channels 2–10. French overseas possessions and many French-speaking African countries use the SECAM-K1 standard and a mutually incompatible variant of the channel information for VHF channels 4-9 (not channels 2–10).
SECAM-M: Between 1970 and 1991, SECAM-M was used in Cambodia, Laos, and Vietnam (Hanoi and other northern cities).
MESECAM (home recording)
MESECAM is a method of recording SECAM color signals onto VHS or Betamax video tape. It should not be mistaken for a broadcast standard.
"Native" SECAM recording (marketing term: "SECAM-West") was devised for machines sold for the French (and adjacent countries) market. At a later stage, countries where both PAL and SECAM signals were available developed a cheap method of converting PAL video machines to record SECAM signals, using only the PAL recording circuitry. Although being a workaround, MESECAM is much more widespread than "native" SECAM. It has been the only method of recording SECAM signals to VHS in almost all countries that used SECAM, including the Middle East and all countries in Eastern Europe.
A tape produced by this method is not compatible with "native" SECAM tapes as produced by VCRs in the French market. It will play in black and white only, the color is lost. Most VHS machines advertised as "SECAM capable" outside France can be expected to be of the MESECAM variety only.
Technical details
On VHS tapes, the luminance signal is recorded FM-encoded (on VHS with reduced bandwidth, on S-VHS with full bandwidth) but the PAL or NTSC chrominance signal is too sensitive to small changes in frequency caused by inevitable small variations in tape speed to be recorded directly. Instead, it is first shifted down to the lower frequency of 630 kHz, and the complex nature of the PAL or NTSC sub-carrier means that the down conversion must be done via heterodyning to ensure that information is not lost.
The SECAM sub-carriers, which consist of two simple FM signals at 4.41 MHz and 4.25 MHz, do not need this (actually simple) processing. The VHS specification for "native" SECAM recording specifies that they be divided by 4 on recording to give sub carriers of approximately 1.1 MHz and 1.06 MHz, and multiplied by 4 on playback. A true dual-standard PAL and SECAM video recorder therefore requires two color processing circuits, adding to complexity and expense. Since some countries in the Middle East use PAL and others use SECAM, the region has adopted a shortcut, and uses the PAL mixer-down converter approach for both PAL and SECAM, simplifying VCR design.
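The divide-by-four step is simple enough to show directly; the subcarrier values below are the approximate figures quoted in this section.

for subcarrier_mhz in (4.41, 4.25):
    recorded = subcarrier_mhz / 4   # shifted down for recording on tape
    restored = recorded * 4         # shifted back up on playback
    print(f"{subcarrier_mhz} MHz -> {recorded:.4f} MHz on tape -> {restored} MHz on playback")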
Many PAL VHS recorders have had their analog tuner modified in French-speaking western Switzerland (Switzerland used the PAL-B/G standard while the bordering France used SECAM-L). The original tuner in those PAL recorders allows only PAL-B/G reception. The Swiss importers added a circuit with a specific IC for the French SECAM-L standard, making the tuner multi-standard and allowing the VCR to record SECAM broadcasts in MESECAM. A stamp mentioning "PAL+SECAM" was added to these machines.
Video recorders like Panasonic NV-W1E (AG-W1-P for professional), AG-W3, NV-J700AM, Aiwa HV-MX100, HV-MX1U, Samsung SV-4000W and SV-7000W feature a digital standard conversion circuitry.
Adoption
A legacy list of SECAM users in 1998 is available on Recommendation ITU-R BT.470-6 - Conventional Television Systems, Appendix 1 to Annex 1, and the list before many OIRT countries migrated to PAL can be found at CCIR Report 624-3 Characteristics of television systems, Annex I.
Below is an updated list of nations that currently authorize the use of the SECAM standard for television broadcasting. It is subject to ongoing changes as nations move to PAL and DVB-T. These migrations are listed separately.
Migration to other standards
PAL
Europe
(migrated in 1994–1996)
(migrated in 1992–1994)
(switchover on 31 December 1991 after German reunification in 1990)
(switchover ended in November 1999 with ETV and Kanal 2)
(migrated in 2000s)
(migrated in 1992)
(migrated in 1995–1996)
(migrated in 1997–1999)
(migrated in 2002)
(migrated in 1993–1995)
(migrated in 1992–1994, simulcast in SECAM until early 2010s)
Czech Republic, Slovakia, Hungary and the Baltic countries also changed their underlying sound carrier standard on the UHF band from D/K to B/G which is used in most of Western Europe, to facilitate use of imported broadcast equipment, while leaving the D/K standard on VHF. This required viewers to purchase multistandard receivers though. The other countries mentioned kept their existing standards (B/G in the cases of East Germany and Greece, D/K for the rest).
Africa
(For a few years before was simulcast)
Asia
(migrated in the 1990s)
(migrated in late 1980s)
(migrated in 2001)
(migrated in 1991–1992, from SECAM-M to PAL-B/G)
(reverted in 1998 to PAL-B/G)
(migrated in the 1990s from SECAM-M)
(migrated in 1993)
(simulcast in NTSC, SECAM and PAL—before switching to PAL entirely in the late 1990s or early 2000s)
(brief simulcast in NTSC)
DVB
| Technology | Broadcasting | null |
28710 | https://en.wikipedia.org/wiki/Smelt%20%28fish%29 | Smelt (fish) | Smelts are a family of small fish, the Osmeridae, found in the North Atlantic and North Pacific oceans, as well as rivers, streams and lakes in Europe, North America and Northeast Asia. They are also known as freshwater smelts or typical smelts to distinguish them from the related Argentinidae (herring smelts or argentines), Bathylagidae (deep-sea smelts), and Retropinnidae (Australian and New Zealand smelts).
Some smelt species are common in the North American Great Lakes, and in the lakes and seas of the northern part of Europe, where they run in large schools along the saltwater coastline during spring migration to their spawning streams. In some western parts of the United States, smelt populations have greatly declined in recent decades, leading to their protection under the Endangered Species Act. The Delta smelt (Hypomesus transpacificus) found in the Sacramento Delta of California, and the eulachon (Thaleichthys pacificus) found in the Northeast Pacific and adjacent rivers, are both protected from harvest.
Some species of smelts are among the few fish that sportsmen have been allowed to net, using hand-held dip nets, either along the coastline or in streams. Some sportsmen also ice fish for smelt. They are often fried and eaten whole.
The earliest known fossil smelt is Enoplophthalmus from the Early Oligocene of Europe; Speirsaenigma from the Paleocene of Canada may be an even earlier representative, although some authors instead consider it a relative of the ayu.
Description
In size, most species rarely exceed , although some grow larger. Some females of European smelt can reach in length.
Like salmon, many species are anadromous, living most of their lives in the sea, but traveling into fresh water to breed. However, a few exceptions, such as the surf smelt, spend their entire lives at sea.
Smelt dipping
In the Canadian provinces and U.S. states around the Great Lakes, "smelt dipping" is a common group sport in the early spring and when stream waters reach around . Fish are spotted using a flashlight or headlamp and scooped out of the water using a dip net made of nylon or metal mesh. The smelt are cleaned by removing the head and the entrails. Fins, scales, and bones of all but the largest of smelts are cooked without removal.
On the Maine coast, smelts were also a sign of spring, with the run of these small fish up tiny tidal estuaries. Many of these streams were narrow enough for a person to straddle and get a good catch of smelts by dipping a bucket.
Culinary use
North America
Smelts are an important winter catch in the saltwater mouths of rivers in New England and the Maritime Provinces of Canada. Fishermen would historically go to customary locations over the ice using horses and sleighs. Smelt taken out of the cold saltwater were preferred to those taken in warm water. The fish did not command a high price on the market, but provided a source of supplemental income due to their abundance. The smelts were "flash frozen" simply by leaving them on the ice and then sold to fish buyers who came down the rivers.
In the present day, smelts are fished commercially using nets at sea, and for recreation by hand-netting, spearing or angling them through holes in river ice. They are often the target fish of small 'fishing shack' villages that spring up along frozen rivers. Typical ways of preparing them include pan-frying in flour and butter, deep-frying in batter and cooking them, directly out of the water, over small stoves in the shacks.
Canada
Indigenous peoples in Canada native to the Great Lakes regions (Lake Huron, Lake Ontario, and Lake Superior), as well as nearby Lake Erie (which still is well known for its smelts today), were both familiar and partially dependent upon smelts as a dietary source of protein and omega fats that didn't require a large effort or hunting party to obtain. Smelts are one of the best choices of freshwater and saltwater fish to eat, as one of the types of edible fish with the lowest amount of mercury.
Smelts can be found in the Atlantic and Pacific oceans, as well as in some freshwater lakes across Canada. Smelts were eaten by many different native peoples who had access to them. One popular way that First Nations of the Pacific coast made dried smelts more appealing was to serve it with oil. Eulachon, a type of smelt, contains so much oil during spawning that, once dried, it can literally be burned like a candle; hence its common nickname of the "candlefish".
Today, there are numerous recipes and methods of preparing and cooking smelts. A popular First Nations recipe calls for the removal of all the fishes' bones, uses canola or peanut oil for frying, and has a breaded-like coating mixed with lemon juice and grated parmesan cheese (with a few other basic ingredients) to coat the smelts prior to frying them.
East Asia
Smelt is popular in Japan, as the species Spirinchus lanceolatus, and is generally served grilled, called shishamo, especially when full of eggs.
Smelt roe, specifically from capelin, called masago in Japanese, is yellow to orange in color and is often used in sushi.
Smelt is also served in Chinese dim sum restaurants, where it is deep fried with the heads and tails attached and identified as duō chūn yú or duō luǎn yú, literally "many egg fish", which loosely translates as "fish with many eggs".
Smelt is one of the prime fish species eaten in Tamil Nadu, as Nethili fry or Nethili karuvadu (dried fish), and in coastal Karnataka, especially the Mangalore and Udupi regions, where it is usually fried with heads and tails removed, or cooked in curries. They are called 'Bolingei' (ಬೊಳಂಜೆ) in Kannada and Tulu and 'MotiyaLe' in Konkani.
Festivals
In the city of Inje, South Korea (Gangwon Province), an Ice Fishing Festival is held annually from 30 January to 2 February on Soyang Lake, coinciding with the smelt's yearly run into fresh water to spawn. They are locally known as bing-eo (빙어) and typically eaten alive or deep-fried.
In Finland, the province of Paltamo has yearly Norssikarnevaali festivals in the middle of May.
For some Italians, especially from the region of Calabria, fried smelts are a traditional part of the Christmas Eve dinner consisting of multiple courses of fish.
In 1956, the Chamber of Commerce in Kelso, Washington, declared Kelso, located on the Cowlitz River, as the "Smelt Capital of the World". They erected billboards proclaiming this, and held festivals for the annual smelt runs until the runs dried up.
The village of Lewiston, New York, on the lower portion of the Niagara River, celebrates an annual two-day smelt festival the first weekend in May. During the festival, approximately of smelt are battered and fried at the Lewiston Waterfront. The smelt samples are free during the festival and donations are welcome, as they help support programs supported by the Niagara River Anglers. The festival has a motto, which is a play on words: "Lewiston never smelt so good."
Lithuania celebrates an annual weekend smelt festival in Palanga "Palangos Stinta" early every January.
The American Legion Post 82, in Port Washington, Wisconsin, has been hosting its annual Smelt Fry since 1951. Located on the Western shore of Lake Michigan, north of Milwaukee, Port Washington has a long history as a fishing community with commercial and sports ventures. The Legion's Smelt Fry happens every year in mid to late April. In mid-July, the quaint town hosts their Fish Day event, billed as "The world's largest, one-day, outdoor fish fry!"
At the time of the smelt's spring run in the Neva River, at the head of the Gulf of Finland on the Baltic Sea in Saint Petersburg, a smelt festival (Prazdnik koryushki) is celebrated.
The Magic Smelt Puppet Troupe of Duluth, Minnesota has held an annual "Run, Smelt, Run!" puppet-based second line parade, smelt fry, and dance party since 2012. The troupe occasionally hosts other performances throughout the year.
| Biology and health sciences | Osmeriformes | null |
28714 | https://en.wikipedia.org/wiki/Stethoscope | Stethoscope | The stethoscope is a medical device for auscultation, or listening to internal sounds of an animal or human body. It typically has a small disc-shaped resonator that is placed against the skin, with either one or two tubes connected to two earpieces. A stethoscope can be used to listen to the sounds made by the heart, lungs or intestines, as well as blood flow in arteries and veins. In combination with a manual sphygmomanometer, it is commonly used when measuring blood pressure.
Less commonly, "mechanic's stethoscopes", equipped with rod shaped chestpieces, are used to listen to internal sounds made by machines (for example, sounds and vibrations emitted by worn ball bearings), such as diagnosing a malfunctioning automobile engine by listening to the sounds of its internal parts. Stethoscopes can also be used to check scientific vacuum chambers for leaks and for various other small-scale acoustic monitoring tasks.
A stethoscope that intensifies auscultatory sounds is called a phonendoscope.
History
The stethoscope was invented in France in 1816 by René Laennec at the Necker-Enfants Malades Hospital in Paris. It consisted of a wooden tube and was monaural. Laennec invented the stethoscope because he was not comfortable placing his ear directly onto a woman's chest in order to listen to her heart. He observed that a rolled piece of paper, placed between the individual's chest and his ear, could amplify heart sounds without requiring physical contact. Laennec's device was similar to the common ear trumpet, a historical form of hearing aid; indeed, his invention was almost indistinguishable in structure and function from the trumpet, which was commonly called a "microphone". Laennec called his device the "stethoscope" (stetho- + -scope, "chest scope"), and he called its use "mediate auscultation", because it was auscultation with a tool intermediate between the individual's body and the physician's ear. (Today the word auscultation denotes all such listening, mediate or not.) The first flexible stethoscope of any sort may have been a binaural instrument with articulated joints not very clearly described in 1829. In 1840, Golding Bird described a stethoscope he had been using with a flexible tube. Bird was the first to publish a description of such a stethoscope, but he noted in his paper the prior existence of an earlier design (which he thought was of little utility) which he described as the snake ear trumpet. Bird's stethoscope had a single earpiece.
Binaural devices
In 1851, Irish physician Arthur Leared invented a binaural stethoscope, and in 1852, George Philip Cammann perfected the design of the stethoscope instrument (that used both ears) for commercial production, which has become the standard ever since. Cammann also wrote a major treatise on diagnosis by auscultation, which the refined binaural stethoscope made possible. By 1873, there were descriptions of a differential stethoscope that could connect to slightly different locations to create a slight stereo effect, though this did not become a standard tool in clinical practice.
Somerville Scott Alison described his invention of the stethophone at the Royal Society in 1858; the stethophone had two separate bells, allowing the user to hear and compare sounds derived from two discrete locations. This was used to do definitive studies on binaural hearing and auditory processing that advanced knowledge of sound localization and eventually led to an understanding of binaural fusion.
The medical historian Jacalyn Duffin has argued that the invention of the stethoscope marked a major step in the redefinition of disease from being a bundle of symptoms, to the current sense of a disease as a problem with an anatomical system even if there are no observable symptoms. This re-conceptualization occurred in part, Duffin argues, because prior to stethoscopes, there were no non-lethal instruments for exploring internal anatomy.
Rappaport and Sprague designed a new stethoscope in the 1940s, which became the standard by which other stethoscopes are measured, consisting of two sides, one of which is used for the respiratory system, the other for the cardiovascular system. The Rappaport-Sprague was later made by Hewlett-Packard. HP's medical products division was spun off as part of Agilent Technologies, Inc., where it became Agilent Healthcare. Agilent Healthcare was purchased by Philips, where it became Philips Medical Systems, before the walnut-boxed, $300, original Rappaport-Sprague stethoscope was finally abandoned ca. 2004, along with Philips' branded electronic stethoscope model (manufactured by Andromed, of Montreal, Canada). The Rappaport-Sprague model stethoscope was heavy and short, with an antiquated appearance recognizable by its two large independent latex rubber tubes connecting an exposed leaf-spring-joined pair of opposing F-shaped chrome-plated brass binaural ear tubes with a dual-head chest piece.
Several other minor refinements were made to stethoscopes until, in the early 1960s, David Littmann, a Harvard Medical School professor, created a new stethoscope that was lighter than previous models and had improved acoustics. In the late 1970s, 3M-Littmann introduced the tunable diaphragm: a very hard (G-10) glass-epoxy resin diaphragm member with an overmolded flexible silicone acoustic surround, which permitted increased excursion of the diaphragm member along the Z-axis with respect to the plane of the sound-collecting area. The resulting shift to a lower resonant frequency increases the volume of some low-frequency sounds, due to the longer waves propagated by the increased excursion of the hard diaphragm member suspended in the concentric acoustic surround. Conversely, pressing the diaphragm surface firmly against the anatomical area overlying the physiological sounds of interest restricts the diaphragm's excursion against a concentric fret; this raises the frequency bias by shortening the wavelength, allowing auscultation of a higher range of physiological sounds.
In 1999, Richard Deslauriers patented the first external noise reducing stethoscope, the DRG Puretone. It featured two parallel lumens containing two steel coils which dissipated infiltrating noise as inaudible heat energy. The steel coil "insulation" added .30 lb to each stethoscope. In 2005, DRG's diagnostics division was acquired by TRIMLINE Medical Products.
Current practice
Stethoscopes are a symbol of healthcare professionals. Healthcare providers are often seen or depicted wearing a stethoscope around the neck. A 2012 research paper claimed that the stethoscope, when compared to other medical equipment, had the highest positive impact on the perceived trustworthiness of the practitioner seen with it.
Prevailing opinions on the utility of the stethoscope in current clinical practice vary depending on the medical specialty. Studies have shown that auscultation skill (i.e., the ability to make a diagnosis based on what is heard through a stethoscope) has been in decline for some time, such that some medical educators are working to re-establish it.
In general practice, traditional blood pressure measurement using a mechanical sphygmomanometer with inflatable cuff and stethoscope is gradually being replaced with automated blood pressure monitors.
Types
Acoustic
Acoustic stethoscopes operate on the transmission of sound from the chest piece, via air-filled hollow tubes, to the listener's ears. The chestpiece usually consists of two sides that can be placed against the patient for sensing sound: a diaphragm (plastic disc) or bell (hollow cup). If the diaphragm is placed on the patient, body sounds vibrate the diaphragm, creating acoustic pressure waves which travel up the tubing to the listener's ears. If the bell is placed on the patient, the vibrations of the skin directly produce acoustic pressure waves traveling up to the listener's ears. The bell transmits low frequency sounds, while the diaphragm transmits higher frequency sounds. To deliver the acoustic energy primarily to either the bell or diaphragm, the tube connecting into the chamber between bell and diaphragm is open on only one side and can rotate. The opening is visible when connected into the bell. Rotating the tube 180 degrees in the head connects it to the diaphragm. This two-sided stethoscope was invented by Rappaport and Sprague in the early part of the 20th century.
Electronic
An electronic stethoscope (or stethophone) overcomes the low sound levels by electronically amplifying body sounds. However, amplification of stethoscope contact artifacts, and component cutoffs (frequency response thresholds of electronic stethoscope microphones, pre-amps, amps, and speakers) limit electronically amplified stethoscopes' overall utility by amplifying mid-range sounds, while simultaneously attenuating high- and low- frequency range sounds. Currently, a number of companies offer electronic stethoscopes. Electronic stethoscopes require conversion of acoustic sound waves to electrical signals which can then be amplified and processed for optimal listening. Unlike acoustic stethoscopes, which are all based on the same physics, transducers in electronic stethoscopes vary widely. The simplest and least effective method of sound detection is achieved by placing a microphone in the chestpiece. This method suffers from ambient noise interference and has fallen out of favor. Another method, used in Welch-Allyn's Meditron stethoscope, comprises placement of a piezoelectric crystal at the head of a metal shaft, the bottom of the shaft making contact with a diaphragm. 3M also uses a piezo-electric crystal placed within foam behind a thick rubber-like diaphragm. The Thinklabs' Rhythm 32 uses an electromagnetic diaphragm with a conductive inner surface to form a capacitive sensor. This diaphragm responds to sound waves, with changes in an electric field replacing changes in air pressure. The Eko Core enables wireless transmission of heart sounds to a smartphone or tablet. The Eko Duo can take electrocardiograms as well as echocardiograms. This enables clinicians to screen for conditions such as heart failure, which would not be possible with a traditional stethoscope.
Because the sounds are transmitted electronically, an electronic stethoscope can be a wireless device, can be a recording device, and can provide noise reduction, signal enhancement, and both visual and audio output. Around 2001, Stethographics introduced PC-based software that enabled a phonocardiograph (a graphic representation of cardiologic and pulmonologic sounds) to be generated and interpreted according to related algorithms. All of these features are helpful for purposes of telemedicine (remote diagnosis) and teaching.
Electronic stethoscopes are also used with computer-aided auscultation programs to analyze recorded heart sounds for pathological or innocent heart murmurs.
Recording
Some electronic stethoscopes feature direct audio output that can be used with an external recording device, such as a laptop or MP3 recorder. The same connection can be used to listen to the previously recorded auscultation through the stethoscope headphones, allowing for more detailed study for general research as well as evaluation and consultation regarding a particular patient's condition and telemedicine, or remote diagnosis.
There are some smartphone apps that can use the phone as a stethoscope. At least one uses the phone's own microphone to amplify sound, produce a visualization, and e-mail the results. These apps may be used for training purposes or as novelties, but have not yet gained acceptance for professional medical use.
The first stethoscope that could work with a smartphone application was introduced in 2015.
Fetal
A fetal stethoscope or fetoscope is an acoustic stethoscope shaped like a listening trumpet. It is placed against the abdomen of a pregnant woman to listen to the heart sounds of the fetus. The fetal stethoscope is also known as a Pinard horn after French obstetrician Adolphe Pinard (1844–1934).
Doppler
A Doppler stethoscope is an electronic device that measures the Doppler effect of ultrasound waves reflected from organs within the body. Motion is detected by the change in frequency, due to the Doppler effect, of the reflected waves. Hence the Doppler stethoscope is particularly suited to deal with moving objects such as a beating heart.
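The size of the frequency shift involved can be illustrated with the standard reflected-ultrasound Doppler relation. The Python sketch below is purely illustrative (the transmit frequency, reflector speed, and tissue sound speed are assumed example values, not specifications of any particular device):

```python
# Illustrative sketch of the Doppler shift a Doppler stethoscope listens for.
# All numbers are example values, not device specifications.
import math

def doppler_shift(f0_hz: float, v_ms: float, angle_deg: float, c_ms: float = 1540.0) -> float:
    """Frequency shift of ultrasound reflected from a moving target.

    f0_hz     -- transmitted ultrasound frequency
    v_ms      -- speed of the reflector (e.g. blood or a valve leaflet)
    angle_deg -- angle between the beam and the direction of motion
    c_ms      -- speed of sound in tissue, roughly 1540 m/s
    """
    return 2.0 * f0_hz * v_ms * math.cos(math.radians(angle_deg)) / c_ms

# A 2 MHz beam reflecting off a target moving at 0.5 m/s along the beam:
print(f"{doppler_shift(2e6, 0.5, 0.0):.0f} Hz")  # about 1300 Hz, an audible tone
```

Because the shift falls in the audible range, the motion of valves and blood can be rendered directly as sound for the listener.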
It was recently demonstrated that continuous Doppler enables the auscultation of valvular movements and blood flow sounds that are undetected during cardiac examination with a stethoscope in adults. The Doppler auscultation presented a sensitivity of 84% for the detection of aortic regurgitations while classic stethoscope auscultation presented a sensitivity of 58%. Moreover, Doppler auscultation was superior in the detection of impaired ventricular relaxation. Since the physics of Doppler auscultation and classic auscultation are different, it has been suggested that both methods could complement each other.
A military noise-immune Doppler based stethoscope has recently been developed for auscultation of patients in loud sound environments (up to 110 dB).
3D-printed
A 3D-printed stethoscope is an open-source medical device meant for auscultation and manufactured by means of 3D printing. It was developed by Dr. Tarek Loubani and a team of medical and technology specialists as part of the Glia project, and its design has been open source from the outset. The stethoscope gained widespread media coverage in the summer of 2015.
The need for a 3D-stethoscope was borne out of a lack of stethoscopes and other vital medical equipment because of the blockade of the Gaza Strip, where Loubani, a Palestinian-Canadian, worked as an emergency physician during the 2012 conflict in Gaza. The 1960s-era Littmann Cardiology 3 stethoscope became the basis for the 3D-printed stethoscope developed by Loubani.
Esophageal
Prior to the 1960s, the esophageal stethoscope was part of routine intraoperative monitoring.
Earpieces
Stethoscopes usually have rubber earpieces, which aid comfort and create a seal with the ear, improving the acoustic function of the device. Stethoscopes can be modified by replacing the standard earpieces with moulded versions, which improve comfort and transmission of sound. Moulded earpieces can be cast by an audiologist or made by the stethoscope user from a kit.
| Technology | Devices | null |
28716 | https://en.wikipedia.org/wiki/Smelting | Smelting | Smelting is a process of applying heat and a chemical reducing agent to an ore to extract a desired base metal product. It is a form of extractive metallurgy that is used to obtain many metals such as iron, copper, silver, tin, lead and zinc. Smelting uses heat and a chemical reducing agent to decompose the ore, driving off other elements as gases or slag and leaving the metal behind. The reducing agent is commonly a fossil-fuel source of carbon, such as carbon monoxide from incomplete combustion of coke—or, in earlier times, of charcoal. The oxygen in the ore binds to carbon at high temperatures, as the chemical potential energy of the bonds in carbon dioxide (CO2) is lower than that of the bonds in the ore.
Sulfide ores such as those commonly used to obtain copper, zinc or lead, are roasted before smelting in order to convert the sulfides to oxides, which are more readily reduced to the metal. Roasting heats the ore in the presence of oxygen from air, oxidizing the ore and liberating the sulfur as sulfur dioxide gas.
Smelting most prominently takes place in a blast furnace to produce pig iron, which is converted into steel.
Plants for the electrolytic reduction of aluminium are referred to as aluminium smelters.
Process
Smelting involves more than just melting the metal out of its ore. Most ores are chemical compounds of the metal and other elements, such as oxygen (as an oxide), sulfur (as a sulfide), or carbon and oxygen together (as a carbonate). To extract the metal, workers must make these compounds undergo a chemical reaction. Smelting, therefore, consists of using suitable reducing substances that combine with those oxidizing elements to free the metal.
Roasting
In the case of sulfides and carbonates, a process called "roasting" removes the unwanted carbon or sulfur, leaving an oxide, which can be directly reduced. Roasting is usually carried out in an oxidizing environment. A few practical examples:
Malachite, a common ore of copper, is primarily copper carbonate hydroxide Cu2(CO3)(OH)2. This mineral undergoes thermal decomposition to 2CuO, CO2, and H2O in several stages between 250 °C and 350 °C. The carbon dioxide and water are expelled into the atmosphere, leaving copper(II) oxide, which can be directly reduced to copper as described in the following section titled Reduction.
Galena, the most common mineral of lead, is primarily lead sulfide (PbS). The sulfide is oxidized to a sulfite (PbSO3), which thermally decomposes into lead oxide and sulfur dioxide gas (PbO and SO2). The sulfur dioxide is expelled (like the carbon dioxide in the previous example), and the lead oxide is reduced as below.
Reduction
Reduction is the final, high-temperature step in smelting, in which the oxide becomes the elemental metal. A reducing environment (often provided by carbon monoxide, made by incomplete combustion in an air-starved furnace) pulls the final oxygen atoms from the raw metal. The carbon source acts as a chemical reactant to remove oxygen from the ore, yielding the purified metal element as a product. The carbon source is oxidized in two stages. First, carbon (C) combusts with oxygen (O2) in the air to produce carbon monoxide (CO). Second, the carbon monoxide reacts with the ore (e.g. Fe2O3) and removes one of its oxygen atoms, releasing carbon dioxide (CO2). After successive interactions with carbon monoxide, all of the oxygen in the ore will be removed, leaving the raw metal element (e.g. Fe). As most ores are impure, it is often necessary to use flux, such as limestone (or dolomite), to remove the accompanying rock gangue as slag. This calcination reaction emits carbon dioxide.
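For the iron oxide example above, the two stages and the calcination of a limestone flux correspond to the following standard balanced equations (added here purely as an illustration of the chemistry just described):

```latex
2\,\mathrm{C} + \mathrm{O_2} \rightarrow 2\,\mathrm{CO}
\qquad \text{(incomplete combustion of the carbon source)}

\mathrm{Fe_2O_3} + 3\,\mathrm{CO} \rightarrow 2\,\mathrm{Fe} + 3\,\mathrm{CO_2}
\qquad \text{(stepwise reduction of the ore by carbon monoxide)}

\mathrm{CaCO_3} \rightarrow \mathrm{CaO} + \mathrm{CO_2}
\qquad \text{(calcination of a limestone flux)}
```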
The required temperature varies both in absolute terms and in terms of the melting point of the base metal. Examples:
Iron oxide becomes metallic iron at roughly 1250 °C (2282 °F or 1523 K), almost 300 degrees below iron's melting point of 1538 °C (2800 °F or 1811 K).
Mercuric oxide becomes vaporous mercury near 550 °C (1022 °F or 823 K), almost 600 degrees above mercury's melting point of -38 °C (-36.4 °F or 235 K), and also above mercury's boiling point.
Fluxes
Fluxes are materials added to the ore during smelting to catalyze the desired reactions and to chemically bind to unwanted impurities or reaction products. Calcium carbonate or calcium oxide in the form of lime are often used for this purpose, since they react with sulfur, phosphorus, and silicon impurities to allow them to be readily separated and discarded, in the form of slag. Fluxes may also serve to control the viscosity and neutralize unwanted acids.
Flux and slag can provide a secondary service after the reduction step is complete; they provide a molten cover on the purified metal, preventing contact with oxygen while still hot enough to readily oxidize. This prevents impurities from forming in the metal.
Sulfide ores
The ores of base metals are often sulfides. In recent centuries, reverberatory furnaces have been used to keep the charge being smelted separate from the fuel. Traditionally, they were used for the first step of smelting: forming two liquids, one an oxide slag containing most of the impurities, and the other a sulfide matte containing the valuable metal sulfide and some impurities. Such "reverb" furnaces are today about 40 meters long, 3 meters high, and 10 meters wide. Fuel is burned at one end to melt the dry sulfide concentrates (usually after partial roasting) which are fed through openings in the roof of the furnace. The slag floats over the heavier matte and is removed and discarded or recycled. The sulfide matte is then sent to the converter. The precise details of the process vary from one furnace to another depending on the mineralogy of the ore body.
While reverberatory furnaces produced slags containing very little copper, they were relatively energy inefficient and off-gassed a low concentration of sulfur dioxide that was difficult to capture; a new generation of copper smelting technologies has supplanted them. More recent furnaces exploit bath smelting, top-jetting lance smelting, flash smelting, and blast furnaces. Some examples of bath smelters include the Noranda furnace, the Isasmelt furnace, the Teniente reactor, the Vanyukov smelter, and the SKS technology. Top-jetting lance smelters include the Mitsubishi smelting reactor. Flash smelters account for over 50% of the world's copper smelters. There are many more varieties of smelting processes, including the Kivcet, Ausmelt, Tamano, EAF, and BF.
History
Of the seven metals known in antiquity, only gold regularly occurs in nature as a native metal. The others – copper, lead, silver, tin, iron, and mercury – occur primarily as minerals, although native copper is occasionally found in commercially significant quantities. These minerals are primarily carbonates, sulfides, or oxides of the metal, mixed with other components such as silica and alumina. Roasting the carbonate and sulfide minerals in the air converts them to oxides. The oxides, in turn, are smelted into the metal. Carbon monoxide was (and is) the reducing agent of choice for smelting. It is easily produced during the heating process, and as a gas comes into intimate contact with the ore.
In the Old World, humans learned to smelt metals in prehistoric times, more than 8000 years ago. The discovery and use of the "useful" metals – copper and bronze at first, then iron a few millennia later – had an enormous impact on human society. The impact was so pervasive that scholars traditionally divide ancient history into Stone Age, Bronze Age, and Iron Age.
In the Americas, pre-Inca civilizations of the central Andes in Peru had mastered the smelting of copper and silver at least six centuries before the first Europeans arrived in the 16th century, while never mastering the smelting of metals such as iron for use with weapon craft.
Copper and bronze
Copper was the first metal to be smelted. How the discovery came about is debated. Campfires are about 200 °C short of the temperature needed, so some propose that the first smelting of copper may have occurred in pottery kilns. (The development of copper smelting in the Andes, which is believed to have occurred independently of the Old World, may have occurred in the same way.)
The earliest current evidence of copper smelting, dating from between 5500 BC and 5000 BC, has been found in Pločnik and Belovode, Serbia. A mace head found in Turkey and dated to 5000 BC, once thought to be the oldest evidence, now appears to be hammered, native copper.
Combining copper with tin and/or arsenic in the right proportions produces bronze, an alloy that is significantly harder than copper. The first copper/arsenic bronzes date from 4200 BC from Asia Minor. The Inca bronze alloys were also of this type. Arsenic is often an impurity in copper ores, so the discovery could have been made by accident. Eventually, arsenic-bearing minerals were intentionally added during smelting.
Copper–tin bronzes, harder and more durable, were developed around 3500 BC, also in Asia Minor.
How smiths learned to produce copper/tin bronzes is unknown. The first such bronzes may have been a lucky accident from tin-contaminated copper ores. However, by 2000 BC, people were mining tin on purpose to produce bronze—which is remarkable as tin is a semi-rare metal, and even a rich cassiterite ore only has 5% tin.
The discovery of copper and bronze manufacture had a significant impact on the history of the Old World. Metals were hard enough to make weapons that were heavier, stronger, and more resistant to impact damage than wood, bone, or stone equivalents. For several millennia, bronze was the material of choice for weapons such as swords, daggers, battle axes, and spear and arrow points, as well as protective gear such as shields, helmets, greaves (metal shin guards), and other body armor. Bronze also supplanted stone, wood, and organic materials in tools and household utensils—such as chisels, saws, adzes, nails, blade shears, knives, sewing needles and pins, jugs, cooking pots and cauldrons, mirrors, and horse harnesses. Tin and copper also contributed to the establishment of trade networks that spanned large areas of Europe and Asia and had a major effect on the distribution of wealth among individuals and nations.
Tin and lead
The earliest known cast lead beads were thought to be in the Çatalhöyük site in Anatolia (Turkey), and dated from about 6500 BC. However, recent research has discovered that this was not lead, but rather cerussite and galena, minerals rich in, but distinct from, lead.
Since the discovery happened several millennia before the invention of writing, there is no written record of how it was made. However, tin and lead can be smelted by placing the ores in a wood fire, leaving the possibility that the discovery may have occurred by accident. Recent scholarship however has called this find into question.
Lead is a common metal, but its discovery had relatively little impact in the ancient world. It is too soft to use for structural elements or weapons, though its high density relative to other metals makes it ideal for sling projectiles. However, since it was easy to cast and shape, workers in the classical world of Ancient Greece and Ancient Rome used it extensively to pipe and store water. They also used it as a mortar in stone buildings.
Tin was much less common than lead, is only marginally harder, and had even less impact by itself.
Early iron smelting
The earliest evidence for iron-making is a small number of iron fragments with the appropriate amounts of carbon admixture found in the Proto-Hittite layers at Kaman-Kalehöyük and dated to 2200–2000 BC. Souckova-Siegolová (2001) shows that iron implements were made in Central Anatolia in very limited quantities around 1800 BC and were in general use by elites, though not by commoners, during the New Hittite Empire (~1400–1200 BC).
Archaeologists have found indications of iron working in Ancient Egypt, somewhere between the Third Intermediate Period and 23rd Dynasty (ca. 1100–750 BC). Significantly though, they have found no evidence of iron ore smelting in any (pre-modern) period. In addition, very early instances of carbon steel were in production around 2000 years ago (around the first century AD) in northwest Tanzania, based on complex preheating principles. These discoveries are significant for the history of metallurgy.
Most early processes in Europe and Africa involved smelting iron ore in a bloomery, where the temperature is kept low enough so that the iron does not melt. This produces a spongy mass of iron called a bloom, which then must be consolidated with a hammer to produce wrought iron. Some of the earliest evidence to date for the bloomery smelting of iron is found at Tell Hammeh, Jordan, radiocarbon-dated to .
Later iron smelting
From the medieval period, an indirect process began to replace the direct reduction in bloomeries. This used a blast furnace to make pig iron, which then had to undergo a further process to make forgeable bar iron. Processes for the second stage include fining in a finery forge. In the 13th century, during the High Middle Ages, the blast furnace was introduced to Europe from China, which had been using it since as early as 200 BC, during the Qin dynasty. Puddling was later introduced during the Industrial Revolution.
Both processes are now obsolete, and wrought iron is now rarely made. Instead, mild steel is produced from a Bessemer converter or by other means including smelting reduction processes such as the Corex Process.
Environmental and occupational health impacts
Smelting has serious effects on the environment, producing wastewater and slag and releasing such toxic metals as copper, silver, iron, cobalt, and selenium into the atmosphere. Smelters also release gaseous sulfur dioxide, contributing to acid rain, which acidifies soil and water.
The smelter in Flin Flon, Canada was one of the largest point sources of mercury in North America in the 20th century. Even after smelter releases were drastically reduced, landscape re-emission continued to be a major regional source of mercury. Lakes will likely receive mercury contamination from the smelter for decades, from both re-emissions returning as rainwater and leaching of metals from the soil.
Air pollution
Air pollutants generated by aluminium smelters include carbonyl sulfide, hydrogen fluoride, polycyclic compounds, lead, nickel, manganese, polychlorinated biphenyls, and mercury. Copper smelter emissions include arsenic, beryllium, cadmium, chromium, lead, manganese, and nickel. Lead smelters typically emit arsenic, antimony, cadmium and various lead compounds.
Wastewater
Wastewater pollutants discharged by iron and steel mills include gasification products such as benzene, naphthalene, anthracene, cyanide, ammonia, phenols and cresols, together with a range of more complex organic compounds known collectively as polycyclic aromatic hydrocarbons (PAH). Treatment technologies include recycling of wastewater; settling basins, clarifiers and filtration systems for solids removal; oil skimmers and filtration; chemical precipitation and filtration for dissolved metals; carbon adsorption and biological oxidation for organic pollutants; and evaporation.
Pollutants generated by other types of smelters vary with the base metal ore. For example, aluminum smelters typically generate fluoride, benzo(a)pyrene, antimony and nickel, as well as aluminum. Copper smelters typically discharge cadmium, lead, zinc, arsenic and nickel, in addition to copper. Lead smelters may discharge antimony, asbestos, cadmium, copper and zinc, in addition to lead.
Health impacts
Labourers working in the smelting industry have reported respiratory illnesses inhibiting their ability to perform the physical tasks demanded by their jobs.
Regulations
In the United States, the Environmental Protection Agency has published pollution control regulations for smelters.
Air pollution standards under the Clean Air Act
Water pollution standards (effluent guidelines) under the Clean Water Act.
| Technology | Metallurgy | null |
28729 | https://en.wikipedia.org/wiki/Solution%20%28chemistry%29 | Solution (chemistry) | In chemistry, a solution is defined by IUPAC as "A liquid or solid phase containing more than one substance, when for convenience one (or more) substance, which is called the solvent, is treated differently from the other substances, which are called solutes. When, as is often but not necessarily the case, the sum of the mole fractions of solutes is small compared with unity, the solution is called a dilute solution. A superscript attached to the ∞ symbol for a property of a solution denotes the property in the limit of infinite dilution." One important parameter of a solution is the concentration, which is a measure of the amount of solute in a given amount of solution or solvent. The term "aqueous solution" is used when one of the solvents is water.
Types
Homogeneous means that the components of the mixture form a single phase. Heterogeneous means that the components of the mixture are of different phases. The properties of the mixture (such as concentration, temperature, and density) can be uniformly distributed through the volume, but only in the absence of diffusion phenomena or after their completion. Usually, the substance present in the greatest amount is considered the solvent. Solvents can be gases, liquids, or solids. One or more components present in the solution other than the solvent are called solutes. The solution has the same physical state as the solvent.
Gaseous mixtures
If the solvent is a gas, only gases (non-condensable) or vapors (condensable) are dissolved under a given set of conditions. An example of a gaseous solution is air (oxygen and other gases dissolved in nitrogen). Since interactions between gaseous molecules play almost no role, non-condensable gases form rather trivial solutions. In the literature, they are not even classified as solutions, but simply addressed as homogeneous mixtures of gases. The Brownian motion and the permanent molecular agitation of gas molecules guarantee the homogeneity of gaseous systems. Non-condensable gaseous mixtures (e.g., air/CO2, or air/xenon) do not spontaneously demix or settle into distinctly stratified, separate gas layers according to their relative density. Diffusion forces efficiently counteract gravitation forces under normal conditions prevailing on Earth. The case of condensable vapors is different: once the saturation vapor pressure at a given temperature is reached, vapor excess condenses into the liquid state.
Liquid solutions
Liquids dissolve gases, other liquids, and solids. An example of a dissolved gas is oxygen in water, which allows fish to breathe under water. An example of a dissolved liquid is ethanol in water, as found in alcoholic beverages. An example of a dissolved solid is sugar water, which contains dissolved sucrose.
Solid solutions
If the solvent is a solid, then gases, liquids, and solids can be dissolved.
Gas in solids:
Hydrogen dissolves rather well in metals, especially in palladium; this is studied as a means of hydrogen storage.
Liquid in solid:
Mercury in gold, forming an amalgam
Water in solid salt or sugar, forming moist solids
Hexane in paraffin wax
Polymers containing plasticizers such as phthalate (liquid) in PVC (solid)
Solid in solid:
Steel, basically a solution of carbon atoms in a crystalline matrix of iron atoms
Alloys like bronze and many others
Radium sulfate dissolved in barium sulfate: a true solid solution of Ra in BaSO4
Solubility
The ability of one compound to dissolve in another compound is called solubility. When a liquid can completely dissolve in another liquid the two liquids are miscible. Two substances that can never mix to form a solution are said to be immiscible.
All solutions have a positive entropy of mixing. The interactions between different molecules or ions may be energetically favored or not. If the interactions are unfavorable, the free-energy benefit of dissolving additional solute diminishes as the solute concentration increases. At some point, the energy loss outweighs the entropy gain, and no more solute particles can be dissolved; the solution is said to be saturated. However, the point at which a solution can become saturated can change significantly with different environmental factors, such as temperature, pressure, and contamination. For some solute-solvent combinations, a supersaturated solution can be prepared by raising the solubility (for example by increasing the temperature) to dissolve more solute and then lowering it (for example by cooling).
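In standard thermodynamic notation (added here only to make the trade-off explicit; the passage above does not give the formula), the balance is

```latex
\Delta G_{\mathrm{mix}} = \Delta H_{\mathrm{mix}} - T\,\Delta S_{\mathrm{mix}}
```

Dissolution of further solute proceeds only while it lowers the free energy; once an unfavorable (positive) enthalpy contribution outweighs the remaining entropy gain, the solution is saturated.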
Usually, the greater the temperature of the solvent, the more of a given solid solute it can dissolve. However, most gases and some compounds exhibit solubilities that decrease with increased temperature. Such behavior is a result of an exothermic enthalpy of solution. Some surfactants exhibit this behaviour. The solubility of liquids in liquids is generally less temperature-sensitive than that of solids or gases.
Properties
The physical properties of compounds such as melting point and boiling point change when other compounds are added. Together they are called colligative properties. There are several ways to quantify the amount of one compound dissolved in the other compounds collectively called concentration. Examples include molarity, volume fraction, and mole fraction.
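As a concrete illustration of two of these measures, the short Python sketch below computes the molarity and mole fraction of a sucrose solution (the masses, volume, and molar masses are assumed example values):

```python
# Illustrative calculation of two concentration measures for sugar water.
# All input quantities are example values.

MOLAR_MASS_SUCROSE = 342.30   # g/mol
MOLAR_MASS_WATER = 18.015     # g/mol

mass_sucrose_g = 34.2         # solute
mass_water_g = 500.0          # solvent
solution_volume_l = 0.52      # measured volume of the finished solution (assumed)

moles_sucrose = mass_sucrose_g / MOLAR_MASS_SUCROSE
moles_water = mass_water_g / MOLAR_MASS_WATER

molarity = moles_sucrose / solution_volume_l                  # mol per litre of solution
mole_fraction = moles_sucrose / (moles_sucrose + moles_water)

print(f"molarity      = {molarity:.3f} mol/L")   # about 0.192 mol/L
print(f"mole fraction = {mole_fraction:.4f}")    # about 0.0036
```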
The properties of ideal solutions can be calculated by the linear combination of the properties of its components. If both solute and solvent exist in equal quantities (such as in a 50% ethanol, 50% water solution), the concepts of "solute" and "solvent" become less relevant, but the substance that is more often used as a solvent is normally designated as the solvent (in this example, water).
Liquid solution characteristics
In principle, all types of liquids can behave as solvents: liquid noble gases, molten metals, molten salts, molten covalent networks, and molecular liquids. In the practice of chemistry and biochemistry, most solvents are molecular liquids. They can be classified into polar and non-polar, according to whether their molecules possess a permanent electric dipole moment. Another distinction is whether their molecules can form hydrogen bonds (protic and aprotic solvents). Water, the most commonly used solvent, is both polar and sustains hydrogen bonds.
Salts dissolve in polar solvents, forming positive and negative ions that are attracted to the negative and positive ends of the solvent molecule, respectively. If the solvent is water, hydration occurs when the charged solute ions become surrounded by water molecules. A standard example is aqueous saltwater. Such solutions are called electrolytes. Whenever salt dissolves in water ion association has to be taken into account.
Polar solutes dissolve in polar solvents, forming polar bonds or hydrogen bonds. As an example, all alcoholic beverages are aqueous solutions of ethanol. On the other hand, non-polar solutes dissolve better in non-polar solvents. Examples are hydrocarbons such as oil and grease that easily mix, while being incompatible with water.
An example of the immiscibility of oil and water is a leak of petroleum from a damaged tanker, that does not dissolve in the ocean water but rather floats on the surface.
| Physical sciences | Chemical mixtures: General | null |
28730 | https://en.wikipedia.org/wiki/Security%20engineering | Security engineering | Security engineering is the process of incorporating security controls into an information system so that the controls become an integral part of the system's operational capabilities. It is similar to other systems engineering activities in that its primary motivation is to support the delivery of engineering solutions that satisfy pre-defined functional and user requirements, but it has the added dimension of preventing misuse and malicious behavior. Those constraints and restrictions are often asserted as a security policy.
In one form or another, security engineering has existed as an informal field of study for several centuries. For example, the fields of locksmithing and security printing have been around for many years. The concerns for modern security engineering and computer systems were first solidified in a RAND paper from 1967, "Security and Privacy in Computer Systems" by Willis H. Ware. This paper, later expanded in 1979, provided many of the fundamental information security concepts, labelled today as Cybersecurity, that impact modern computer systems, from cloud implementations to embedded IoT.
Recent catastrophic events, most notably 9/11, have made security engineering a rapidly growing field. In fact, in a report completed in 2006, it was estimated that the global security industry was valued at US $150 billion.
Security engineering involves aspects of social science, psychology (such as designing a system to "fail well", instead of trying to eliminate all sources of error), and economics as well as physics, chemistry, mathematics, criminology, architecture, and landscaping.
Some of the techniques used, such as fault tree analysis, are derived from safety engineering.
Other techniques such as cryptography were previously restricted to military applications. One of the pioneers of establishing security engineering as a formal field of study is Ross Anderson.
Qualifications
No single qualification exists to become a security engineer.
However, an undergraduate and/or graduate degree, often in computer science, computer engineering, or physical protection focused degrees such as Security Science, in combination with practical work experience (systems, network engineering, software development, physical protection system modelling etc.) most qualifies an individual to succeed in the field. Other degree qualifications with a security focus exist. Multiple certifications, such as the Certified Information Systems Security Professional, or Certified Physical Security Professional are available that may demonstrate expertise in the field. Regardless of the qualification, the course must include a knowledge base to diagnose the security system drivers, security theory and principles including defense in depth, protection in depth, situational crime prevention and crime prevention through environmental design to set the protection strategy (professional inference), and technical knowledge including physics and mathematics to design and commission the engineering treatment solution. A security engineer can also benefit from having knowledge in cyber security and information security. Any previous work experience related to privacy and computer science is also valued.
All of this knowledge must be braced by professional attributes including strong communication skills and high levels of literacy for engineering report writing. Security engineering also goes by the label Security Science.
Related fields
Cybersecurity engineering
See especially Information security and Computer security
protecting data from unauthorized access, use, disclosure, destruction, modification, or disruption to access.
Physical security
deter attackers from accessing a facility, resource, or information stored on physical media.
Technical surveillance counter-measures
Economics of security
the economic aspects of privacy and computer security.
Methodologies
Technological advances, principally in the field of computers, have now allowed the creation of far more complex systems, with new and complex security problems. Because modern systems cut across many areas of human endeavor, security engineers not only need consider the mathematical and physical properties of systems; they also need to consider attacks on the people who use and form parts of those systems using social engineering attacks. Secure systems have to resist not only technical attacks, but also coercion, fraud, and deception by confidence tricksters.
Web applications
According to the Microsoft Developer Network the patterns and practices of security engineering consist of the following activities:
Security Objectives
Security Design Guidelines
Security Modeling
Security Architecture and Design Review
Security Code Review
Security Testing
Security Tuning
Security Deployment Review
These activities are designed to help meet security objectives in the software life cycle.
Physical
Understanding of a typical threat and the usual risks to people and property.
Understanding the incentives created both by the threat and the countermeasures.
Understanding risk and threat analysis methodology and the benefits of an empirical study of the physical security of a facility.
Understanding how to apply the methodology to buildings, critical infrastructure, ports, public transport and other facilities/compounds.
Overview of common physical and technological methods of protection and understanding their roles in deterrence, detection and mitigation.
Determining and prioritizing security needs and aligning them with the perceived threats and the available budget.
Product
Product security engineering is security engineering applied specifically to the products that an organization creates, distributes, and/or sells. Product security engineering is distinct from corporate/enterprise security, which focuses on securing corporate networks and systems that an organization uses to conduct business.
Product security includes security engineering applied to:
Hardware devices such as cell phones, computers, Internet of things devices, and cameras.
Software such as operating systems, applications, and firmware.
Such security engineers are often employed in separate teams from corporate security teams and work closely with product engineering teams.
Target hardening
Whatever the target, there are multiple ways of preventing penetration by unwanted or unauthorized persons. Methods include placing Jersey barriers, stairs or other sturdy obstacles outside tall or politically sensitive buildings to prevent car and truck bombings. Improved visitor management is another approach, and some new electronic locks take advantage of technologies such as fingerprint scanning, iris or retinal scanning, and voiceprint identification to authenticate users.
| Technology | Disciplines | null |
28733 | https://en.wikipedia.org/wiki/Steganography | Steganography | Steganography ( ) is the practice of representing information within another message or physical object, in such a manner that the presence of the concealed information would not be evident to an unsuspecting person's examination. In computing/electronic contexts, a computer file, message, image, or video is concealed within another file, message, image, or video. The word steganography comes from Greek steganographia, which combines the words steganós (), meaning "covered or concealed", and -graphia () meaning "writing".
The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography, disguised as a book on magic. Generally, the hidden messages appear to be (or to be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a formal shared secret are forms of security through obscurity, while key-dependent steganographic schemes try to adhere to Kerckhoffs's principle.
The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable they are, arouse interest and may in themselves be incriminating in countries in which encryption is illegal. Whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing both the fact that a secret message is being sent and its contents.
Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file, image file, program, or protocol. Media files are ideal for steganographic transmission because of their large size. For example, a sender might start with an innocuous image file and adjust the color of every hundredth pixel to correspond to a letter in the alphabet. The change is so subtle that someone who is not specifically looking for it is unlikely to notice the change.
History
The first recorded uses of steganography can be traced back to 440 BC in Greece, when Herodotus mentions two examples in his Histories. Histiaeus sent a message to his vassal, Aristagoras, by shaving the head of his most trusted servant, "marking" the message onto his scalp, then sending him on his way once his hair had regrown, with the instruction, "When thou art come to Miletus, bid Aristagoras shave thy head, and look thereon." Additionally, Demaratus sent a warning about a forthcoming attack to Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in common use then as reusable writing surfaces, sometimes used for shorthand.
In his work Polygraphiae, Johannes Trithemius developed his so-called "Ave-Maria-Cipher" that can hide information in a Latin praise of God. "Auctor Sapientissimus Conseruans Angelica Deferat Nobis Charitas Potentissimi Creatoris" for example contains the concealed word VICIPEDIA.
Techniques
Numerous techniques throughout history have been developed to embed a message within another medium.
Physical
Placing the message in a physical item has been widely used for centuries. Some notable examples include invisible ink on paper, writing a message in Morse code on yarn worn by a courier, microdots, or using a music cipher to hide messages as musical notes in sheet music.
Social steganography
In communities with social or government taboos or censorship, people use cultural steganography—hiding messages in idiom, pop culture references, and other messages they share publicly and assume are monitored. This relies on social context to make the underlying messages visible only to certain readers. Examples include:
Hiding a message in the title and context of a shared video or image.
Misspelling names or words that are popular in the media in a given week, to suggest an alternate meaning.
Hiding a picture that can be traced by using Paint or any other drawing tool.
Digital messages
Since the dawn of computers, techniques have been developed to embed messages in digital cover mediums. The message to conceal is often encrypted, then used to overwrite part of a much larger block of encrypted data or a block of random data (an unbreakable cipher like the one-time pad generates ciphertexts that look perfectly random without the private key).
Examples of this include changing pixels in image or sound files, properties of digital text such as spacing and font choice, chaffing and winnowing, mimic functions, modifying the echo of a sound file (echo steganography), and including data in ignored sections of a file.
Steganography in streaming media
Since the era of evolving network applications, steganography research has shifted from image steganography to steganography in streaming media such as Voice over Internet Protocol (VoIP).
In 2003, Giannoula et al. developed a data hiding technique leading to compressed forms of source video signals on a frame-by-frame basis.
In 2005, Dittmann et al. studied steganography and watermarking of multimedia contents such as VoIP.
In 2008, Yongfeng Huang and Shanyu Tang presented a novel approach to information hiding in low bit-rate VoIP speech stream, and their published work on steganography is the first-ever effort to improve the codebook partition by using Graph theory along with Quantization Index Modulation in low bit-rate streaming media.
In 2011 and 2012, Yongfeng Huang and Shanyu Tang devised new steganographic algorithms that use codec parameters as cover object to realise real-time covert VoIP steganography. Their findings were published in IEEE Transactions on Information Forensics and Security.
In 2024, Cheddad & Cheddad proposed a new framework for reconstructing lost or corrupted audio signals using a combination of machine learning techniques and latent information. The main idea of their paper is to enhance audio signal reconstruction by fusing steganography, halftoning (dithering), and state-of-the-art shallow and deep learning methods (e.g., RF, LSTM). This combination of steganography, halftoning, and machine learning for audio signal reconstruction may inspire further research in optimizing this approach or applying it to other domains, such as image reconstruction (i.e., inpainting).
Adaptive-Steganography
Adaptive steganography is a technique for concealing information within digital media by tailoring the embedding process to the specific features of the cover medium. An example of this approach is demonstrated in the work. Their method develops a skin tone detection algorithm, capable of identifying facial features, which is then applied to adaptive steganography. By incorporating face rotation into their approach, the technique aims to enhance its adaptivity to conceal information in a manner that is both less detectable and more robust across various facial orientations within images. This strategy can potentially improve the efficacy of information hiding in both static images and video content.
Cyber-physical systems/Internet of Things
Academic work since 2012 demonstrated the feasibility of steganography for cyber-physical systems (CPS)/the Internet of Things (IoT). Some techniques of CPS/IoT steganography overlap with network steganography, i.e. hiding data in communication protocols used in CPS/the IoT. However, specific techniques hide data in CPS components. For instance, data can be stored in unused registers of IoT/CPS components and in the states of IoT/CPS actuators.
Printed
Digital steganography output may be in the form of printed documents. A message, the plaintext, may be first encrypted by traditional means, producing a ciphertext. Then, an innocuous cover text is modified in some way so as to contain the ciphertext, resulting in the stegotext. For example, the letter size, spacing, typeface, or other characteristics of a cover text can be manipulated to carry the hidden message. Only a recipient who knows the technique used can recover the message and then decrypt it. Francis Bacon developed Bacon's cipher as such a technique.
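As an illustration of how the characteristics of a cover text can carry a hidden message, the sketch below implements a simplified, 26-letter modern variant of Bacon's cipher (Bacon's original scheme used a 24-letter alphabet and two typefaces; the cover sentence, function names, and the use of letter case here are assumptions made for the example):

```python
# A simplified, modern-alphabet variant of Bacon's cipher (illustrative only).
# Each hidden letter -> 5 bits; each bit is rendered as the case of one
# cover-text letter (lower case = 0, upper case = 1).

def bacon_encode(secret: str, cover: str) -> str:
    bits = []
    for ch in secret.lower():
        if ch.isalpha():
            bits.extend(int(b) for b in format(ord(ch) - ord('a'), '05b'))
    out, i = [], 0
    for ch in cover:
        if ch.isalpha() and i < len(bits):
            out.append(ch.upper() if bits[i] else ch.lower())
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("cover text has too few letters for the secret message")
    return ''.join(out)

def bacon_decode(stego: str) -> str:
    bits = [1 if ch.isupper() else 0 for ch in stego if ch.isalpha()]
    letters = []
    for j in range(0, len(bits) - len(bits) % 5, 5):
        value = int(''.join(map(str, bits[j:j + 5])), 2)
        if value < 26:
            letters.append(chr(ord('a') + value))
    return ''.join(letters)

stego = bacon_encode("hi", "this is an innocuous looking cover sentence")
print(stego)
print(bacon_decode(stego))  # "hi", plus 'a's from the unused all-lower-case cover letters
```

Decoding recovers the hidden word; the trailing padding letters come from the unused remainder of the cover text, which a real scheme would mark or discard.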
The ciphertext produced by most digital steganography methods, however, is not printable. Traditional digital methods rely on perturbing noise in the channel file to hide the message, and as such, the channel file must be transmitted to the recipient with no additional noise from the transmission. Printing introduces much noise in the ciphertext, generally rendering the message unrecoverable. There are techniques that address this limitation, one notable example being ASCII Art Steganography.
Although not classic steganography, some types of modern color laser printers integrate the model, serial number, and timestamps on each printout for traceability reasons using a dot-matrix code made of small, yellow dots not recognizable to the naked eye — see printer steganography for details.
Network
In 2015, a taxonomy of 109 network hiding methods was presented by Steffen Wendzel, Sebastian Zander et al. that summarized core concepts used in network steganography research. The taxonomy was developed further in recent years by several publications and authors and adjusted to new domains, such as CPS steganography.
In 1977, Kent concisely described the potential for covert channel signaling in general network communication protocols, even if the traffic is encrypted (in a footnote) in "Encryption-Based Protection for Interactive User/Computer Communication," Proceedings of the Fifth Data Communications Symposium, September 1977.
In 1987, Girling first studied covert channels on a local area network (LAN), identifying and realising three obvious covert channels (two storage channels and one timing channel); his research paper entitled “Covert channels in LAN’s” was published in IEEE Transactions on Software Engineering, vol. SE-13, no. 2, in February 1987.
In 1989, Wolf implemented covert channels in LAN protocols, e.g. using the reserved fields, pad fields, and undefined fields in the TCP/IP protocol.
In 1997, Rowland used the IP identification field, the TCP initial sequence number and acknowledge sequence number fields in TCP/IP headers to build covert channels.
In 2002, Kamran Ahsan made an excellent summary of research on network steganography.
In 2005, Steven J. Murdoch and Stephen Lewis contributed a chapter entitled "Embedding Covert Channels into TCP/IP" in the "Information Hiding" book published by Springer.
All information hiding techniques that may be used to exchange steganograms in telecommunication networks can be classified under the general term of network steganography. This nomenclature was originally introduced by Krzysztof Szczypiorski in 2003. Contrary to typical steganographic methods that use digital media (images, audio and video files) to hide data, network steganography uses communication protocols' control elements and their intrinsic functionality. As a result, such methods can be harder to detect and eliminate.
Typical network steganography methods involve modification of the properties of a single network protocol. Such modification can be applied to the protocol data unit (PDU), to the time relations between the exchanged PDUs, or both (hybrid methods).
Moreover, it is feasible to utilize the relation between two or more different network protocols to enable secret communication. These applications fall under the term inter-protocol steganography. Alternatively, multiple network protocols can be used simultaneously to transfer hidden information and so-called control protocols can be embedded into steganographic communications to extend their capabilities, e.g. to allow dynamic overlay routing or the switching of utilized hiding methods and network protocols.
Network steganography covers a broad spectrum of techniques, which include, among others:
Steganophony – the concealment of messages in Voice-over-IP conversations, e.g. the employment of delayed or corrupted packets that would normally be ignored by the receiver (this method is called LACK – Lost Audio Packets Steganography), or, alternatively, hiding information in unused header fields.
WLAN Steganography – transmission of steganograms in Wireless Local Area Networks. A practical example of WLAN Steganography is the HICCUPS system (Hidden Communication System for Corrupted Networks)
Additional terminology
Discussions of steganography generally use terminology analogous to and consistent with conventional radio and communications technology. However, some terms appear specifically in software and are easily confused. These are the most relevant ones to digital steganographic systems:
The payload is the data covertly communicated. The carrier is the signal, stream, or data file that hides the payload, which differs from the channel, which typically means the type of input, such as a JPEG image. The resulting signal, stream, or data file with the encoded payload is sometimes called the package, stego file, or covert message. The proportion of bytes, samples, or other signal elements modified to encode the payload is called the encoding density and is typically expressed as a number between 0 and 1.
In a set of files, the files that are considered likely to contain a payload are suspects. A suspect identified through some type of statistical analysis can be referred to as a candidate.
Countermeasures and detection
Detecting physical steganography requires a careful physical examination, including the use of magnification, developer chemicals, and ultraviolet light. It is a time-consuming process with obvious resource implications, even in countries that employ many people to spy on their fellow nationals. However, it is feasible to screen mail of certain suspected individuals or institutions, such as prisons or prisoner-of-war (POW) camps.
During World War II, prisoner of war camps gave prisoners specially-treated paper that would reveal invisible ink. An article in the 24 June 1948 issue of Paper Trade Journal by the Technical Director of the United States Government Printing Office had Morris S. Kantrowitz describe in general terms the development of this paper. Three prototype papers (Sensicoat, Anilith, and Coatalith) were used to manufacture postcards and stationery provided to German prisoners of war in the US and Canada. If POWs tried to write a hidden message, the special paper rendered it visible. The US granted at least two patents related to the technology, one to Kantrowitz, , "Water-Detecting paper and Water-Detecting Coating Composition Therefor," patented 18 July 1950, and an earlier one, "Moisture-Sensitive Paper and the Manufacture Thereof," , patented 20 July 1948. A similar strategy issues prisoners with writing paper ruled with a water-soluble ink that runs in contact with water-based invisible ink.
In computing, steganographically encoded package detection is called steganalysis. The simplest method to detect modified files, however, is to compare them to known originals. For example, to detect information being moved through the graphics on a website, an analyst can maintain known clean copies of the materials and then compare them against the current contents of the site. The differences, if the carrier is the same, comprise the payload. In general, using extremely high compression rates makes steganography difficult but not impossible. Compression errors provide a hiding place for data, but high compression reduces the amount of data available to hold the payload, raising the encoding density, which facilitates easier detection (in extreme cases, even by casual observation).
There are a variety of basic tests that can be done to identify whether or not a secret message exists. This process is not concerned with the extraction of the message, which is a different process and a separate step. The most basic approaches of steganalysis are visual or aural attacks, structural attacks, and statistical attacks. These approaches attempt to detect the steganographic algorithms that were used. These algorithms range from unsophisticated to very sophisticated, with early algorithms being much easier to detect due to statistical anomalies that were present. The size of the message that is being hidden is a factor in how difficult it is to detect. The overall size of the cover object also plays a factor as well. If the cover object is small and the message is large, this can distort the statistics and make it easier to detect. A larger cover object with a small message decreases the statistics and gives it a better chance of going unnoticed.
Steganalysis that targets a particular algorithm has much better success as it is able to key in on the anomalies that are left behind. This is because the analysis can perform a targeted search to discover known tendencies, since it is aware of the behaviors that the algorithm commonly exhibits. When analyzing an image, the least significant bits of many images are actually not random. Camera sensors, especially lower-end sensors, are not of the best quality and can introduce some random bits. This can also be affected by the file compression done on the image. Secret messages can be introduced into the least significant bits in an image and then hidden. A steganography tool can be used to camouflage the secret message in the least significant bits, but it can introduce a random area that is too perfect. This area of perfect randomization stands out and can be detected by comparing the least significant bits to the next-to-least significant bits on an image that hasn't been compressed.
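A very small illustration of that comparison (a sketch under simplifying assumptions, not a production steganalysis tool; the pixel values and the threshold are made up for the example) is to check how balanced the least-significant-bit plane is relative to the next bit plane:

```python
# Illustrative LSB-plane balance check. Real steganalysis uses chi-square
# tests, RS analysis, and learned detectors; this only shows the idea of
# comparing the LSB plane with the next-to-least significant bit plane.
from typing import Sequence

def bit_plane_balance(values: Sequence[int], bit: int) -> float:
    """Fraction of values whose given bit (0 = LSB, 1 = next bit) is set."""
    ones = sum((v >> bit) & 1 for v in values)
    return ones / len(values)

def looks_suspicious(values: Sequence[int], tolerance: float = 0.01) -> bool:
    lsb = bit_plane_balance(values, 0)
    next_bit = bit_plane_balance(values, 1)
    # An encrypted payload drives the LSB plane toward a near-perfect 0.5 split
    # even when the neighbouring bit plane remains noticeably biased.
    return abs(lsb - 0.5) < tolerance and abs(next_bit - 0.5) > 5 * tolerance

# Made-up pixel values for demonstration only:
pixels = [120, 121, 119, 118, 122, 120, 121, 119] * 100
print(bit_plane_balance(pixels, 0), bit_plane_balance(pixels, 1))
print(looks_suspicious(pixels))  # False for this toy data
```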
Generally, though, there are many techniques known to be able to hide messages in data using steganographic techniques. None are, by definition, obvious when users employ standard applications, but some can be detected by specialist tools. Others, however, are resistant to detection—or rather it is not possible to reliably distinguish data containing a hidden message from data containing just noise—even when the most sophisticated analysis is performed. Steganography is being used to conceal and deliver more effective cyber attacks, referred to as Stegware. The term Stegware was first introduced in 2017 to describe any malicious operation involving steganography as a vehicle to conceal an attack. Detection of steganography is challenging, and because of that, not an adequate defence. Therefore, the only way of defeating the threat is to transform data in a way that destroys any hidden messages, a process called Content Threat Removal.
Applications
Use in modern printers
Some modern computer printers use steganography, including Hewlett-Packard and Xerox brand color laser printers. The printers add tiny yellow dots to each page. The barely-visible dots contain encoded printer serial numbers and date and time stamps.
Example from modern practice
The larger the cover message (in binary data, the number of bits) relative to the hidden message, the easier it is to hide the hidden message (as an analogy, the larger the "haystack", the easier it is to hide a "needle"). So digital pictures, which contain much data, are sometimes used to hide messages on the Internet and on other digital communication media. It is not clear how common this practice actually is.
For example, a 24-bit bitmap uses 8 bits to represent each of the three color values (red, green, and blue) of each pixel. The blue channel alone therefore has 2^8 = 256 different levels of intensity. The difference between the binary values 11111111 and 11111110 in the value for blue intensity is likely to be undetectable by the human eye. Therefore, the least significant bit can be used more or less undetectably for something else other than color information. If that is repeated for the green and the red elements of each pixel as well, it is possible to encode one letter of ASCII text for every three pixels.
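A minimal sketch of this least-significant-bit encoding follows (illustrative only: it operates on a flat list of 8-bit colour channel values rather than on a real bitmap file, uses 8 of the 9 bits available in every three pixels, and the cover values and message are made up):

```python
# Minimal LSB embedding/extraction over a flat list of 8-bit channel values.
# Illustrative only; real tools work on actual image formats.

def embed(channels: list[int], text: str) -> list[int]:
    bits = [(ord(c) >> i) & 1 for c in text for i in range(7, -1, -1)]
    if len(bits) > len(channels):
        raise ValueError("cover is too small for the message")
    out = channels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the least significant bit
    return out

def extract(channels: list[int], length: int) -> str:
    bits = [v & 1 for v in channels[:length * 8]]
    return ''.join(
        chr(int(''.join(map(str, bits[i:i + 8])), 2)) for i in range(0, len(bits), 8)
    )

cover = [200, 17, 54, 90, 33, 128, 64, 250, 7, 76, 180, 41] * 4   # fake pixel data
stego = embed(cover, "Hi")
print(extract(stego, 2))                                           # -> Hi
print(sum(a != b for a, b in zip(cover, stego)), "of", len(cover), "values changed")
```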
Stated somewhat more formally, the objective for making steganographic encoding difficult to detect is to ensure that the changes to the carrier (the original signal) because of the injection of the payload (the signal to covertly embed) are visually (and ideally, statistically) negligible. The changes are indistinguishable from the noise floor of the carrier. All media can be a carrier, but media with a large amount of redundant or compressible information is better suited.
From an information theoretical point of view, that means that the channel must have more capacity than the "surface" signal requires. There must be redundancy. For a digital image, it may be noise from the imaging element; for digital audio, it may be noise from recording techniques or amplification equipment. In general, electronics that digitize an analog signal suffer from several noise sources, such as thermal noise, flicker noise, and shot noise. The noise provides enough variation in the captured digital information that it can be exploited as a noise cover for hidden data. In addition, lossy compression schemes (such as JPEG) always introduce some error to the decompressed data, and it is possible to exploit that for steganographic use, as well.
Although steganography and digital watermarking seem similar, they are not. In steganography, the hidden message should remain intact until it reaches its destination. Steganography can be used for digital watermarking in which a message (being simply an identifier) is hidden in an image so that its source can be tracked or verified (for example, Coded Anti-Piracy) or even just to identify an image (as in the EURion constellation). In such a case, the technique of hiding the message (here, the watermark) must be robust to prevent tampering. However, digital watermarking sometimes requires a brittle watermark, which can be modified easily, to check whether the image has been tampered with. That is the key difference between steganography and digital watermarking.
Alleged use by intelligence services
In 2010, the Federal Bureau of Investigation alleged that the Russian foreign intelligence service uses customized steganography software for embedding encrypted text messages inside image files for certain communications with "illegal agents" (agents without diplomatic cover) stationed abroad.
On 23 April 2019 the U.S. Department of Justice unsealed an indictment charging Xiaoqing Zheng, a Chinese businessman and former Principal Engineer at General Electric, with 14 counts of conspiring to steal intellectual property and trade secrets from General Electric. Zheng had allegedly used steganography to exfiltrate 20,000 documents from General Electric to Tianyi Aviation Technology Co. in Nanjing, China, a company the FBI accused him of starting with backing from the Chinese government.
Distributed steganography
There are distributed steganography methods, including methodologies that distribute the payload through multiple carrier files in diverse locations to make detection more difficult, such as the approach proposed by cryptographer William Easttom (Chuck Easttom).
Online challenge
The puzzles presented by Cicada 3301 have incorporated steganography, cryptography, and other solving techniques since 2012. Puzzles involving steganography have also been featured in other alternate reality games.
The communications of The May Day mystery have incorporated steganography and other solving techniques since 1981.
Computer malware
It is possible to steganographically hide computer malware into digital images, videos, audio and various other files in order to evade detection by antivirus software. This type of malware is called stegomalware. It can be activated by external code, which can be malicious or even non-malicious if some vulnerability in the software reading the file is exploited.
Stegomalware can be removed from certain files without knowing whether they contain stegomalware or not. This is done through content disarm and reconstruction (CDR) software, and it involves reprocessing the entire file or removing parts from it. Actually detecting stegomalware in a file can be difficult and may involve testing the file behaviour in virtual environments or deep learning analysis of the file.
Steganalysis
Stegoanalytical algorithms
Stegoanalytical algorithms can be classified in several ways, most notably according to the information available to the analyst and according to the purpose sought.
According to the information available
These algorithms can be catalogued based on the information available to the stegoanalyst, in terms of the original (cover) and modified (stego) objects. The classification parallels the one used in cryptanalysis, although the two fields differ in several respects:
Chosen stego attack: the stegoanalyst has access to the final stego object and knows the steganographic algorithm used.
Known cover attack: the stegoanalyst has both the original cover object and the final stego object.
Known stego attack: the stegoanalyst knows the original cover object and the final stego object, as well as the algorithm used.
Stego only attack: the stegoanalyst has access only to the stego object.
Chosen message attack: the stegoanalyst generates a stego object from a message of their own choosing.
Known message attack: the stegoanalyst has both the stego object and the hidden message.
According to the purpose sought
The principal purpose of steganography is to transfer information unnoticed; however, an attacker performing steganalysis may have two different aims:
Passive steganalysis: does not alter the stego object; it examines the stego object to establish whether it carries hidden information and, if so, to recover the hidden message, the key used, or both.
Active steganalysis: modifies the original stego object, seeking to suppress the transfer of hidden information if any exists.
| Technology | Computer security | null |
28736 | https://en.wikipedia.org/wiki/Speed%20of%20light | Speed of light | The speed of light in vacuum, commonly denoted c, is a universal physical constant that is exactly equal to 299,792,458 metres per second (approximately 300,000 km/s). According to the special theory of relativity, c is the upper limit for the speed at which conventional matter or energy (and thus any signal carrying information) can travel through space.
All forms of electromagnetic radiation, including visible light, travel at the speed of light. For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and very sensitive measurements, their finite speed has noticeable effects. Much starlight viewed on Earth is from the distant past, allowing humans to study the history of the universe by viewing distant objects. When communicating with distant space probes, it can take minutes to hours for signals to travel. In computing, the speed of light fixes the ultimate minimum communication delay. The speed of light can be used in time of flight measurements to measure large distances to extremely high precision.
Ole Rømer first demonstrated in 1676 that light does not travel instantaneously by studying the apparent motion of Jupiter's moon Io. Progressively more accurate measurements of its speed came over the following centuries. In a paper published in 1865, James Clerk Maxwell proposed that light was an electromagnetic wave and, therefore, travelled at speed c. In 1905, Albert Einstein postulated that the speed of light with respect to any inertial frame of reference is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity and, in doing so, showed that the parameter c had relevance outside of the context of light and electromagnetism.
Massless particles and field perturbations, such as gravitational waves, also travel at speed c in vacuum. Such particles and waves travel at c regardless of the motion of the source or the inertial reference frame of the observer. Particles with nonzero rest mass can be accelerated to approach c but can never reach it, regardless of the frame of reference in which their speed is measured. In the theory of relativity, c interrelates space and time and appears in the famous mass–energy equivalence, E = mc².
In some cases, objects or waves may appear to travel faster than light (e.g., phase velocities of waves, the appearance of certain high-speed astronomical objects, and particular quantum effects). The expansion of the universe is understood to exceed the speed of light beyond a certain boundary.
The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of electromagnetic waves in wire cables is slower than c. The ratio between c and the speed at which light travels in a material is called the refractive index of the material (n). For example, for visible light, the refractive index of glass is typically around 1.5, meaning that light in glass travels at about c/1.5 ≈ 200,000 km/s; the refractive index of air for visible light is about 1.0003, so the speed of light in air is about 90 km/s slower than c.
Numerical value, notation, and units
The speed of light in vacuum is usually denoted by a lowercase c, for "constant" or the Latin celeritas (meaning 'swiftness, celerity'). In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant that was later shown to equal √2 times the speed of light in vacuum. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of light.
Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum. This subscripted notation, which is endorsed in official SI literature, has the same form as related electromagnetic constants: namely, μ0 for the vacuum permeability or magnetic constant, ε0 for the vacuum permittivity or electric constant, and Z0 for the impedance of free space. This article uses c exclusively for the speed of light in vacuum.
Use in unit systems
Since 1983, the constant c has been defined in the International System of Units (SI) as exactly 299,792,458 metres per second; this relationship is used to define the metre as exactly the distance that light travels in vacuum in 1/299,792,458 of a second. By using the value of c, as well as an accurate measurement of the second, one can thus establish a standard for the metre. As a dimensional physical constant, the numerical value of c is different for different unit systems. For example, in imperial units, the speed of light is approximately 186,282 miles per second, or roughly 1 foot per nanosecond.
In branches of physics in which c appears often, such as in relativity, it is common to use systems of natural units of measurement or the geometrized unit system, where c = 1. Using these units, c does not appear explicitly because multiplication or division by 1 does not affect the result. Its unit of light-second per second is still relevant, even if omitted.
Fundamental role in physics
The speed at which light waves propagate in vacuum is independent both of the motion of the wave source and of the inertial frame of reference of the observer. This invariance of the speed of light was postulated by Einstein in 1905, after being motivated by Maxwell's theory of electromagnetism and the lack of evidence for motion against the luminiferous aether. It has since been consistently confirmed by many experiments. It is only possible to verify experimentally that the two-way speed of light (for example, from a source to a mirror and back again) is frame-independent, because it is impossible to measure the one-way speed of light (for example, from a source to a distant detector) without some convention as to how clocks at the source and at the detector should be synchronized.
By adopting Einstein synchronization for the clocks, the one-way speed of light becomes equal to the two-way speed of light by definition. The special theory of relativity explores the consequences of this invariance of c with the assumption that the laws of physics are the same in all inertial frames of reference. One consequence is that c is the speed at which all massless particles and waves, including light, must travel in vacuum.
Special relativity has many counterintuitive and experimentally verified implications. These include the equivalence of mass and energy (E = mc²), length contraction (moving objects shorten), and time dilation (moving clocks run more slowly). The factor γ by which lengths contract and times dilate is known as the Lorentz factor and is given by γ = 1/√(1 − v²/c²), where v is the speed of the object. The difference of γ from 1 is negligible for speeds much slower than c, such as most everyday speeds (in which case special relativity is closely approximated by Galilean relativity), but it increases at relativistic speeds and diverges to infinity as v approaches c. For example, a time dilation factor of γ = 2 occurs at a relative velocity of 86.6% of the speed of light (v = 0.866 c). Similarly, a time dilation factor of γ = 10 occurs at 99.5% of the speed of light (v = 0.995 c).
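As a quick numerical check of the time dilation figures quoted above, the following sketch evaluates the Lorentz factor for a few speeds (the 0.866c and 0.995c entries reproduce the γ ≈ 2 and γ ≈ 10 examples):

```python
import math

def lorentz_factor(v_over_c):
    """Return gamma = 1 / sqrt(1 - (v/c)^2) for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for beta in (0.1, 0.5, 0.866, 0.995):
    print(f"v = {beta:.3f} c  ->  gamma = {lorentz_factor(beta):.3f}")
# v = 0.866 c gives gamma ≈ 2.00, and v = 0.995 c gives gamma ≈ 10.01,
# matching the worked examples in the text.
```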
The results of special relativity can be summarized by treating space and time as a unified structure known as spacetime (with c relating the units of space and time), and requiring that physical theories satisfy a special symmetry called Lorentz invariance, whose mathematical formulation contains the parameter c. Lorentz invariance is an almost universal assumption for modern physical theories, such as quantum electrodynamics, quantum chromodynamics, the Standard Model of particle physics, and general relativity. As such, the parameter c is ubiquitous in modern physics, appearing in many contexts that are unrelated to light. For example, general relativity predicts that c is also the speed of gravity and of gravitational waves, and observations of gravitational waves have been consistent with this prediction. In non-inertial frames of reference (gravitationally curved spacetime or accelerated reference frames), the local speed of light is constant and equal to c, but the speed of light can differ from c when measured from a remote frame of reference, depending on how measurements are extrapolated to the region.
It is generally assumed that fundamental constants such as c have the same value throughout spacetime, meaning that they do not depend on location and do not vary with time. However, it has been suggested in various theories that the speed of light may have changed over time. No conclusive evidence for such changes has been found, but they remain the subject of ongoing research.
It is generally assumed that the two-way speed of light is isotropic, meaning that it has the same value regardless of the direction in which it is measured. Observations of the emissions from nuclear energy levels as a function of the orientation of the emitting nuclei in a magnetic field (see Hughes–Drever experiment), and of rotating optical resonators (see Resonator experiments) have put stringent limits on the possible two-way anisotropy.
Upper limit on speeds
According to special relativity, the energy of an object with rest mass m and speed v is given by E = γmc², where γ is the Lorentz factor defined above. When v is zero, γ is equal to one, giving rise to the famous E = mc² formula for mass–energy equivalence. The γ factor approaches infinity as v approaches c, and it would take an infinite amount of energy to accelerate an object with mass to the speed of light. The speed of light is the upper limit for the speeds of objects with positive rest mass, and individual photons cannot travel faster than the speed of light. This is experimentally established in many tests of relativistic energy and momentum.
More generally, it is impossible for signals or energy to travel faster than c. One argument for this follows from the counter-intuitive implication of special relativity known as the relativity of simultaneity. If the spatial distance between two events A and B is greater than the time interval between them multiplied by c then there are frames of reference in which A precedes B, others in which B precedes A, and others in which they are simultaneous. As a result, if something were travelling faster than c relative to an inertial frame of reference, it would be travelling backwards in time relative to another frame, and causality would be violated. In such a frame of reference, an "effect" could be observed before its "cause". Such a violation of causality has never been recorded, and would lead to paradoxes such as the tachyonic antitelephone.
Faster-than-light observations and experiments
There are situations in which it may seem that matter, energy, or an information-carrying signal travels at speeds greater than c, but none of them do. For example, as discussed in the propagation of light in a medium section below, many wave velocities can exceed c. The phase velocity of X-rays through most glasses can routinely exceed c, but phase velocity does not determine the velocity at which waves convey information.
If a laser beam is swept quickly across a distant object, the spot of light can move faster than c, although the initial movement of the spot is delayed because of the time it takes light to get to the distant object at the speed c. However, the only physical entities that are moving are the laser and its emitted light, which travels at the speed c from the laser to the various positions of the spot. Similarly, a shadow projected onto a distant object can be made to move faster than c, after a delay in time. In neither case does any matter, energy, or information travel faster than light.
The rate of change in the distance between two objects in a frame of reference with respect to which both are moving (their closing speed) may have a value in excess of c. However, this does not represent the speed of any single object as measured in a single inertial frame.
Certain quantum effects appear to be transmitted instantaneously and therefore faster than c, as in the EPR paradox. An example involves the quantum states of two particles that can be entangled. Until either of the particles is observed, they exist in a superposition of two quantum states. If the particles are separated and one particle's quantum state is observed, the other particle's quantum state is determined instantaneously. However, it is impossible to control which quantum state the first particle will take on when it is observed, so information cannot be transmitted in this manner.
Another quantum effect that predicts the occurrence of faster-than-light speeds is called the Hartman effect: under certain conditions the time needed for a virtual particle to tunnel through a barrier is constant, regardless of the thickness of the barrier. This could result in a virtual particle crossing a large gap faster than light. However, no information can be sent using this effect.
So-called superluminal motion is seen in certain astronomical objects, such as the relativistic jets of radio galaxies and quasars. However, these jets are not moving at speeds in excess of the speed of light: the apparent superluminal motion is a projection effect caused by objects moving near the speed of light and approaching Earth at a small angle to the line of sight: since the light which was emitted when the jet was farther away took longer to reach the Earth, the time between two successive observations corresponds to a longer time between the instants at which the light rays were emitted.
A 2011 experiment where neutrinos were observed to travel faster than light turned out to be due to experimental error.
In models of the expanding universe, the farther galaxies are from each other, the faster they drift apart. For example, galaxies far away from Earth are inferred to be moving away from the Earth with speeds proportional to their distances. Beyond a boundary called the Hubble sphere, the rate at which their distance from Earth increases becomes greater than the speed of light.
These recession rates, defined as the increase in proper distance per cosmological time, are not velocities in a relativistic sense. Faster-than-light cosmological recession speeds are only a coordinate artifact.
Propagation of light
In classical physics, light is described as a type of electromagnetic wave. The classical behaviour of the electromagnetic field is described by Maxwell's equations, which predict that the speed c with which electromagnetic waves (such as light) propagate in vacuum is related to the distributed capacitance and inductance of vacuum, otherwise respectively known as the electric constant ε0 and the magnetic constant μ0, by the equation c = 1/√(ε0μ0).
In modern quantum physics, the electromagnetic field is described by the theory of quantum electrodynamics (QED). In this theory, light is described by the fundamental excitations (or quanta) of the electromagnetic field, called photons. In QED, photons are massless particles and thus, according to special relativity, they travel at the speed of light in vacuum.
Extensions of QED in which the photon has a mass have been considered. In such a theory, its speed would depend on its frequency, and the invariant speed c of special relativity would then be the upper limit of the speed of light in vacuum. No variation of the speed of light with frequency has been observed in rigorous testing, putting stringent limits on the mass of the photon. The limit obtained depends on the model used: if the massive photon is described by Proca theory, the experimental upper bound for its mass is about 10⁻⁵⁷ grams; if photon mass is generated by a Higgs mechanism, the experimental upper limit is less sharp (roughly 2 × 10⁻⁴⁷ g).
Another reason for the speed of light to vary with its frequency would be the failure of special relativity to apply to arbitrarily small scales, as predicted by some proposed theories of quantum gravity. In 2009, the observation of gamma-ray burst GRB 090510 found no evidence for a dependence of photon speed on energy, supporting tight constraints in specific models of spacetime quantization on how this speed is affected by photon energy for energies approaching the Planck scale.
In a medium
In a medium, light usually does not propagate at a speed equal to c; further, different types of light wave will travel at different speeds. The speed at which the individual crests and troughs of a plane wave (a wave filling the whole space, with only one frequency) propagate is called the phase velocity vp. A physical signal with a finite extent (a pulse of light) travels at a different speed. The overall envelope of the pulse travels at the group velocity vg, and its earliest part travels at the front velocity vf.
The phase velocity is important in determining how a light wave travels through a material or from one material to another. It is often represented in terms of a refractive index. The refractive index of a material is defined as the ratio of c to the phase velocity vp in the material: larger indices of refraction indicate lower speeds. The refractive index of a material may depend on the light's frequency, intensity, polarization, or direction of propagation; in many cases, though, it can be treated as a material-dependent constant. The refractive index of air is approximately 1.0003. Denser media, such as water, glass, and diamond, have refractive indexes of around 1.3, 1.5 and 2.4, respectively, for visible light.
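As a quick illustration of how refractive index relates to phase velocity (v = c/n), the approximate indices quoted above translate into the following speeds; the printed values are rounded, illustrative figures:

```python
C_LIGHT = 299_792_458   # speed of light in vacuum, m/s

# Approximate refractive indices for visible light, as quoted in the text.
for name, n in [("air", 1.0003), ("water", 1.3), ("glass", 1.5), ("diamond", 2.4)]:
    print(f"{name:>7}: n = {n:<6}  ->  v ≈ {C_LIGHT / n / 1e3:,.0f} km/s")
```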
In exotic materials like Bose–Einstein condensates near absolute zero, the effective speed of light may be only a few metres per second. However, this represents absorption and re-radiation delay between atoms, as do all slower-than-c speeds in material substances. As an extreme example of light "slowing" in matter, two independent teams of physicists claimed to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium. The popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrarily later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light. This type of behaviour is generally microscopically true of all transparent media which "slow" the speed of light.
In transparent materials, the refractive index generally is greater than 1, meaning that the phase velocity is less than c. In other materials, it is possible for the refractive index to become smaller than 1 for some frequencies; in some exotic materials it is even possible for the index of refraction to become negative. The requirement that causality is not violated implies that the real and imaginary parts of the dielectric constant of any material, corresponding respectively to the index of refraction and to the attenuation coefficient, are linked by the Kramers–Kronig relations. In practical terms, this means that in a material with refractive index less than 1, the wave will be absorbed quickly.
A pulse with different group and phase velocities (which occurs if the phase velocity is not the same for all the frequencies of the pulse) smears out over time, a process known as dispersion. Certain materials have an exceptionally low (or even zero) group velocity for light waves, a phenomenon called slow light.
The opposite, group velocities exceeding c, was proposed theoretically in 1993 and achieved experimentally in 2000. It should even be possible for the group velocity to become infinite or negative, with pulses travelling instantaneously or backwards in time.
None of these options allow information to be transmitted faster than c. It is impossible to transmit information with a light pulse any faster than the speed of the earliest part of the pulse (the front velocity). It can be shown that this is (under certain assumptions) always equal to c.
It is possible for a particle to travel through a medium faster than the phase velocity of light in that medium (but still slower than c). When a charged particle does that in a dielectric material, the electromagnetic equivalent of a shock wave, known as Cherenkov radiation, is emitted.
Practical effects of finiteness
The speed of light is of relevance to telecommunications: the one-way and round-trip delay times are greater than zero. This applies from small to astronomical scales. On the other hand, some techniques depend on the finite speed of light, for example in distance measurements.
Small scales
In computers, the speed of light imposes a limit on how quickly data can be sent between processors. If a processor operates at 1 gigahertz, a signal can travel only a maximum of about 30 centimetres in a single clock cycle – in practice, this distance is even shorter since the printed circuit board refracts and slows down signals. Processors and memory chips must therefore be placed close to each other to minimize communication latencies, and care must be exercised when routing wires between them to ensure signal integrity. If clock frequencies continue to increase, the speed of light may eventually become a limiting factor for the internal design of single chips.
Large distances on Earth
Given that the equatorial circumference of the Earth is about 40,075 km and that c is about 300,000 km/s, the theoretical shortest time for a piece of information to travel half the globe along the surface is about 67 milliseconds. When light is traveling in optical fibre (a transparent material) the actual transit time is longer, in part because the speed of light is slower by about 35% in optical fibre, depending on its refractive index n. Straight lines are rare in global communications and the travel time increases when signals pass through electronic switches or signal regenerators.
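A back-of-the-envelope check of the chip-scale and globe-scale figures above, assuming the defined value of c and an equatorial circumference of roughly 40,075 km:

```python
C_LIGHT = 299_792_458            # speed of light in vacuum, m/s
EARTH_CIRCUMFERENCE = 40_075e3   # equatorial circumference, m (approximate)

# Distance light covers during one clock cycle of a 1 GHz processor.
cycle_distance = C_LIGHT * 1e-9
print(f"one 1 GHz clock cycle: {cycle_distance * 100:.1f} cm")   # ≈ 30 cm

# Theoretical minimum one-way latency for a signal travelling half the
# globe along the surface at the speed of light.
half_globe_time = (EARTH_CIRCUMFERENCE / 2) / C_LIGHT
print(f"half the globe at c: {half_globe_time * 1e3:.1f} ms")    # ≈ 66.8 ms, i.e. about 67 ms
```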
Although this distance is largely irrelevant for most applications, latency becomes important in fields such as high-frequency trading, where traders seek to gain minute advantages by delivering their trades to exchanges fractions of a second ahead of other traders. For example, traders have been switching to microwave communications between trading hubs, because of the advantage which radio waves travelling at near to the speed of light through air have over comparatively slower fibre optic signals.
Spaceflight and astronomy
Similarly, communications between the Earth and spacecraft are not instantaneous. There is a brief delay from the source to the receiver, which becomes more noticeable as distances increase. This delay was significant for communications between ground control and Apollo 8 when it became the first crewed spacecraft to orbit the Moon: for every question, the ground control station had to wait at least three seconds for the answer to arrive.
The communications delay between Earth and Mars can vary between five and twenty minutes depending upon the relative positions of the two planets. As a consequence of this, if a robot on the surface of Mars were to encounter a problem, its human controllers would not be aware of it until the signal arrived, five to twenty minutes later, and it would then take an equally long time for commands to travel from Earth to Mars.
Receiving light and other signals from distant astronomical sources takes much longer. For example, it takes 13 billion years for light to travel to Earth from the faraway galaxies viewed in the Hubble Ultra-Deep Field images. Those photographs, taken today, capture images of the galaxies as they appeared 13 billion years ago, when the universe was less than a billion years old. The fact that more distant objects appear to be younger, due to the finite speed of light, allows astronomers to infer the evolution of stars, of galaxies, and of the universe itself.
Astronomical distances are sometimes expressed in light-years, especially in popular science publications and media. A light-year is the distance light travels in one Julian year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecs. In round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion miles. Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.
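The light-year arithmetic is easy to verify: one Julian year of 365.25 days at the defined value of c comes out just under 10 trillion kilometres.

```python
C_LIGHT = 299_792_458                 # m/s, exact by definition
JULIAN_YEAR = 365.25 * 86_400         # seconds in one Julian year

light_year_km = C_LIGHT * JULIAN_YEAR / 1_000
print(f"1 light-year ≈ {light_year_km:.3e} km")   # ≈ 9.461e12 km
```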
Distance measurement
Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. A Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's position. Because light travels about () in one second, these measurements of small fractions of a second must be very precise. The Lunar Laser Ranging experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit times.
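A minimal sketch of the ranging relation described above (distance equals half the round-trip time multiplied by c); the 1 ms echo delay is an arbitrary illustrative value, not data from any particular radar.

```python
C_LIGHT = 299_792_458   # m/s

def range_from_echo(round_trip_seconds):
    """Distance to a radar target from the measured round-trip time of the pulse."""
    return C_LIGHT * round_trip_seconds / 2

# Example: an echo arriving 1 millisecond after transmission.
print(f"{range_from_echo(1e-3) / 1000:.1f} km")   # ≈ 149.9 km
```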
Measurement
There are different ways to determine the value of c. One way is to measure the actual speed at which light waves propagate, which can be done in various astronomical and Earth-based setups. It is also possible to determine c from other physical laws where it appears, for example, by determining the values of the electromagnetic constants ε0 and μ0 and using their relation to c. Historically, the most accurate results have been obtained by separately determining the frequency and wavelength of a light beam, with their product equalling c. This is described in more detail in the "Interferometry" section below.
In 1983 the metre was defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second", fixing the value of the speed of light at 299,792,458 m/s by definition, as described below. Consequently, accurate measurements of the speed of light yield an accurate realization of the metre rather than an accurate value of c.
Astronomical measurements
Outer space is a convenient setting for measuring the speed of light because of its large scale and nearly perfect vacuum. Typically, one measures the time needed for light to traverse some reference distance in the Solar System, such as the radius of the Earth's orbit. Historically, such measurements could be made fairly accurately, compared to how accurately the length of the reference distance is known in Earth-based units.
Ole Rømer used an astronomical measurement to make the first quantitative estimate of the speed of light in the year 1676. When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from it. The difference is small, but the cumulative time becomes significant when measured over months. The distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun. The observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distance. Rømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.
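Plugging Rømer's 22-minute figure into the modern value of the astronomical unit (taking the diameter of Earth's orbit as about 2 au) gives a speed roughly a quarter below the modern value; Huygens's own estimate, based on his different figure for the orbit's size, came out about 27% low, as noted later in this article.

```python
AU = 1.495_978_707e11     # astronomical unit in metres (modern definition)
C_LIGHT = 299_792_458     # m/s

# Rømer's estimate: light crosses the diameter of Earth's orbit (~2 au) in 22 minutes.
roemer_speed = 2 * AU / (22 * 60)
print(f"implied speed: {roemer_speed:.3e} m/s")                        # ≈ 2.27e8 m/s
print(f"shortfall vs modern value: {1 - roemer_speed / C_LIGHT:.0%}")  # ≈ 24%
```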
Another method is to use the aberration of light, discovered and explained by James Bradley in the 18th century. This effect results from the vector addition of the velocity of light arriving from a distant source (such as a star) and the velocity of its observer. A moving observer thus sees the light coming from a slightly different direction and consequently sees the source at a position shifted from its original position. Since the direction of the Earth's velocity changes continuously as the Earth orbits the Sun, this effect causes the apparent position of stars to move around. From the angular difference in the position of stars (maximally 20.5 arcseconds) it is possible to express the speed of light in terms of the Earth's velocity around the Sun, which with the known length of a year can be converted to the time needed to travel from the Sun to the Earth. In 1729, Bradley used this method to derive that light travelled 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.
Astronomical unit
An astronomical unit (AU) is approximately the average distance between the Earth and Sun. It was redefined in 2012 as exactly 149,597,870,700 metres. Previously the AU was not based on the International System of Units but was defined in terms of the gravitational force exerted by the Sun in the framework of classical mechanics. The current definition uses the recommended value in metres for the previous definition of the astronomical unit, which was determined by measurement. This redefinition is analogous to that of the metre and likewise has the effect of fixing the speed of light to an exact value in astronomical units per second (via the exact speed of light in metres per second).
Previously, the inverse of c expressed in seconds per astronomical unit was measured by comparing the time for radio signals to reach different spacecraft in the Solar System, with their position calculated from the gravitational effects of the Sun and various planets. By combining many such measurements, a best fit value for the light time per unit distance could be obtained. For example, in 2009, the best estimate, as approved by the International Astronomical Union (IAU), was:
light time for unit distance: tau = ,
c = = .
The relative uncertainty in these measurements is 0.02 parts per billion, equivalent to the uncertainty in Earth-based measurements of length by interferometry. Since the metre is defined to be the length travelled by light in a certain time interval, the measurement of the light time in terms of the previous definition of the astronomical unit can also be interpreted as measuring the length of an AU (old definition) in metres.
Time of flight techniques
A method of measuring the speed of light is to measure the time needed for light to travel to a mirror at a known distance and back. This is the working principle behind experiments by Hippolyte Fizeau and Léon Foucault.
The setup as used by Fizeau consists of a beam of light directed at a mirror a known distance away. On the way from the source to the mirror, the beam passes through a rotating cogwheel. At a certain rate of rotation, the beam passes through one gap on the way out and another on the way back, but at slightly higher or lower rates, the beam strikes a tooth and does not pass through the wheel. Knowing the distance between the wheel and the mirror, the number of teeth on the wheel, and the rate of rotation, the speed of light can be calculated.
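A sketch of the cogwheel arithmetic, under the simplifying assumption that the wheel advances by exactly one tooth-and-gap spacing while the light makes its round trip, so the returning beam passes through the next gap; the baseline, tooth count, and rotation rate below are illustrative placeholders, not Fizeau's actual experimental values.

```python
def cogwheel_speed_of_light(distance_m, n_teeth, rotations_per_s):
    """Speed of light from a Fizeau-style cogwheel measurement.

    Assumes the wheel turns by one full tooth-and-gap spacing (1/n_teeth of a
    revolution) while the light travels to the mirror and back, so that the
    returning beam passes through the next gap.
    """
    round_trip = distance_m * 2
    time_per_spacing = 1.0 / (n_teeth * rotations_per_s)  # seconds per tooth-and-gap
    return round_trip / time_per_spacing

# Illustrative numbers only: an 8 km baseline, 720 teeth, 26 rotations per second.
print(f"{cogwheel_speed_of_light(8_000, 720, 26.0):.3e} m/s")   # ≈ 3.0e8 m/s with these values
```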
The method of Foucault replaces the cogwheel with a rotating mirror. Because the mirror keeps rotating while the light travels to the distant mirror and back, the light is reflected from the rotating mirror at a different angle on its way out than it is on its way back. From this difference in angle, the known speed of rotation and the distance to the distant mirror the speed of light may be calculated. Foucault used this apparatus to measure the speed of light in air versus water, based on a suggestion by François Arago.
Today, using oscilloscopes with time resolutions of less than one nanosecond, the speed of light can be directly measured by timing the delay of a light pulse from a laser or an LED reflected from a mirror. This method is less precise (with errors of the order of 1%) than other modern techniques, but it is sometimes used as a laboratory experiment in college physics classes.
Electromagnetic constants
An option for deriving c that does not directly depend on a measurement of the propagation of electromagnetic waves is to use the relation between c and the vacuum permittivity ε0 and vacuum permeability μ0 established by Maxwell's theory: c² = 1/(ε0μ0). The vacuum permittivity may be determined by measuring the capacitance and dimensions of a capacitor, whereas the value of the vacuum permeability was historically fixed at exactly 4π × 10⁻⁷ H/m through the definition of the ampere. Rosa and Dorsey used this method in 1907 to find a value of . Their method depended upon having a standard unit of electrical resistance, the "international ohm", and so its accuracy was limited by how this standard was defined.
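As a numerical check of the Maxwell relation, using the CODATA 2018 values of the two constants:

```python
import math

EPSILON_0 = 8.854_187_8128e-12   # vacuum permittivity, F/m (CODATA 2018)
MU_0 = 1.256_637_062_12e-6       # vacuum permeability, H/m (CODATA 2018)

c = 1.0 / math.sqrt(EPSILON_0 * MU_0)
print(f"c = {c:.6e} m/s")   # ≈ 2.997925e8 m/s
```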
Cavity resonance
Another way to measure the speed of light is to independently measure the frequency f and wavelength λ of an electromagnetic wave in vacuum. The value of c can then be found by using the relation c = fλ. One option is to measure the resonance frequency of a cavity resonator. If the dimensions of the resonance cavity are also known, these can be used to determine the wavelength of the wave. In 1946, Louis Essen and A.C. Gordon-Smith established the frequency for a variety of normal modes of microwaves of a microwave cavity of precisely known dimensions. The dimensions were established to an accuracy of about ±0.8 μm using gauges calibrated by interferometry. As the wavelength of the modes was known from the geometry of the cavity and from electromagnetic theory, knowledge of the associated frequencies enabled a calculation of the speed of light.
The Essen–Gordon-Smith result, , was substantially more precise than those found by optical techniques. By 1950, repeated measurements by Essen established a result of .
A household demonstration of this technique is possible, using a microwave oven and food such as marshmallows or margarine: if the turntable is removed so that the food does not move, it will cook the fastest at the antinodes (the points at which the wave amplitude is the greatest), where it will begin to melt. The distance between two such spots is half the wavelength of the microwaves; by measuring this distance and multiplying the wavelength by the microwave frequency (usually displayed on the back of the oven, typically 2450 MHz), the value of c can be calculated, "often with less than 5% error".
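In numbers: if the melted spots are measured to be about 6.1 cm apart, the wavelength is about 12.2 cm, and with the typical 2450 MHz magnetron frequency the implied speed lands close to the true value. The spacing below is an example measurement, not a universal figure.

```python
FREQUENCY_HZ = 2.45e9        # typical domestic magnetron frequency
antinode_spacing_m = 0.061   # measured distance between melted spots (example value)

wavelength = 2 * antinode_spacing_m           # antinodes sit half a wavelength apart
c_estimate = wavelength * FREQUENCY_HZ
print(f"estimated c ≈ {c_estimate:.2e} m/s")  # ≈ 2.99e8 m/s for a 6.1 cm spacing
```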
Interferometry
Interferometry is another method to find the wavelength of electromagnetic radiation for determining the speed of light. A coherent beam of light (e.g. from a laser), with a known frequency (f), is split to follow two paths and then recombined. By adjusting the path length while observing the interference pattern and carefully measuring the change in path length, the wavelength of the light (λ) can be determined. The speed of light is then calculated using the equation c = λf.
Before the advent of laser technology, coherent radio sources were used for interferometry measurements of the speed of light. Interferometric determination of wavelength becomes less precise as the wavelength increases, and the experiments were thus limited in precision by the long wavelength (~) of the radio waves. The precision can be improved by using light with a shorter wavelength, but then it becomes difficult to directly measure the frequency of the light.
One way around this problem is to start with a low frequency signal of which the frequency can be precisely measured, and from this signal progressively synthesize higher frequency signals whose frequency can then be linked to the original signal. A laser can then be locked to the frequency, and its wavelength can be determined using interferometry. This technique was due to a group at the National Bureau of Standards (which later became the National Institute of Standards and Technology). They used it in 1972 to measure the speed of light in vacuum with a fractional uncertainty of .
History
Until the early modern period, it was not known whether light travelled instantaneously or at a very fast finite speed. The first extant recorded examination of this subject was in ancient Greece. The ancient Greeks, Arabic scholars, and classical European scientists long debated this until Rømer provided the first calculation of the speed of light. Einstein's theory of special relativity postulates that the speed of light is constant regardless of one's frame of reference. Since then, scientists have provided increasingly accurate measurements.
Early history
Empedocles (c. 490–430 BCE) was the first to propose a theory of light and claimed that light has a finite speed. He maintained that light was something in motion, and therefore must take some time to travel. Aristotle argued, to the contrary, that "light is due to the presence of something, but it is not a movement". Euclid and Ptolemy advanced Empedocles' emission theory of vision, where light is emitted from the eye, thus enabling sight. Based on that theory, Heron of Alexandria argued that the speed of light must be infinite because distant objects such as stars appear immediately upon opening the eyes.
Early Islamic philosophers initially agreed with the Aristotelian view that light had no speed of travel. In 1021, Alhazen (Ibn al-Haytham) published the Book of Optics, in which he presented a series of arguments dismissing the emission theory of vision in favour of the now accepted intromission theory, in which light moves from an object into the eye. This led Alhazen to propose that light must have a finite speed, and that the speed of light is variable, decreasing in denser bodies. He argued that light is substantial matter, the propagation of which requires time, even if this is hidden from the senses. Also in the 11th century, Abū Rayhān al-Bīrūnī agreed that light has a finite speed, and observed that the speed of light is much faster than the speed of sound.
In the 13th century, Roger Bacon argued that the speed of light in air was not infinite, using philosophical arguments backed by the writing of Alhazen and Aristotle. In the 1270s, Witelo considered the possibility of light travelling at infinite speed in vacuum, but slowing down in denser bodies.
In the early 17th century, Johannes Kepler believed that the speed of light was infinite since empty space presents no obstacle to it. René Descartes argued that if the speed of light were to be finite, the Sun, Earth, and Moon would be noticeably out of alignment during a lunar eclipse. Although this argument fails when aberration of light is taken into account, the latter was not recognized until the following century. Since such misalignment had not been observed, Descartes concluded the speed of light was infinite. Descartes speculated that if the speed of light were found to be finite, his whole system of philosophy might be demolished. Despite this, in his derivation of Snell's law, Descartes assumed that some kind of motion associated with light was faster in denser media. Pierre de Fermat derived Snell's law using the opposing assumption, the denser the medium the slower light travelled. Fermat also argued in support of a finite speed of light.
First measurement attempts
In 1629, Isaac Beeckman proposed an experiment in which a person observes the flash of a cannon reflecting off a mirror about one mile (1.6 km) away. In 1638, Galileo Galilei proposed an experiment, with an apparent claim to having performed it some years earlier, to measure the speed of light by observing the delay between uncovering a lantern and its perception some distance away. He was unable to distinguish whether light travel was instantaneous or not, but concluded that if it were not, it must nevertheless be extraordinarily rapid. In 1667, the Accademia del Cimento of Florence reported that it had performed Galileo's experiment, with the lanterns separated by about one mile, but no delay was observed. The actual delay in this experiment would have been about 11 microseconds.
The first quantitative estimate of the speed of light was made in 1676 by Ole Rømer. From the observation that the periods of Jupiter's innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when receding from it, he concluded that light travels at a finite speed, and estimated that it takes light 22 minutes to cross the diameter of Earth's orbit. Christiaan Huygens combined this estimate with an estimate for the diameter of the Earth's orbit to obtain an estimate of the speed of light that is 27% lower than the actual value.
In his 1704 book Opticks, Isaac Newton reported Rømer's calculations of the finite speed of light and gave a value of "seven or eight minutes" for the time taken for light to travel from the Sun to the Earth (the modern value is 8 minutes 19 seconds). Newton queried whether Rømer's eclipse shadows were coloured. Hearing that they were not, he concluded the different colours travelled at the same speed. In 1729, James Bradley discovered stellar aberration. From this effect he determined that light must travel 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.
Connections with electromagnetism
In the 19th century Hippolyte Fizeau developed a method to determine the speed of light based on time-of-flight measurements on Earth and reported a value of . His method was improved upon by Léon Foucault who obtained a value of in 1862. In the year 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the ratio of the electromagnetic and electrostatic units of charge, 1/√(ε0μ0), by discharging a Leyden jar, and found that its numerical value was very close to the speed of light as measured directly by Fizeau. The following year Gustav Kirchhoff calculated that an electric signal in a resistanceless wire travels along the wire at this speed.
In the early 1860s, Maxwell showed that, according to the theory of electromagnetism he was working on, electromagnetic waves propagate in empty space at a speed equal to the above Weber/Kohlrausch ratio, and drawing attention to the numerical proximity of this value to the speed of light as measured by Fizeau, he proposed that light is in fact an electromagnetic wave. Maxwell backed up his claim with his own experiment published in the 1868 Philosophical Transactions which determined the ratio of the electrostatic and electromagnetic units of electricity.
"Luminiferous aether"
The wave properties of light had been well known since Thomas Young. In the 19th century, physicists believed light propagated in a medium called the aether (or ether). The electric force, by contrast, appeared more akin to the gravitational force of Newton's law, for which no transmitting medium was required. After Maxwell's theory unified light with electric and magnetic waves, it became the favored view that both light and electromagnetic waves propagate in the same aether medium, called the luminiferous aether.
It was thought at the time that empty space was filled with a background medium called the luminiferous aether in which the electromagnetic field existed. Some physicists thought that this aether acted as a preferred frame of reference for the propagation of light and therefore it should be possible to measure the motion of the Earth with respect to this medium, by measuring the isotropy of the speed of light. Beginning in the 1880s several experiments were performed to try to detect this motion, the most famous of which is the experiment performed by Albert A. Michelson and Edward W. Morley in 1887. The detected motion was found to always be nil (within observational error). Modern experiments indicate that the two-way speed of light is isotropic (the same in every direction) to within 6 nanometres per second.
Because of this experiment Hendrik Lorentz proposed that the motion of the apparatus through the aether may cause the apparatus to contract along its length in the direction of motion, and he further assumed that the time variable for moving systems must also be changed accordingly ("local time"), which led to the formulation of the Lorentz transformation. Based on Lorentz's aether theory, Henri Poincaré (1900) showed that this local time (to first order in v/c) is indicated by clocks moving in the aether, which are synchronized under the assumption of constant light speed. In 1904, he speculated that the speed of light could be a limiting velocity in dynamics, provided that the assumptions of Lorentz's theory are all confirmed. In 1905, Poincaré brought Lorentz's aether theory into full observational agreement with the principle of relativity.
Special relativity
In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observer. Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to light. This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
Increased accuracy of c and redefinition of the metre and second
In the second half of the 20th century, much progress was made in increasing the accuracy of measurements of the speed of light, first by cavity resonance techniques and later by laser interferometer techniques. These were aided by new, more precise, definitions of the metre and second. In 1950, Louis Essen determined the speed as , using cavity resonance. This value was adopted by the 12th General Assembly of the Radio-Scientific Union in 1957. In 1960, the metre was redefined in terms of the wavelength of a particular spectral line of krypton-86, and, in 1967, the second was redefined in terms of the hyperfine transition frequency of the ground state of caesium-133.
In 1972, using the laser interferometer method and the new definitions, a group at the US National Bureau of Standards in Boulder, Colorado determined the speed of light in vacuum to be c = . This was 100 times less uncertain than the previously accepted value. The remaining uncertainty was mainly related to the definition of the metre. As similar experiments found comparable results for c, the 15th General Conference on Weights and Measures in 1975 recommended using the value 299,792,458 m/s for the speed of light.
Defined as an explicit constant
In 1983 the 17th meeting of the General Conference on Weights and Measures (CGPM) found that wavelengths from frequency measurements and a given value for the speed of light are more reproducible than the previous standard. They kept the 1967 definition of the second, so the caesium hyperfine frequency would now determine both the second and the metre. To do this, they redefined the metre as "the length of the path traveled by light in vacuum during a time interval of 1/299,792,458 of a second".
As a result of this definition, the value of the speed of light in vacuum is exactly 299,792,458 m/s and has become a defined constant in the SI system of units. Improved experimental techniques that, prior to 1983, would have measured the speed of light no longer affect the known value of the speed of light in SI units, but instead allow a more precise realization of the metre by more accurately measuring the wavelength of krypton-86 and other light sources.
In 2011, the CGPM stated its intention to redefine all seven SI base units using what it calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. It proposed a new, but completely equivalent, wording of the metre's definition: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299792458 when it is expressed in the SI unit m s⁻¹." This was one of the changes that was incorporated in the 2019 revision of the SI, also termed the New SI.
| Physical sciences | Physics | null |
28742 | https://en.wikipedia.org/wiki/Supercontinent | Supercontinent | In geology, a supercontinent is the assembly of most or all of Earth's continental blocks or cratons to form a single large landmass. However, some geologists use a different definition, "a grouping of formerly dispersed continents", which leaves room for interpretation and is easier to apply to Precambrian times. To separate supercontinents from other groupings, a limit has been proposed in which a continent must include at least about 75% of the continental crust then in existence in order to qualify as a supercontinent.
Moving under the forces of plate tectonics, supercontinents have assembled and dispersed multiple times in the geologic past. According to modern definitions, a supercontinent does not exist today; the closest is the current Afro-Eurasian landmass, which covers approximately 57% of Earth's total land area. The last period in which the continental landmasses were near to one another was 336 to 175 million years ago, forming the supercontinent Pangaea. The positions of continents have been accurately determined back to the early Jurassic, shortly before the breakup of Pangaea. Pangaea's predecessor Gondwana is not considered a supercontinent under the first definition since the landmasses of Baltica, Laurentia and Siberia were separate at the time.
A future supercontinent, termed Pangaea Proxima, is hypothesized to form within the next 250 million years.
Theories
The Phanerozoic supercontinent Pangaea began to break up and this distancing continues today. Because Pangaea is the most recent of Earth's supercontinents, it is the best known and understood. Contributing to Pangaea's popularity in the classroom, its reconstruction is almost as simple as fitting together the present continents bordering the Atlantic ocean like puzzle pieces.
For the period before Pangaea, there are two contrasting models for supercontinent evolution through geological time.
Series
The first model theorizes that at least two separate supercontinents existed, comprising Vaalbara and Kenorland, with Kenorland comprising Superia and Sclavia. These parts of Neoarchean age broke off at ~2480 and , and portions of them later collided to form Nuna (Northern Europe and North America). Nuna continued to develop during the Mesoproterozoic, primarily by lateral accretion of juvenile arcs, and later collided with other land masses, forming Rodinia. Starting at ~825 Ma, Rodinia broke apart. However, before completely breaking up, some fragments of Rodinia had already come together to form Gondwana. Pangaea formed through the collision of Gondwana, Laurasia (Laurentia and Baltica), and Siberia.
Protopangea–Paleopangea
The second model (Kenorland-Arctica) is based on both palaeomagnetic and geological evidence and proposes that the continental crust comprised a single, long-lived supercontinent until its break-up during the Ediacaran period. The reconstruction is derived from the observation that palaeomagnetic poles converge to quasi-static positions for long intervals, including ~2.72–2.115 Ga and 1.35–1.13 Ga, with only small peripheral modifications to the reconstruction. During the intervening periods, the poles conform to a unified apparent polar wander path.
Although it contrasts with the first model, the first phase (Protopangea) essentially incorporates the Vaalbara and Kenorland of the first model. The explanation for the prolonged duration of the Protopangea–Paleopangea supercontinent appears to be that lid tectonics (comparable to the tectonics operating on Mars and Venus) prevailed during Precambrian times. According to this theory, plate tectonics as seen on the contemporary Earth became dominant only during the latter part of geological time. This approach has been widely criticized by many researchers for resting on an incorrect application of paleomagnetic data.
Cycles
A supercontinent cycle is the break-up of one supercontinent and the development of another, which takes place on a global scale. Supercontinent cycles are not the same as the Wilson cycle, which is the opening and closing of an individual oceanic basin. The Wilson cycle rarely synchronizes with the timing of a supercontinent cycle. However, supercontinent cycles and Wilson cycles were both involved in the creation of Pangaea and Rodinia.
Secular trends such as carbonatites, granulites, eclogites, and greenstone belt deformation events are all possible indicators of Precambrian supercontinent cyclicity, although the Protopangea–Paleopangea solution implies that Phanerozoic style of supercontinent cycles did not operate during these times. Also, there are instances where these secular trends have a weak, uneven, or absent imprint on the supercontinent cycle; secular methods for supercontinent reconstruction will produce results that have only one explanation, and each explanation for a trend must fit in with the rest.
The following table names reconstructed ancient supercontinents, using Bradley's 2011 looser definition, with an approximate timescale of millions of years ago (Ma).
Volcanism
The causes of supercontinent assembly and dispersal are thought to be driven by convection processes in Earth's mantle. Approximately 660 km into the mantle, a discontinuity occurs, affecting the surface crust through processes involving plumes and superplumes (also known as large low-shear-velocity provinces). When a slab of subducted crust is denser than the surrounding mantle, it sinks to the discontinuity. Once the slabs build up, they sink through to the lower mantle in what is known as a "slab avalanche". This displacement at the discontinuity causes the lower mantle to compensate and rise elsewhere. The rising mantle can form a plume or superplume.
Besides having compositional effects on the upper mantle by replenishing the large-ion lithophile elements, volcanism affects plate movement. The plates will be moved towards a geoidal low perhaps where the slab avalanche occurred and pushed away from the geoidal high that can be caused by the plumes or superplumes. This causes the continents to push together to form supercontinents and was evidently the process that operated to cause the early continental crust to aggregate into Protopangea.
Dispersal of supercontinents is caused by the accumulation of heat underneath the crust due to the rising of very large convection cells or plumes, and a massive heat release is thought to have caused the final break-up of Paleopangea. Accretion occurs over geoidal lows that can be caused by slab avalanches or the downgoing limbs of convection cells. Evidence of the accretion and dispersion of supercontinents is seen in the geological rock record.
The influence of known volcanic eruptions does not compare to that of flood basalts. The timing of flood basalts has corresponded with large-scale continental break-up. However, due to a lack of data on the time required to produce flood basalts, the climatic impact is difficult to quantify. The timing of a single lava flow is also undetermined. These are important factors in assessing how flood basalts influenced paleoclimate.
Plate tectonics
Global palaeogeography and plate interactions as far back as Pangaea are relatively well understood today. However, the evidence becomes more sparse further back in geologic history. Marine magnetic anomalies, passive margin match-ups, geologic interpretation of orogenic belts, paleomagnetism, paleobiogeography of fossils, and distribution of climatically sensitive strata are all methods to obtain evidence for continent locality and indicators of the environment throughout time.
The Phanerozoic (541 Ma to present) and the Precambrian ( to ) had primarily passive margins and detrital zircons (and orogenic granites), whereas the tenure of Pangaea contained few. Matching edges of continents are where passive margins form. The edges of these continents may rift. At this point, seafloor spreading becomes the driving force. Passive margins are therefore born during the break-up of supercontinents and die during supercontinent assembly. Pangaea's supercontinent cycle is a good example of the efficiency of using the presence or lack of these entities to record the development, tenure, and break-up of supercontinents. There is a sharp decrease in passive margins between 500 and during the timing of Pangaea's assembly. The tenure of Pangaea is marked by a low number of passive margins during 336 to , and its break-up is indicated accurately by an increase in passive margins.
Orogenic belts can form during the assembly of continents and supercontinents. The orogenic belts present on continental blocks are classified into three different categories and have implications for interpreting geologic bodies. Intercratonic orogenic belts are characteristic of ocean basin closure. Clear indicators of intercratonic activity include ophiolites and other oceanic materials that are present in the suture zone. Intracratonic orogenic belts occur as thrust belts and do not contain any oceanic material. However, the absence of ophiolites is not strong evidence for intracratonic belts, because the oceanic material can be squeezed out and eroded away in an intracratonic environment. The third kind of orogenic belt is a confined orogenic belt, which represents the closure of small basins. The assembly of a supercontinent would have to show intercratonic orogenic belts. However, interpretation of orogenic belts can be difficult.
The collision of Gondwana and Laurasia occurred in the late Palaeozoic. This collision created the Variscan mountain range along the equator. This 6000-km-long mountain range is usually referred to in two parts: the Hercynian mountain range of the late Carboniferous makes up the eastern part, and the western part is the Appalachian Mountains, uplifted in the early Permian. (The existence of a flat elevated plateau like the Tibetan Plateau is under debate.) The locality of the Variscan range made it influential to both the northern and southern hemispheres. The elevation of the Appalachians would greatly influence global atmospheric circulation.
Climate
Continents affect the climate of the planet drastically, with supercontinents having a larger, more prevalent influence. Continents modify global wind patterns, control ocean current paths, and have a higher albedo than the oceans. Winds are redirected by mountains, and albedo differences cause shifts in onshore winds. Higher elevation in continental interiors produces a cooler, drier climate, the phenomenon of continentality. This is seen today in Eurasia, and rock record shows evidence of continentality in the middle of Pangaea.
Glacial
The term glacial-epoch refers to a long episode of glaciation on Earth over millions of years. Glaciers have major implications on the climate, particularly through sea level change. Changes in the position and elevation of the continents, the paleolatitude and ocean circulation affect the glacial epochs. There is an association between the rifting and breakup of continents and supercontinents and glacial epochs. According to the model for Precambrian supercontinent series, the breakup of Kenorland and Rodinia was associated with the Paleoproterozoic and Neoproterozoic glacial epochs, respectively.
In contrast, the Protopangea–Paleopangea theory shows that these glaciations correlated with periods of low continental velocity, and it is concluded that a fall in tectonic and corresponding volcanic activity was responsible for these intervals of global frigidity. During the accumulation of supercontinents with times of regional uplift, glacial epochs seem to be rare with little supporting evidence. However, the lack of evidence does not allow for the conclusion that glacial epochs are not associated with the collisional assembly of supercontinents. This could just represent a preservation bias.
During the late Ordovician (~458.4 Ma), the particular configuration of Gondwana may have allowed for glaciation and high CO2 levels to occur at the same time. However, some geologists disagree and think that there was a temperature increase at this time. This increase may have been strongly influenced by the movement of Gondwana across the South Pole, which may have prevented lengthy snow accumulation. Although late Ordovician temperatures at the South Pole may have reached freezing, there were no ice sheets during the early Silurian through the late Mississippian. This agrees with the theory that continental snow can occur when the edge of a continent is near the pole. Therefore Gondwana, although located tangent to the South Pole, may have experienced glaciation along its coasts.
Precipitation
Though precipitation rates during monsoonal circulations are difficult to predict, there is evidence for a large orographic barrier within the interior of Pangaea during the late Paleozoic. The possibility of the southwest–northeast trending Appalachian-Hercynian Mountains makes the region's monsoonal circulations potentially comparable to present-day monsoonal circulations surrounding the Tibetan Plateau, which is known to positively influence the magnitude of monsoonal periods within Eurasia. It is therefore somewhat expected that lower topography in other regions of the supercontinent during the Jurassic would negatively influence precipitation variations. The breakup of supercontinents may have affected local precipitation. When any supercontinent breaks up, there will be an increase in precipitation runoff over the surface of the continental landmasses, increasing silicate weathering and the consumption of CO2.
Temperature
Even though solar radiation was reduced by 30 percent during the Archaean and by 6 percent at the Cambrian-Precambrian boundary, the Earth has experienced only three ice ages throughout the Precambrian. Erroneous conclusions are more likely to be made when models are limited to one climatic configuration (which is usually present-day).
Cold winters in continental interiors result from the rate of radiative cooling exceeding the rate of heat transport from continental rims. To raise winter temperatures within continental interiors, the rate of heat transport must increase to become greater than the rate of radiative cooling. Climate models indicate that alterations in atmospheric CO2 content and in ocean heat transport are comparatively ineffective at achieving this.
CO2 models suggest that values were low during the late Cenozoic and Carboniferous-Permian glaciations, although early Paleozoic values were much larger (more than ten times those of today). This may be due to high seafloor spreading rates after the breakup of Precambrian supercontinents and the lack of land plants as a carbon sink.
During the late Permian, it is expected that seasonal Pangaean temperatures varied drastically. Subtropical summer temperatures were as much as 6–10 degrees Celsius warmer than those of today, and mid-latitude winter temperatures fell below −30 degrees Celsius. These seasonal changes within the supercontinent were influenced by the large size of Pangaea. And, just like today, coastal regions experienced much less variation.
During the Jurassic, summer temperatures did not rise above zero degrees Celsius along the northern rim of Laurasia, which was the northernmost part of Pangaea (the southernmost portion of Pangaea was Gondwana). Ice-rafted dropstones sourced from Russia are indicators of this northern boundary. The Jurassic is thought to have been approximately 10 degrees Celsius warmer along 90 degrees East paleolongitude compared to the present temperature of central Eurasia.
Milankovitch cycles
Many studies of the Milankovitch cycles during supercontinent time periods have focused on the mid-Cretaceous. Present amplitudes of Milankovitch cycles over present-day Eurasia may be mirrored in both the southern and northern hemispheres of the supercontinent Pangaea. Climate modeling shows that summer fluctuations varied by 14–16 degrees Celsius on Pangaea, which is similar to or slightly higher than summer temperatures of Eurasia during the Pleistocene. The largest-amplitude Milankovitch cycles are expected to have been at mid- to high latitudes during the Triassic and Jurassic.
Atmospheric gases
Plate tectonics and the chemical composition of the atmosphere (specifically greenhouse gases) are the two most prevailing factors present within the geologic time scale. Continental drift influences both cold and warm climatic episodes. Atmospheric circulation and climate are strongly influenced by the location and formation of continents and supercontinents. Therefore, continental drift influences mean global temperature.
Oxygen levels of the Archaean were negligible, and today they are roughly 21 percent. It is thought that the Earth's oxygen content has risen in stages: six or seven steps that are timed very closely to the development of Earth's supercontinents.
Continents collide
Super-mountains form
Erosion of super-mountains
Large quantities of minerals and nutrients wash out to open ocean
Explosion of marine algae life (partly sourced from noted nutrients)
Mass amounts of oxygen produced during photosynthesis
The process of Earth's increase in atmospheric oxygen content is theorized to have started with the continent-continent collision of huge landmasses forming supercontinents, and therefore possibly supercontinent mountain ranges (super-mountains). These super-mountains would have eroded, and the large quantities of nutrients, including iron and phosphorus, would have washed into oceans, just as is seen happening today. The oceans would then be rich in nutrients essential to photosynthetic organisms, which would then be able to produce large amounts of oxygen. There is an apparent direct relationship between orogeny and the atmospheric oxygen content. There is also evidence for increased sedimentation concurrent with the timing of these mass oxygenation events, meaning that the organic carbon and pyrite at these times were more likely to be buried beneath sediment and therefore unable to react with the free oxygen. This sustained the atmospheric oxygen increases.
At there was an increase in molybdenum isotope fractionation. It was temporary but supports the increase in atmospheric oxygen because molybdenum isotopes require free oxygen to fractionate. Between 2.45 and the second period of oxygenation occurred, which has been called the 'great oxygenation event.' Evidence supporting this event includes the appearance of red beds (meaning that Fe3+ was being produced and became an important component in soils).
The third oxygenation stage approximately is indicated by the disappearance of iron formations. Neodymium isotopic studies suggest that iron formations are usually from continental sources, meaning that dissolved Fe and Fe2+ had to be transported during continental erosion. A rise in atmospheric oxygen prevents Fe transport, so the lack of iron formations may have been the result of an increase in oxygen. The fourth oxygenation event, roughly is based on modeled rates of sulfur isotopes from marine carbonate-associated sulfates. An increase (near doubled concentration) of sulfur isotopes, which is suggested by these models, would require an increase in the oxygen content of the deep oceans.
Between 650 and there were three increases in ocean oxygen levels; this period is the fifth oxygenation stage. One of the reasons indicating this period to be an oxygenation event is the increase in redox-sensitive molybdenum in black shales. The sixth event occurred between 360 and and was identified by models suggesting shifts in the balance of 34S in sulfates and 13C in carbonates, which were strongly influenced by an increase in atmospheric oxygen.
Proxies
Granites and detrital zircons have notably similar and episodic appearances in the rock record. Their fluctuations correlate with Precambrian supercontinent cycles. The U–Pb zircon dates from orogenic granites are among the most reliable aging determinants.
Some issues exist with relying on granite-sourced zircons, such as a lack of evenly globally sourced data and the loss of granite zircons through sedimentary coverage or plutonic consumption. Where granite zircons are less adequate, detrital zircons from sandstones fill the gaps. These detrital zircons are taken from the sands of major modern rivers and their drainage basins. Oceanic magnetic anomalies and paleomagnetic data are the primary resources used for reconstructing continent and supercontinent locations back to roughly 150 Ma.
| Physical sciences | Paleogeography | Earth science |
28743 | https://en.wikipedia.org/wiki/Slide%20rule | Slide rule | A slide rule is a hand-operated mechanical calculator consisting of slidable rulers for evaluating mathematical operations such as multiplication, division, exponents, roots, logarithms, and trigonometry. It is one of the simplest analog computers.
Slide rules exist in a diverse range of styles and generally appear in a linear, circular or cylindrical form. Slide rules manufactured for specialized fields such as aviation or finance typically feature additional scales that aid in specialized calculations particular to those fields. The slide rule is closely related to nomograms used for application-specific computations. Though similar in name and appearance to a standard ruler, the slide rule is not meant to be used for measuring length or drawing straight lines. Nor is it designed for addition or subtraction, which is usually performed using other methods, such as an abacus. Maximum accuracy for standard linear slide rules is about three decimal significant digits, while scientific notation is used to keep track of the order of magnitude of results.
English mathematician and clergyman Reverend William Oughtred and others developed the slide rule in the 17th century based on the emerging work on logarithms by John Napier. It made calculations faster and less error-prone than evaluating on paper. Before the advent of the scientific pocket calculator, it was the most commonly used calculation tool in science and engineering. The slide rule's ease of use, ready availability, and low cost caused its use to continue to grow through the 1950s and 1960s, even as desktop electronic computers were gradually introduced. But after the handheld scientific calculator was introduced in 1972 and became inexpensive in the mid-1970s, slide rules became largely obsolete, so most suppliers departed the business.
In the United States, the slide rule is colloquially called a slipstick.
Basic concepts
Each ruler's scale has graduations labeled with precomputed outputs of various mathematical functions, acting as a lookup table in which a position along the ruler represents a function's input and the graduation at that position gives the corresponding output. Calculations that can be reduced to simple addition or subtraction using those precomputed functions can be solved by aligning the two rulers and reading the approximate result.
For example, a number to be multiplied on one logarithmic-scale ruler can be aligned with the start of another such ruler to sum their logarithms. Then by applying the law of the logarithm of a product, the product of the two numbers can be read. More elaborate slide rules can perform other calculations, such as square roots, exponentials, logarithms, and trigonometric functions.
The user may estimate the location of the decimal point in the result by mentally interpolating between labeled graduations. Scientific notation is used to track the decimal point for more precise calculations. Addition and subtraction steps in a calculation are generally done mentally or on paper, not on the slide rule.
Components
Most slide rules consist of three parts:
Frame or base: two strips of the same length held parallel with a gap between.
Slide: a center strip interlocked with the frame that can move lengthwise relative to the frame.
Runner or glass: an exterior sliding piece with a hairline, also known as the "cursor".
Some slide rules ("duplex" models) have scales on both sides of the rule and slide strip, others on one side of the outer strips and both sides of the slide strip (which can usually be pulled out, flipped over and reinserted for convenience), still others on one side only ("simplex" rules). A sliding cursor with a vertical alignment line is used to find corresponding points on scales that are not adjacent to each other or, in duplex models, are on the other side of the rule. The cursor can also record an intermediate result on any of the scales.
Decades
Scales may be grouped in decades, where each decade corresponds to a range of numbers that spans a ratio of 10 (i.e. a range from 10^n to 10^(n+1)). For example, the range 1 to 10 is a single decade, and the range from 10 to 100 is another decade. Thus, single-decade scales (named C and D) range from 1 to 10 across the entire length of the slide rule, while double-decade scales (named A and B) range from 1 to 100 over the length of the slide rule.
Operation
Logarithmic scales
The following logarithmic identities transform the operations of multiplication and division to addition and subtraction, respectively:
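The identities in question are presumably the standard product and quotient rules for logarithms:

$$\log(xy) = \log x + \log y, \qquad \log\frac{x}{y} = \log x - \log y$$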
Multiplication
With two logarithmic scales, the act of positioning the top scale to start at the bottom scale's label for corresponds to shifting the top logarithmic scale by a distance of . This aligns each top scale's number at offset with the bottom scale's number at position . Because , the mark on the bottom scale at that position corresponds to . With and for example, by positioning the top scale to start at the bottom scale's , the result of the multiplication can then be read on the bottom scale under the top scale's :
While the above example lies within one decade, users must mentally account for additional zeroes when dealing with multiple decades. For example, the answer to is found by first positioning the top scale to start above the 2 of the bottom scale, and then reading the marking 1.4 off the bottom two-decade scale where is on the top scale:
But since the is above the second set of numbers that number must be multiplied by . Thus, even though the answer directly reads , the correct answer is .
For an example with even larger numbers, to multiply , the top scale is again positioned to start at the on the bottom scale. Since represents , all numbers in that scale are multiplied by . Thus, any answer in the second set of numbers is multiplied by . Since in the top scale represents , the answer must additionally be multiplied by . The answer directly reads . Multiply by and then by to get the actual answer: .
In general, the on the top is moved to a factor on the bottom, and the answer is read off the bottom where the other factor is on the top. This works because the distances from the mark are proportional to the logarithms of the marked values.
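A minimal numeric sketch of this procedure (illustrative only; function and variable names are mine, not part of the original description): positions along a single-decade scale are proportional to the base-10 logarithm of the marked values, so multiplication reduces to adding two offsets and reading back the value, with the power of ten tracked separately by the user.

```python
import math

def scale_position(x):
    """Offset of the mark x along a single-decade (C/D) scale, as a fraction
    of the scale length; the leading power of ten is ignored, just as a real
    slide rule ignores the decimal point."""
    return math.log10(x) % 1.0

def slide_rule_multiply(a, b):
    """Simulate multiplying a and b: add the two log offsets, wrap around if
    the sum runs off the end of the scale, and restore the order of magnitude
    that the rule itself does not track."""
    offset = scale_position(a) + scale_position(b)
    mantissa = 10 ** (offset % 1.0)          # what the D scale would show
    exponent = (math.floor(math.log10(a)) + math.floor(math.log10(b))
                + math.floor(offset))        # bookkeeping done in the user's head
    return mantissa * 10 ** exponent

print(slide_rule_multiply(2, 7))     # ~14.0
print(slide_rule_multiply(1.5, 30))  # ~45.0
```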
Division
The illustration below demonstrates the computation of . The on the top scale is placed over the on the bottom scale. The resulting quotient, , can then be read below the top scale's :
There is more than one method for doing division, and the method presented here has the advantage that the final result cannot be off-scale, because one has a choice of using the at either end.
With more complex calculations involving multiple factors in the numerator and denominator of an expression, movement of the scales can be minimized by alternating divisions and multiplications. Thus would be computed as and the result, , can be read beneath the in the top scale in the figure above, without the need to register the intermediate result for .
Solving proportions
Because pairs of numbers that are aligned on the logarithmic scales form constant ratios, no matter how the scales are offset, slide rules can be used to generate equivalent fractions that solve proportion and percent problems.
For example, setting 7.5 on one scale over 10 on the other scale, the user can see that at the same time 1.5 is over 2, 2.25 is over 3, 3 is over 4, 3.75 is over 5, 4.5 is over 6, and 6 is over 8, among other pairs. For a real-life situation where 750 represents a whole 100%, these readings could be interpreted to suggest that 150 is 20%, 225 is 30%, 300 is 40%, 375 is 50%, 450 is 60%, and 600 is 80%.
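A small sketch of the same idea in code (illustrative only): once the 7.5:10 ratio is set, every aligned pair of readings shares that ratio, which is how the percentage interpretations above follow.

```python
ratio = 7.5 / 10          # set 7.5 on one scale over 10 on the other

# every aligned pair (x, y) satisfies x / y == ratio
for y in [2, 3, 4, 5, 6, 8]:
    x = ratio * y
    print(f"{x:g} sits over {y:g}")

# interpreted with 750 as the whole (100%):
whole = 750
for part in [150, 225, 300, 375, 450, 600]:
    print(f"{part} is {part / whole:.0%} of {whole}")
```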
Other scales
In addition to the logarithmic scales, some slide rules have other mathematical functions encoded on other auxiliary scales. The most popular are trigonometric, usually sine and tangent, common logarithm (log) (for taking the log of a value on a multiplier scale), natural logarithm (ln) and exponential (ex) scales. Others feature scales for calculating hyperbolic functions. On linear rules, the scales and their labeling are highly standardized, with variation usually occurring only in terms of which scales are included and in what order.
The Binary Slide Rule manufactured by Gilson in 1931 performed an addition and subtraction function limited to fractions.
Roots and powers
There are single-decade (C and D), double-decade (A and B), and triple-decade (K) scales. To compute , for example, locate x on the D scale and read its square on the A scale. Inverting this process allows square roots to be found, and similarly for the powers 3, 1/3, 2/3, and 3/2. Care must be taken when the base, x, is found in more than one place on its scale. For instance, there are two nines on the A scale; to find the square root of nine, use the first one; the second one gives the square root of 90.
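A sketch of why the double-decade A scale reads squares (an illustrative model rather than a description of any particular rule): a value x sits at offset log10(x) on the D scale, and the A scale compresses two decades into the same length, so the value under the cursor on A is 10 raised to twice that offset.

```python
import math

def d_offset(x):
    """Cursor position (0..1) of x on the single-decade D scale."""
    return math.log10(x) % 1.0

def a_reading(offset):
    """Value on the double-decade A scale at the same cursor position:
    A runs 1..100 over the same length, so it reads 10**(2*offset)."""
    return 10 ** (2 * offset)

for x in [2, 3, 9]:
    print(x, "squared reads as", round(a_reading(d_offset(x)), 3))
# Inverting the lookup gives square roots; the A scale carries a 9 in each
# of its two decades (9 and 90), which is why the correct one must be
# chosen when taking a square root.
```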
For problems, use the LL scales. When several LL scales are present, use the one with x on it. First, align the leftmost 1 on the C scale with x on the LL scale. Then, find y on the C scale and go down to the LL scale with x on it. That scale will indicate the answer. If y is "off the scale", locate and square it using the A and B scales as described above. Alternatively, use the rightmost 1 on the C scale, and read the answer off the next higher LL scale. For example, aligning the rightmost 1 on the C scale with 2 on the LL2 scale, 3 on the C scale lines up with 8 on the LL3 scale.
To extract a cube root using a slide rule with only C/D and A/B scales, align 1 on the B cursor with the base number on the A scale (taking care as always to distinguish between the lower and upper halves of the A scale). Slide the slide until the number on the D scale which is against 1 on the C cursor is the same as the number on the B cursor which is against the base number on the A scale. (Examples: A 8, B 2, C 1, D 2; A 27, B 3, C 1, D 3.)
Roots of quadratic equations
Quadratic equations of the form can be solved by first reducing the equation to the form (where and ), and then aligning the index ("1") of the C scale to the value on the D scale. The cursor is then moved along the rule until a position is found where the numbers on the CI and D scales add up to . These two values are the roots of the equation.
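A numeric sketch of the cursor search described above (illustrative; names and the brute-force sweep are mine): with the C index set to the constant term on the D scale, the D and CI readings under the cursor always multiply to that constant, so sweeping the cursor until they sum to the negated linear coefficient yields the two roots.

```python
def quadratic_roots_slide_rule(p, q, steps=200_000):
    """Mimic the cursor sweep for x**2 + p*x + q = 0 with two positive real
    roots: try pairs (d, q/d) whose product is fixed at q and keep the pair
    whose sum is closest to -p."""
    best = None
    for i in range(1, steps):
        d = i * (-p) / steps       # candidate reading on the D scale
        ci = q / d                 # the paired reading on the CI scale
        error = abs(d + ci - (-p))
        if best is None or error < best[0]:
            best = (error, d, ci)
    return best[1], best[2]

# x**2 - 7x + 12 = 0  ->  roots 3 and 4
print(quadratic_roots_slide_rule(p=-7, q=12))
```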
Future value of money
The LLN scales can be used to compute and compare the cost or return on a fixed rate loan or investment.
The simplest case is for continuously compounded interest. Example: Taking D as the interest rate in percent,
slide the index (the "1" at the right or left end of the scale) of C to the percent on D. The corresponding value on LL2 directly below the index will be the multiplier for 10 cycles of interest (typically years). The value on LL2 below 2 on the C scale will be the multiplier after 20 cycles, and so on.
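A hedged sketch of the arithmetic this lookup performs, assuming the usual convention that the LL2 scale reads e^(x/10) against a reading of x on the D scale (that convention is my assumption, not stated in the text): continuous compounding at r percent per cycle gives a multiplier of e^(r·n/100) after n cycles.

```python
import math

def continuous_growth_multiplier(rate_percent, cycles):
    """Multiplier on the principal after `cycles` periods of continuously
    compounded interest at `rate_percent` per period."""
    return math.exp(rate_percent / 100 * cycles)

# 4 percent per year, continuously compounded:
print(continuous_growth_multiplier(4, 10))   # what LL2 shows under the index (~1.49)
print(continuous_growth_multiplier(4, 20))   # what LL2 shows under 2 on C (~2.23)
```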
Trigonometry
The S, T, and ST scales are used for trig functions and multiples of trig functions, for angles in degrees.
For angles from around 5.7 up to 90 degrees, sines are found by comparing the S scale with C (or D) scale. (On many closed-body rules the S scale relates to the A and B scales instead and covers angles from around 0.57 up to 90 degrees; what follows must be adjusted appropriately.) The S scale has a second set of angles (sometimes in a different color), which run in the opposite direction, and are used for cosines. Tangents are found by comparing the T scale with the C (or D) scale for angles less than 45 degrees. For angles greater than 45 degrees the CI scale is used. Common forms such as can be read directly from x on the S scale to the result on the D scale, when the C scale index is set at k. For angles below 5.7 degrees, sines, tangents, and radians are approximately equal, and are found on the ST or SRT (sines, radians, and tangents) scale, or simply divided by 57.3 degrees/radian. Inverse trigonometric functions are found by reversing the process.
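A quick numeric check of the small-angle rule mentioned above (illustrative): below about 5.7 degrees, the sine, the tangent, and the angle in radians agree to slide-rule precision, which is why a single ST scale can serve all three.

```python
import math

for degrees in [1, 3, 5.7]:
    radians = degrees / 57.3          # the 57.3 degrees-per-radian shortcut
    print(degrees,
          round(math.sin(math.radians(degrees)), 4),
          round(math.tan(math.radians(degrees)), 4),
          round(radians, 4))
# e.g. at 5.7 degrees: sin = 0.0993, tan = 0.0998, degrees/57.3 = 0.0995
```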
Many slide rules have S, T, and ST scales marked with degrees and minutes (e.g. some Keuffel and Esser models (Doric duplex 5" models, for example), late-model Teledyne-Post Mannheim-type rules). So-called decitrig models use decimal fractions of degrees instead.
Logarithms and exponentials
Base-10 logarithms and exponentials are found using the L scale, which is linear. Some slide rules have a Ln scale, which is for base e. Logarithms to any other base can be calculated by reversing the procedure for calculating powers of a number. For example, log2 values can be determined by lining up either leftmost or rightmost 1 on the C scale with 2 on the LL2 scale, finding the number whose logarithm is to be calculated on the corresponding LL scale, and reading the log2 value on the C scale.
Addition and subtraction
Addition and subtraction aren't typically performed on slide rules, but are possible using either of the following two techniques:
Converting addition and subtraction to division (required for the C and D or comparable scales):
Exploits the identity that the quotient of two variables, plus (or minus) one, times the divisor equals their sum (or difference):
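Reconstructed from that wording, the identity is presumably

$$x \pm y = \left(\frac{x}{y} \pm 1\right) y$$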
This is similar to the addition/subtraction technique used for high-speed electronic circuits with a logarithmic number system in specialized computer applications like the Gravity Pipe (GRAPE) supercomputer and hidden Markov models.
Using a linear L scale (available on some models):
After sliding the cursor right (for addition) or left (for subtraction) and returning the slide to 0, the result can be read.
Generalizations
Using (almost) any strictly monotonic scales, other calculations can also be made with one movement. For example, reciprocal scales can be used for the equality (calculating parallel resistances, harmonic mean, etc.), and quadratic scales can be used to solve .
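A brief sketch of the two equalities presumably meant here (my reconstruction of the omitted formulas): reciprocal scales solve 1/z = 1/x + 1/y, and quadratic scales solve z² = x² + y².

```python
def parallel_resistance(x, y):
    """1/z = 1/x + 1/y, the combination read off two reciprocal scales."""
    return 1 / (1 / x + 1 / y)

def hypotenuse(x, y):
    """z**2 = x**2 + y**2, the combination read off two quadratic scales."""
    return (x ** 2 + y ** 2) ** 0.5

print(parallel_resistance(6, 3))  # 2.0
print(hypotenuse(3, 4))           # 5.0
```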
Physical design
Standard linear rules
The width of the slide rule is quoted in terms of the nominal width of the scales. Scales on the most common "10-inch" models are actually 25 cm, as they were made to metric standards, though some rules offer slightly extended scales to simplify manipulation when a result overflows. Pocket rules are typically 5 inches (12 cm). Models a couple of metres (yards) wide were made to be hung in classrooms for teaching purposes.
Typically the divisions mark a scale to a precision of two significant figures, and the user estimates the third figure. Some high-end slide rules have magnifier cursors that make the markings easier to see. Such cursors can effectively double the accuracy of readings, permitting a 10-inch slide rule to serve as well as a 20-inch model.
Various other conveniences have been developed. Trigonometric scales are sometimes dual-labeled, in black and red, with complementary angles, the so-called "Darmstadt" style. Duplex slide rules often duplicate some of the scales on the back. Scales are often "split" to get higher accuracy. For example, instead of reading from an A scale to a D scale to find a square root, it may be possible to read from a D scale to an R1 scale running from 1 to square root of 10 or to an R2 scale running from square root of 10 to 10, where having more subdivisions marked can result in being able to read an answer with one more significant digit.
Circular slide rules
Circular slide rules come in two basic types, one with two cursors, and another with a free dish and one cursor. The dual cursor versions perform multiplication and division by holding a constant angle between the cursors as they are rotated around the dial. The onefold cursor version operates more like the standard slide rule through the appropriate alignment of the scales.
The basic advantage of a circular slide rule is that the widest dimension of the tool was reduced by a factor of about 3 (i.e. by π). For example, a circular would have a maximum precision approximately equal to a ordinary slide rule. Circular slide rules also eliminate "off-scale" calculations, because the scales were designed to "wrap around"; they never have to be reoriented when results are near 1.0—the rule is always on scale. However, for non-cyclical non-spiral scales such as S, T, and LL's, the scale width is narrowed to make room for end margins.
Circular slide rules are mechanically more rugged and smoother-moving, but their scale alignment precision is sensitive to the centering of a central pivot; a minute off-centre of the pivot can result in a worst case alignment error. The pivot does prevent scratching of the face and cursors. The highest accuracy scales are placed on the outer rings. Rather than "split" scales, high-end circular rules use spiral scales for more complex operations like log-of-log scales. One eight-inch premium circular rule had a 50-inch spiral log-log scale. Around 1970, an inexpensive model from B. C. Boykin (Model 510) featured 20 scales, including 50-inch C-D (multiplication) and log scales. The RotaRule featured a friction brake for the cursor.
The main disadvantages of circular slide rules are the difficulty in locating figures along a dish, and limited number of scales. Another drawback of circular slide rules is that less-important scales are closer to the center, and have lower precisions. Most students learned slide rule use on the linear slide rules, and did not find reason to switch.
One slide rule remaining in daily use around the world is the E6B. This is a circular slide rule first created in the 1930s for aircraft pilots to help with dead reckoning. With the aid of scales printed on the frame it also helps with such miscellaneous tasks as converting time, distance, speed, and temperature values, compass errors, and calculating fuel use. The so-called "prayer wheel" is still available in flight shops, and remains widely used. While GPS has reduced the use of dead reckoning for aerial navigation, and handheld calculators have taken over many of its functions, the E6B remains widely used as a primary or backup device and the majority of flight schools demand that their students have some degree of proficiency in its use.
Proportion wheels are simple circular slide rules used in graphic design to calculate aspect ratios. Lining up the original and desired size values on the inner and outer wheels will display their ratio as a percentage in a small window. Though not as common since the advent of computerized layout, they are still in use.
In 1952, Swiss watch company Breitling introduced a pilot's wristwatch with an integrated circular slide rule specialized for flight calculations: the Breitling Navitimer. The Navitimer circular rule, referred to by Breitling as a "navigation computer", featured airspeed, rate/time of climb/descent, flight time, distance, and fuel consumption functions, as well as kilometer—nautical mile and gallon—liter fuel amount conversion functions.
Cylindrical slide rules
There are two main types of cylindrical slide rules: those with helical scales such as the Fuller calculator, the Otis King and the Bygrave slide rule, and those with bars, such as the Thacher and some Loga models. In either case, the advantage is a much longer scale, and hence potentially greater precision, than afforded by a straight or circular rule.
Materials
Traditionally slide rules were made out of hard wood such as mahogany or boxwood with cursors of glass and metal. At least one high precision instrument was made of steel.
In 1895, a Japanese firm, Hemmi, started to make slide rules from celluloid-clad bamboo, which had the advantages of being dimensionally stable, strong, and naturally self-lubricating. These bamboo slide rules were introduced in Sweden in September, 1933, and probably only a little earlier in Germany.
Scales were also made of celluloid or other polymers, or printed on aluminium. Later cursors were molded from acrylics or polycarbonate, sometimes with Teflon bearing surfaces.
All premium slide rules had numbers and scales deeply engraved, and then filled with paint or other resin. Painted or imprinted slide rules were viewed as inferior, because the markings could wear off or be chemically damaged. Nevertheless, Pickett, an American slide rule company, made only printed scale rules. Premium slide rules included clever mechanical catches so the rule would not fall apart by accident, and bumpers to protect the scales and cursor from rubbing on tabletops.
History
The slide rule was invented around 1620–1630, shortly after John Napier's publication of the concept of the logarithm. In 1620 Edmund Gunter of Oxford developed a calculating device with a single logarithmic scale; with additional measuring tools it could be used to multiply and divide. In c. 1622, William Oughtred of Cambridge combined two handheld Gunter rules to make a device that is recognizably the modern slide rule. Oughtred became involved in a vitriolic controversy over priority, with his one-time student Richard Delamain and the prior claims of Wingate. Oughtred's ideas were only made public in publications of his student William Forster in 1632 and 1653.
In 1677, Henry Coggeshall created a two-foot folding rule for timber measure, called the Coggeshall slide rule, expanding the slide rule's use beyond mathematical inquiry.
In 1722, Warner introduced the two- and three-decade scales, and in 1755 Everard included an inverted scale; a slide rule containing all of these scales is usually known as a "polyphase" rule.
In 1815, Peter Mark Roget invented the log log slide rule, which included a scale displaying the logarithm of the logarithm. This allowed the user to directly perform calculations involving roots and exponents. This was especially useful for fractional powers.
In 1821, Nathaniel Bowditch described in the American Practical Navigator a "sliding rule" that contained scaled trigonometric functions on the fixed part and a line of log-sines and log-tans on the slider, used to solve navigation problems.
In 1845, Paul Cameron of Glasgow introduced a nautical slide rule capable of answering navigation questions, including right ascension and declination of the sun and principal stars.
Modern form
A more modern form of slide rule was created in 1859 by French artillery lieutenant Amédée Mannheim, who was fortunate both in having his rule made by a firm of national reputation, and its adoption by the French Artillery. Mannheim's rule had two major modifications that made it easier to use than previous general-purpose slide rules. Such rules had four basic scales, A, B, C, and D, and D was the only single-decade logarithmic scale; C had two decades, like A and B. Most operations were done on the A and B scales; D was only used for finding squares and square roots.
Mannheim changed the C scale to a single-decade scale and performed most operations with C and D instead of A and B. Because the C and D scales were single-decade, they could be read more precisely, so the rule's results could be more accurate. The change also made it easier to include squares and square roots as part of a larger calculation. Mannheim's rule also had a cursor, unlike almost all preceding rules, so any of the scales could be easily and accurately compared across the rule width. The "Mannheim rule" became the standard slide rule arrangement for the later 19th century and remained a common standard throughout the slide-rule era.
The growth of the engineering profession during the later 19th century drove widespread slide-rule use, beginning in Europe and eventually taking hold in the United States as well. The duplex rule was invented by William Cox in 1891 and was produced by Keuffel and Esser Co. of New York.
In 1881, the American inventor Edwin Thacher introduced his cylindrical rule, which had a much longer scale than standard linear rules and thus could calculate to higher precision, about four to five significant digits. However, the Thacher rule was quite expensive, as well as being non-portable, so it was used in far more limited numbers than conventional slide rules.
Astronomical work also required precise computations, and, in 19th-century Germany, a steel slide rule about two meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places.
In the 1920s, the novelist and engineer Nevil Shute Norway (he called his autobiography Slide Rule) was Chief Calculator on the design of the British R100 airship for Vickers Ltd. from 1924. The stress calculations for each transverse frame required computations by a pair of calculators (people) using Fuller's cylindrical slide rules for two or three months. The simultaneous equation contained up to seven unknown quantities, took about a week to solve, and had to be repeated with a different selection of slack wires if the initial guess about which of the eight radial wires were slack proved wrong. After months of labour filling perhaps fifty foolscap sheets with calculations "the truth stood revealed (and) produced a satisfaction almost amounting to a religious experience".
In 1937, physicist Lucy Hayner designed and constructed a circular slide rule in Braille.
Throughout the 1950s and 1960s, the slide rule was the symbol of the engineer's profession in the same way the stethoscope is that of the medical profession.
Aluminium Pickett-brand slide rules were carried on Project Apollo space missions. The model N600-ES owned by Buzz Aldrin that flew with him to the Moon on Apollo 11 was sold at auction in 2007. The model N600-ES taken along on Apollo 13 in 1970 is owned by the National Air and Space Museum.
Some engineering students and engineers carried ten-inch slide rules in belt holsters, a common sight on campuses even into the mid-1970s. Until the advent of the pocket digital calculator, students also might keep a ten- or twenty-inch rule for precision work at home or the office while carrying a five-inch pocket slide rule around with them.
In 2004, education researchers David B. Sher and Dean C. Nataro conceived a new type of slide rule based on prosthaphaeresis, an algorithm for rapidly computing products that predates logarithms. However, there has been little practical interest in constructing one beyond the initial prototype.
Specialized calculators
Slide rules have often been specialized to varying degrees for their field of use, such as excise, proof calculation, engineering, navigation, etc., and some slide rules are extremely specialized for very narrow applications. For example, the John Rabone & Sons 1892 catalog lists a "Measuring Tape and Cattle Gauge", a device to estimate the weight of a cow from its measurements.
There were many specialized slide rules for photographic applications. For example, the actinograph of Hurter and Driffield was a two-slide boxwood, brass, and cardboard device for estimating exposure from time of day, time of year, and latitude.
Specialized slide rules were invented for various forms of engineering, business and banking. These often had common calculations directly expressed as special scales, for example loan calculations, optimal purchase quantities, or particular engineering equations. For example, the Fisher Controls company distributed a customized slide rule adapted to solving the equations used for selecting the proper size of industrial flow control valves.
Pilot balloon slide rules were used by meteorologists in weather services to determine the upper wind velocities from an ascending hydrogen or helium-filled pilot balloon.
The E6-B is a circular slide rule used by pilots and navigators.
Circular slide rules to estimate ovulation dates and fertility are known as wheel calculators.
A Department of Defense publication from 1962 infamously included a special-purpose circular slide rule for calculating blast effects, overpressure, and radiation exposure from a given yield of an atomic bomb.
Decline
The importance of the slide rule began to diminish as electronic computers, a new but rare resource in the 1950s, became more widely available to technical workers during the 1960s.
The first step away from slide rules was the introduction of relatively inexpensive electronic desktop scientific calculators. These included the Wang Laboratories LOCI-2, introduced in 1965, which used logarithms for multiplication and division; and the Hewlett-Packard HP 9100A, introduced in 1968. Both of these were programmable and provided exponential and logarithmic functions; the HP had trigonometric functions (sine, cosine, and tangent) and hyperbolic trigonometric functions as well. The HP used the CORDIC (coordinate rotation digital computer) algorithm, which allows for calculation of trigonometric functions using only shift and add operations. This method facilitated the development of ever smaller scientific calculators.
As with mainframe computing, the availability of these desktop machines did not significantly affect the ubiquitous use of the slide rule, until cheap hand-held scientific electronic calculators became available in the mid-1970s, at which point it rapidly declined. The pocket-sized Hewlett-Packard HP-35 scientific calculator was the first handheld device of its type, but it cost US$395 in 1972. This was justifiable for some engineering professionals, but too expensive for most students.
Around 1974, lower-cost handheld electronic scientific calculators started to make slide rules largely obsolete. By 1975, basic four-function electronic calculators could be purchased for less than $50, and by 1976 the TI-30 scientific calculator was sold for less than $25 ($ adjusted for inflation).
1980 was the final year of the University Interscholastic League (UIL) competition in Texas to use slide rules. The UIL had originally been organized in 1910 to administer literary events, but had become the governing body of school sports events as well.
Comparison to electronic digital calculators
Even during their heyday, slide rules never caught on with the general public. Addition and subtraction are not well-supported operations on slide rules and doing a calculation on a slide rule tends to be slower than on a calculator. This led engineers to use mathematical equations that favored operations that were easy on a slide rule over more accurate but complex functions; these approximations could lead to inaccuracies and mistakes. On the other hand, the spatial, manual operation of slide rules cultivates in the user an intuition for numerical relationships and scale that people who have used only digital calculators often lack. A slide rule will also display all the terms of a calculation along with the result, thus eliminating uncertainty about what calculation was actually performed. It has thus been compared with reverse Polish notation (RPN) implemented in electronic calculators.
A slide rule requires the user to separately compute the order of magnitude of the answer to position the decimal point in the results. For example, 1.5 × 30 (which equals 45) will show the same result as × 0.03 (which equals ). This separate calculation forces the user to keep track of magnitude in short-term memory (which is error-prone), keep notes (which is cumbersome) or reason about it in every step (which distracts from the other calculation requirements).
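A sketch of the bookkeeping the user must do by hand (illustrative only; the helper names are mine): separate every operand into a mantissa, which the rule handles, and a power of ten, which the user tracks separately.

```python
import math

def split(x):
    """Split x into (mantissa in [1, 10), exponent), as in scientific notation."""
    exponent = math.floor(math.log10(abs(x)))
    return x / 10 ** exponent, exponent

def tracked_multiply(a, b):
    (ma, ea), (mb, eb) = split(a), split(b)
    mantissa = ma * mb               # what the slide rule produces
    exponent = ea + eb               # what the user keeps in their head
    if mantissa >= 10:               # result ran into the next decade
        mantissa, exponent = mantissa / 10, exponent + 1
    return mantissa, exponent

print(tracked_multiply(1.5, 30))          # (4.5, 1)  -> 45
print(tracked_multiply(1_500_000, 0.03))  # (4.5, 4)  -> 45000
```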
The typical arithmetic precision of a slide rule is about three significant digits, compared to many digits on digital calculators. As order of magnitude gets the greatest prominence when using a slide rule, users are less likely to make errors of false precision.
When performing a sequence of multiplications or divisions by the same number, the answer can often be determined by merely glancing at the slide rule without any manipulation. This can be especially useful when calculating percentages (e.g. for test scores) or when comparing prices (e.g. in dollars per kilogram). Multiple speed-time-distance calculations can be performed hands-free at a glance with a slide rule. Other useful linear conversions such as pounds to kilograms can be easily marked on the rule and used directly in calculations.
Being entirely mechanical, a slide rule does not depend on grid electricity or batteries. Mechanical imprecision in slide rules that were poorly constructed or warped by heat or use will lead to errors.
Many sailors keep slide rules as backups for navigation in case of electric failure or battery depletion on long route segments. Slide rules are still commonly used in aviation, particularly for smaller planes. They are being replaced only by integrated, special purpose and expensive flight computers, and not general-purpose calculators. The E6B circular slide rule used by pilots has been in continuous production and remains available in a variety of models. Some wrist watches designed for aviation use still feature slide rule scales to permit quick calculations. The Citizen Skyhawk AT and the Seiko Flightmaster SNA411 are two notable examples.
Contemporary use
Even in the 21st century, some people prefer a slide rule over an electronic calculator as a practical computing device. Others keep their old slide rules out of a sense of nostalgia, or collect them as a hobby.
A popular collectible model is the Keuffel & Esser Deci-Lon, a premium scientific and engineering slide rule available both in a ten-inch (25 cm) "regular" (Deci-Lon 10) and a five-inch "pocket" (Deci-Lon 5) variant. Another prized American model is the eight-inch (20 cm) Scientific Instruments circular rule. Of European rules, Faber-Castell's high-end models are the most popular among collectors.
Although a great many slide rules are circulating on the market, specimens in good condition tend to be expensive. Many rules found for sale on online auction sites are damaged or have missing parts, and the seller may not know enough to supply the relevant information. Replacement parts are scarce, expensive, and generally available only for separate purchase on individual collectors' web sites. The Keuffel and Esser rules from the period up to about 1950 are particularly problematic, because the end-pieces on the cursors, made of celluloid, tend to chemically break down over time. Methods of preserving plastic may be used to slow the deterioration of some older slide rules, and 3D printing may be used to recreate missing or irretrievably broken cursor parts.
There are still a handful of sources for brand new slide rules. The Concise Company of Tokyo, which began as a manufacturer of circular slide rules in July 1954, continues to make and sell them today. In September 2009, on-line retailer ThinkGeek introduced its own brand of straight slide rules, described as "faithful replica[s]" that were "individually hand tooled". These were no longer available in 2012. In addition, Faber-Castell had a number of slide rules in inventory, available for international purchase through their web store, until mid 2018. Proportion wheels are still used in graphic design.
Various slide rule simulator apps are available for Android and iOS-based smart phones and tablets.
Specialized slide rules such as the E6B used in aviation, and gunnery slide rules used in laying artillery, are still used, though no longer on a routine basis. These rules are used as part of the teaching and instruction process: in learning to use them, the student also learns the principles behind the calculations, and it allows the student to use these instruments as a backup in the event that the modern electronics in general use fail.
Collections
The MIT Museum in Cambridge, Massachusetts, has a collection of hundreds of slide rules, nomograms, and mechanical calculators. The Keuffel and Esser Company collection, from the slide rule manufacturer formerly located in Hoboken, New Jersey, was donated to MIT around 2005, substantially expanding existing holdings. Selected items from the collection are usually on display at the museum.
The International Slide Rule Museum is claimed to be "[the world's] most extensive resource for all things concerning slide rules and logarithmic calculators". The museum's Web page includes extensive literature relative to slide rules in its "Slide Rule Library" section.
| Technology | Basics_4 | null |
28748 | https://en.wikipedia.org/wiki/Speed | Speed | In kinematics, the speed (commonly referred to as v) of an object is the magnitude of the change of its position over time or the magnitude of the change of its position per unit of time; it is thus a non-negative scalar quantity. The average speed of an object in an interval of time is the distance travelled by the object divided by the duration of the interval; the instantaneous speed is the limit of the average speed as the duration of the time interval approaches zero. Speed is the magnitude of velocity (a vector), which indicates additionally the direction of motion.
Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second (m/s), but the most common unit of speed in everyday usage is the kilometre per hour (km/h) or, in the US and the UK, miles per hour (mph). For air and marine travel, the knot is commonly used.
The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in vacuum c = metres per second (approximately or ). Matter cannot quite reach the speed of light, as this would require an infinite amount of energy. In relativity physics, the concept of rapidity replaces the classical idea of speed.
Definition
Historical definition
Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time. In equation form, that is
v = d/t, where v is speed, d is distance, and t is time. A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed (a car might travel along a street at 50 km/h, slow to 0 km/h, and then reach 30 km/h).
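A trivial numeric restatement of the cyclist example (illustrative):

```python
distance_m = 30            # metres covered
time_s = 2                 # seconds taken
speed = distance_m / time_s
print(speed, "m/s")        # 15.0 m/s
```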
Instantaneous speed
Speed at some instant, or assumed constant during a very short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at a constant speed, but if it did go at that speed for a full hour, it would travel 50 km. If the vehicle continued at that speed for half an hour, it would cover half that distance (25 km). If it continued for only one minute, it would cover about 833 m.
In mathematical terms, the instantaneous speed is defined as the magnitude of the instantaneous velocity , that is, the derivative of the position with respect to time:
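The definition being described is presumably

$$v = |\boldsymbol{v}| = \left|\frac{d\boldsymbol{r}}{dt}\right|$$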
If is the length of the path (also known as the distance) travelled until time , the speed equals the time derivative of :
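In that case the speed is presumably given by

$$v = \frac{ds}{dt}$$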
In the special case where the velocity is constant (that is, constant speed in a straight line), this can be simplified to v = s/t. The average speed over a finite time interval is the total distance travelled divided by the time duration.
Average speed
Different from instantaneous speed, average speed is defined as the total distance covered divided by the time interval. For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour. Likewise, if 320 kilometres are travelled in 4 hours, the average speed is also 80 kilometres per hour. When a distance in kilometres (km) is divided by a time in hours (h), the result is in kilometres per hour (km/h).
Average speed does not describe the speed variations that may have taken place during shorter time intervals (as it is the entire distance covered divided by the total time of travel), and so average speed is often quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated by rearranging the definition to
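The rearranged definition presumably intended is

$$d = \bar{v}\,t$$

where d is the distance travelled, \bar{v} the average speed, and t the duration of travel.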
Using this equation for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres.
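A short numeric restatement of that trip example (illustrative):

```python
average_speed_kmh = 80
trip_hours = 4
distance_km = average_speed_kmh * trip_hours
print(distance_km, "km")   # 320 km
```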
Expressed in graphical language, the slope of a tangent line at any point of a distance-time graph is the instantaneous speed at this point, while the slope of a chord line of the same graph is the average speed during the time interval covered by the chord. The average speed of an object is v_av = s ÷ t.
Difference between speed and velocity
Speed denotes only how fast an object is moving, whereas velocity describes both how fast and in which direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified. However, if the car is said to move at 60 km/h to the north, its velocity has now been specified.
The big difference can be discerned when considering movement around a circle. When something moves in a circular path and returns to its starting point, its average velocity is zero, but its average speed is found by dividing the circumference of the circle by the time taken to move around the circle. This is because the average velocity is calculated by considering only the displacement between the starting and end points, whereas the average speed considers only the total distance travelled.
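A numeric illustration of the circular-path case (the radius and lap time are assumed values, not from the text): one full lap returns to the starting point, so the displacement and hence the average velocity are zero, while the average speed is the circumference divided by the lap time.

```python
import math

radius_m = 10.0
lap_time_s = 20.0

circumference = 2 * math.pi * radius_m
average_speed = circumference / lap_time_s     # ~3.14 m/s
displacement = 0.0                             # back at the starting point
average_velocity = displacement / lap_time_s   # 0.0 m/s

print(round(average_speed, 2), average_velocity)
```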
Tangential speed
Units
Units of speed include:
metres per second (symbol m s−1 or m/s), the SI derived unit;
kilometres per hour (symbol km/h);
miles per hour (symbol mi/h or mph);
knots (nautical miles per hour, symbol kn or kt);
feet per second (symbol fps or ft/s);
Mach number (dimensionless), speed divided by the speed of sound;
in natural units (dimensionless), speed divided by the speed of light in vacuum (symbol c = 299,792,458 m/s).
Examples of different speeds
Psychology
According to Jean Piaget, the intuition for the notion of speed in humans precedes that of duration, and is based on the notion of outdistancing. Piaget studied this subject inspired by a question asked to him in 1928 by Albert Einstein: "In what order do children acquire the concepts of time and speed?" Children's early concept of speed is based on "overtaking", taking only temporal and spatial orders into consideration, specifically: "A moving object is judged to be more rapid than another when at a given moment the first object is behind and a moment or so later ahead of the other object."
| Physical sciences | Classical mechanics | null |
28758 | https://en.wikipedia.org/wiki/Spacetime | Spacetime | In physics, spacetime, also called the space-time continuum, is a mathematical model that fuses the three dimensions of space and the one dimension of time into a single four-dimensional continuum. Spacetime diagrams are useful in visualizing and understanding relativistic effects, such as how different observers perceive where and when events occur.
Until the turn of the 20th century, the assumption had been that the three-dimensional geometry of the universe (its description in terms of locations, shapes, distances, and directions) was distinct from time (the measurement of when events occur within the universe). However, space and time took on new meanings with the Lorentz transformation and special theory of relativity.
In 1908, Hermann Minkowski presented a geometric interpretation of special relativity that fused time and the three spatial dimensions into a single four-dimensional continuum now known as Minkowski space. This interpretation proved vital to the general theory of relativity, wherein spacetime is curved by mass and energy.
Fundamentals
Definitions
Non-relativistic classical mechanics treats time as a universal quantity of measurement that is uniform throughout, is separate from space, and is agreed on by all observers. Classical mechanics assumes that time has a constant rate of passage, independent of the observer's state of motion, or anything external. It assumes that space is Euclidean, that is, that space follows the geometry of common sense.
In the context of special relativity, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer. General relativity provides an explanation of how gravitational fields can slow the passage of time for an object as seen by an observer outside the field.
In ordinary space, a position is specified by three numbers, known as dimensions. In the Cartesian coordinate system, these are often called x, y and z. A point in spacetime is called an event, and requires four numbers to be specified: the three-dimensional location in space, plus the position in time (Fig. 1). An event is represented by a set of coordinates x, y, z and t. Spacetime is thus four-dimensional.
Unlike the analogies used in popular writings to explain events, such as firecrackers or sparks, mathematical events have zero duration and represent a single point in spacetime. Although it is possible to be in motion relative to the popping of a firecracker or a spark, it is not possible for an observer to be in motion relative to an event.
The path of a particle through spacetime can be considered to be a sequence of events. The series of events can be linked together to form a curve that represents the particle's progress through spacetime. That path is called the particle's world line.
Mathematically, spacetime is a manifold, which is to say, it appears locally "flat" near each point in the same way that, at small enough scales, the surface of a globe appears to be flat. A scale factor, $c$ (conventionally called the speed of light), relates distances measured in space to distances measured in time. The magnitude of this scale factor (nearly 300,000 kilometres in space being equivalent to one second in time), along with the fact that spacetime is a manifold, implies that at ordinary, non-relativistic speeds and at ordinary, human-scale distances, there is little that humans might observe that is noticeably different from what they might observe if the world were Euclidean. It was only with the advent of sensitive scientific measurements in the mid-1800s, such as the Fizeau experiment and the Michelson–Morley experiment, that puzzling discrepancies began to be noted between observations and predictions based on the implicit assumption of Euclidean space.
In special relativity, an observer will, in most cases, mean a frame of reference from which a set of objects or events is being measured. This usage differs significantly from the ordinary English meaning of the term. Reference frames are inherently nonlocal constructs, and according to this usage of the term, it does not make sense to speak of an observer as having a location.
In Fig. 1-1, imagine that the frame under consideration is equipped with a dense lattice of clocks, synchronized within this reference frame, that extends indefinitely throughout the three dimensions of space. Any specific location within the lattice is not important. The latticework of clocks is used to determine the time and position of events taking place within the whole frame. The term observer refers to the whole ensemble of clocks associated with one inertial frame of reference.
In this idealized case, every point in space has a clock associated with it, and thus the clocks register each event instantly, with no time delay between an event and its recording. A real observer will see a delay between the emission of a signal and its detection due to the speed of light. To synchronize the clocks, in the data reduction following an experiment, the time when a signal is received will be corrected to reflect its actual time were it to have been recorded by an idealized lattice of clocks.
In many books on special relativity, especially older ones, the word "observer" is used in the more ordinary sense of the word. It is usually clear from context which meaning has been adopted.
Physicists distinguish between what one measures or observes, after one has factored out signal propagation delays, versus what one visually sees without such corrections. Failing to understand the difference between what one measures and what one sees is the source of much confusion among students of relativity.
History
By the mid-1800s, various experiments such as the observation of the Arago spot and differential measurements of the speed of light in air versus water were considered to have proven the wave nature of light as opposed to a corpuscular theory. Propagation of waves was then assumed to require the existence of a waving medium; in the case of light waves, this was considered to be a hypothetical luminiferous aether. The various attempts to establish the properties of this hypothetical medium yielded contradictory results. For example, the Fizeau experiment of 1851, conducted by French physicist Hippolyte Fizeau, demonstrated that the speed of light in flowing water was less than the sum of the speed of light in air plus the speed of the water by an amount dependent on the water's index of refraction.
Among other issues, the dependence of the partial aether-dragging implied by this experiment on the index of refraction (which is dependent on wavelength) led to the unpalatable conclusion that aether simultaneously flows at different speeds for different colors of light. The Michelson–Morley experiment of 1887 (Fig. 1-2) showed no differential influence of Earth's motions through the hypothetical aether on the speed of light, and the most likely explanation, complete aether dragging, was in conflict with the observation of stellar aberration.
George Francis FitzGerald in 1889, and Hendrik Lorentz in 1892, independently proposed that material bodies traveling through the fixed aether were physically affected by their passage, contracting in the direction of motion by an amount that was exactly what was necessary to explain the negative results of the Michelson–Morley experiment. No length changes occur in directions transverse to the direction of motion.
By 1904, Lorentz had expanded his theory such that he had arrived at equations formally identical with those that Einstein was to derive later, i.e. the Lorentz transformation. As a theory of dynamics (the study of forces and torques and their effect on motion), his theory assumed actual physical deformations of the physical constituents of matter. Lorentz's equations predicted a quantity that he called local time, with which he could explain the aberration of light, the Fizeau experiment and other phenomena.
Henri Poincaré was the first to combine space and time into spacetime. He argued in 1898 that the simultaneity of two events is a matter of convention. In 1900, he recognized that Lorentz's "local time" is actually what is indicated by moving clocks by applying an explicitly operational definition of clock synchronization assuming constant light speed. In 1900 and 1904, he suggested the inherent undetectability of the aether by emphasizing the validity of what he called the principle of relativity. In 1905/1906 he mathematically perfected Lorentz's theory of electrons in order to bring it into accordance with the postulate of relativity.
While discussing various hypotheses on Lorentz invariant gravitation, he introduced the innovative concept of a 4-dimensional spacetime by defining various four vectors, namely four-position, four-velocity, and four-force. He did not pursue the 4-dimensional formalism in subsequent papers, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding "that three-dimensional language seems the best suited to the description of our world". Even as late as 1909, Poincaré continued to describe the dynamical interpretation of the Lorentz transform.
In 1905, Albert Einstein analyzed special relativity in terms of kinematics (the study of moving bodies without reference to forces) rather than dynamics. His results were mathematically equivalent to those of Lorentz and Poincaré. He obtained them by recognizing that the entire theory can be built upon two postulates: the principle of relativity and the principle of the constancy of light speed. His work was filled with vivid imagery involving the exchange of light signals between clocks in motion, careful measurements of the lengths of moving rods, and other such examples.
Einstein in 1905 superseded previous attempts of an electromagnetic mass–energy relation by introducing the general equivalence of mass and energy, which was instrumental for his subsequent formulation of the equivalence principle in 1907, which declares the equivalence of inertial and gravitational mass. By using the mass–energy equivalence, Einstein showed that the gravitational mass of a body is proportional to its energy content, which was one of the early results in developing general relativity. While it would appear that he did not at first think geometrically about spacetime, in the further development of general relativity, Einstein fully incorporated the spacetime formalism.
When Einstein published in 1905, another of his competitors, his former mathematics professor Hermann Minkowski, had also arrived at most of the basic elements of special relativity. Max Born recounted a meeting he had with Minkowski, seeking to be Minkowski's student/collaborator:
Minkowski had been concerned with the state of electrodynamics after Michelson's disruptive experiments at least since the summer of 1905, when Minkowski and David Hilbert led an advanced seminar attended by notable physicists of the time to study the papers of Lorentz, Poincaré et al. Minkowski saw Einstein's work as an extension of Lorentz's, and was most directly influenced by Poincaré.
On 5 November 1907 (a little more than a year before his death), Minkowski introduced his geometric interpretation of spacetime in a lecture to the Göttingen Mathematical society with the title, The Relativity Principle (Das Relativitätsprinzip). On 21 September 1908, Minkowski presented his talk, Space and Time (Raum und Zeit), to the German Society of Scientists and Physicians. The opening words of Space and Time include Minkowski's statement that "Henceforth, space for itself, and time for itself shall completely reduce to a mere shadow, and only some sort of union of the two shall preserve independence." Space and Time included the first public presentation of spacetime diagrams (Fig. 1-4), and included a remarkable demonstration that the concept of the invariant interval (discussed below), along with the empirical observation that the speed of light is finite, allows derivation of the entirety of special relativity.
The spacetime concept and the Lorentz group are closely connected to certain types of sphere, hyperbolic, or conformal geometries and their transformation groups already developed in the 19th century, in which invariant intervals analogous to the spacetime interval are used.
Einstein, for his part, was initially dismissive of Minkowski's geometric interpretation of special relativity, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). However, in order to complete his search for general relativity that started in 1907, the geometric interpretation of relativity proved to be vital. In 1916, Einstein fully acknowledged his indebtedness to Minkowski, whose interpretation greatly facilitated the transition to general relativity. Since there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime.
Spacetime in special relativity
Spacetime interval
In three dimensions, the distance $d$ between two points can be defined using the Pythagorean theorem:
$d^2 = (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2.$
Although two viewers may measure the x, y, and z position of the two points using different coordinate systems, the distance between the points will be the same for both, assuming that they are measuring using the same units. The distance is "invariant".
In special relativity, however, the distance between two points is no longer the same if measured by two different observers, when one of the observers is moving, because of Lorentz contraction. The situation is even more complicated if the two points are separated in time as well as in space. For example, if one observer sees two events occur at the same place, but at different times, a person moving with respect to the first observer will see the two events occurring at different places, because the moving point of view sees itself as stationary, and the position of the event as receding or approaching. Thus, a different measure must be used to measure the effective "distance" between two events.
In four-dimensional spacetime, the analog to distance is the interval. Although time comes in as a fourth dimension, it is treated differently than the spatial dimensions. Minkowski space hence differs in important respects from four-dimensional Euclidean space. The fundamental reason for merging space and time into spacetime is that space and time are separately not invariant, which is to say that, under the proper conditions, different observers will disagree on the length of time between two events (because of time dilation) or the distance between the two events (because of length contraction). Special relativity provides a new invariant, called the spacetime interval, which combines distances in space and in time. All observers who measure the time and distance between any two events will end up computing the same spacetime interval. Suppose an observer measures two events as being separated in time by $\Delta t$ and a spatial distance $\Delta x$. Then the squared spacetime interval $(\Delta s)^2$ between the two events that are separated by a distance $\Delta x$ in space and by $c\,\Delta t$ in the $ct$-coordinate is:
$(\Delta s)^2 = (c\,\Delta t)^2 - (\Delta x)^2,$
or for three space dimensions,
$(\Delta s)^2 = (c\,\Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2.$
The constant $c$, the speed of light, converts time units (like seconds) into space units (like meters). The squared interval is a measure of separation between events A and B that are time separated and in addition space separated either because there are two separate objects undergoing events, or because a single object in space is moving inertially between its events. The separation interval is the difference between the square of the spatial distance traveled by a light signal in the time interval $\Delta t$ and the square of the spatial distance separating event B from event A. If the event separation is due to a light signal, then this difference vanishes and $\Delta s = 0$.
When the events considered are infinitesimally close to each other, we may write
$ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2.$
In a different inertial frame, say with coordinates $(t', x', y', z')$, the spacetime interval $ds'$ can be written in the same form as above. Because of the constancy of the speed of light, the light events in all inertial frames belong to zero interval, $ds = ds' = 0$. For any other infinitesimal event where $ds \neq 0$, one can prove that $ds^2 = ds'^2$,
which in turn upon integration leads to $s = s'$. The invariance of the spacetime interval between the same events for all inertial frames of reference is one of the fundamental results of the special theory of relativity.
Although for brevity, one frequently sees interval expressions written without deltas, including in most of the following discussion, it should be understood that in general, $x$ means $\Delta x$, etc. We are always concerned with differences of spatial or temporal coordinate values belonging to two events, and since there is no preferred origin, single coordinate values have no essential meaning.
The equation above is similar to the Pythagorean theorem, except with a minus sign between the $(c\,\Delta t)^2$ and the $(\Delta x)^2$ terms. The spacetime interval is the quantity $(\Delta s)^2$, not $\Delta s$ itself. The reason is that unlike distances in Euclidean geometry, intervals in Minkowski spacetime can be negative. Rather than deal with square roots of negative numbers, physicists customarily regard $(\Delta s)^2$ as a distinct symbol in itself, rather than the square of something.
Note: There are two sign conventions in use in the relativity literature:
$(\Delta s)^2 = (c\,\Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2$
and
$(\Delta s)^2 = -(c\,\Delta t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$
These sign conventions are associated with the metric signatures $(+\,-\,-\,-)$ and $(-\,+\,+\,+)$. A minor variation is to place the time coordinate last rather than first. Both conventions are widely used within the field of study.
In the following discussion, we use the first convention.
In general $(\Delta s)^2$ can assume any real number value. If $(\Delta s)^2$ is positive, the spacetime interval is referred to as timelike. Since spatial distance traversed by any massive object is always less than distance traveled by the light for the same time interval, positive intervals are always timelike. If $(\Delta s)^2$ is negative, the spacetime interval is said to be spacelike. Spacetime intervals are equal to zero when $\Delta x = \pm c\,\Delta t$. In other words, the spacetime interval between two events on the world line of something moving at the speed of light is zero. Such an interval is termed lightlike or null. A photon arriving in our eye from a distant star will not have aged, despite having (from our perspective) spent years in its passage.
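As a minimal illustration (a Python sketch, not from the source; the sample numbers are made up), the sign of the squared interval with the (+ − − −) convention classifies the separation between two events:

```python
# Classify the separation between two events using the squared spacetime
# interval (Delta s)^2 = (c Delta t)^2 - (Delta x)^2 with the (+ - - -) convention.

C = 299_792_458.0  # speed of light, m/s

def interval_squared(dt, dx):
    """Squared spacetime interval for time separation dt (s) and spatial separation dx (m)."""
    return (C * dt) ** 2 - dx ** 2

def classify(dt, dx):
    s2 = interval_squared(dt, dx)
    if s2 > 0:
        return "timelike"
    if s2 < 0:
        return "spacelike"
    return "lightlike (null)"

# Example events (made-up numbers): 1 s apart in time, various spatial separations.
for dx in (1.0e8, C * 1.0, 1.0e9):
    print(f"dx = {dx:.3e} m: {classify(1.0, dx)}")
```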
A spacetime diagram is typically drawn with only a single space and a single time coordinate. Fig. 2-1 presents a spacetime diagram illustrating the world lines (i.e. paths in spacetime) of two photons, A and B, originating from the same event and going in opposite directions. In addition, C illustrates the world line of a slower-than-light-speed object. The vertical time coordinate is scaled by $c$ so that it has the same units (meters) as the horizontal space coordinate. Since photons travel at the speed of light, their world lines have a slope of ±1. In other words, every meter that a photon travels to the left or right requires approximately 3.3 nanoseconds of time.
Reference frames
To gain insight in how spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-2, two Galilean reference frames (i.e. conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime") belongs to a second observer O′.
The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
Frame S′ moves in the x-direction of frame S with a constant velocity v as measured in frame S.
The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.
Fig. 2-3a redraws Fig. 2-2 in a different orientation. Fig. 2-3b illustrates a relativistic spacetime diagram from the viewpoint of observer O. Since S and S′ are in standard configuration, their origins coincide at times t = 0 in frame S and t′ = 0 in frame S′. The ct′ axis passes through the events in frame S′ which have x′ = 0. But the points with x′ = 0 are moving in the x-direction of frame S with velocity v, so that they are not coincident with the ct axis at any time other than zero. Therefore, the ct′ axis is tilted with respect to the ct axis by an angle θ given by
$\tan\theta = \frac{v}{c} = \beta.$
The x′ axis is also tilted with respect to the x axis. To determine the angle of this tilt, we recall that the slope of the world line of a light pulse is always ±1. Fig. 2-3c presents a spacetime diagram from the viewpoint of observer O′. Event P represents the emission of a light pulse at x′ = 0, ct′ = −a. The pulse is reflected from a mirror situated a distance a from the light source (event Q), and returns to the light source at x′ = 0, ct′ = a (event R).
The same events P, Q, R are plotted in Fig. 2-3b in the frame of observer O. The light paths have slopes = 1 and −1, so that △PQR forms a right triangle with PQ and QR both at 45 degrees to the x and ct axes. Since OP = OQ = OR, the angle between x′ and x must also be θ.
While the rest frame has space and time axes that meet at right angles, the moving frame is drawn with axes that meet at an acute angle. The frames are actually equivalent. The asymmetry is due to unavoidable distortions in how spacetime coordinates can map onto a Cartesian plane, and should be considered no stranger than the manner in which, on a Mercator projection of the Earth, the relative sizes of land masses near the poles (Greenland and Antarctica) are highly exaggerated relative to land masses near the Equator.
Light cone
In Fig. 2–4, event O is at the origin of a spacetime diagram, and the two diagonal lines represent all events that have zero spacetime interval with respect to the origin event. These two lines form what is called the light cone of the event O, since adding a second spatial dimension (Fig. 2-5) makes the appearance that of two right circular cones meeting with their apices at O. One cone extends into the future (t>0), the other into the past (t<0).
A light (double) cone divides spacetime into separate regions with respect to its apex. The interior of the future light cone consists of all events that are separated from the apex by more time (temporal distance) than necessary to cross their spatial distance at lightspeed; these events comprise the timelike future of the event O. Likewise, the timelike past comprises the interior events of the past light cone. So in timelike intervals Δct is greater than Δx, making timelike intervals positive.
The region exterior to the light cone consists of events that are separated from the event O by more space than can be crossed at lightspeed in the given time. These events comprise the so-called spacelike region of the event O, denoted "Elsewhere" in Fig. 2-4. Events on the light cone itself are said to be lightlike (or null separated) from O. Because of the invariance of the spacetime interval, all observers will assign the same light cone to any given event, and thus will agree on this division of spacetime.
The light cone has an essential role within the concept of causality. It is possible for a not-faster-than-light-speed signal to travel from the position and time of O to the position and time of D (Fig. 2-4). It is hence possible for event O to have a causal influence on event D. The future light cone contains all the events that could be causally influenced by O. Likewise, it is possible for a not-faster-than-light-speed signal to travel from the position and time of A, to the position and time of O. The past light cone contains all the events that could have a causal influence on O. In contrast, assuming that signals cannot travel faster than the speed of light, any event, such as B or C, in the spacelike region (Elsewhere), can neither affect event O nor be affected by event O by means of such signalling. Under this assumption any causal relationship between event O and any events in the spacelike region of a light cone is excluded.
Relativity of simultaneity
All observers will agree that for any given event, an event within the given event's future light cone occurs after the given event. Likewise, for any given event, an event within the given event's past light cone occurs before the given event. The before–after relationship observed for timelike-separated events remains unchanged no matter what the reference frame of the observer, i.e. no matter how the observer may be moving. The situation is quite different for spacelike-separated events. Fig. 2-4 was drawn from the reference frame of an observer moving at $v = 0$. From this reference frame, event C is observed to occur after event O, and event B is observed to occur before event O.
From a different reference frame, the orderings of these non-causally-related events can be reversed. In particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally related. The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity.
Fig. 2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneity. The events in spacetime are invariant, but the coordinate frames transform as discussed above for Fig. 2-3. The three events are simultaneous in the reference frame of one observer; in the reference frame of an observer moving in one direction the events appear to occur in one order, while in the reference frame of an observer moving in the opposite direction they appear to occur in the reverse order. The white line represents a plane of simultaneity being moved from the past of the observer to the future of the observer, highlighting events residing on it. The gray area is the light cone of the observer, which remains invariant.
A spacelike spacetime interval gives the same distance that an observer would measure if the events being measured were simultaneous to the observer. A spacelike spacetime interval hence provides a measure of proper distance, i.e. the true distance $= \sqrt{-(\Delta s)^2}$. Likewise, a timelike spacetime interval gives the same measure of time as would be presented by the cumulative ticking of a clock that moves along a given world line. A timelike spacetime interval hence provides a measure of the proper time $= \sqrt{(\Delta s)^2}/c$.
Invariant hyperbola
In Euclidean space (having spatial dimensions only), the set of points equidistant (using the Euclidean metric) from some point form a circle (in two dimensions) or a sphere (in three dimensions). In Minkowski spacetime (having one temporal and one spatial dimension), the points at some constant spacetime interval away from the origin (using the Minkowski metric) form curves given by the two equations
$(ct)^2 - x^2 = \pm s^2,$
with $s^2$ some positive real constant. These equations describe two families of hyperbolae in an x–ct spacetime diagram, which are termed invariant hyperbolae.
In Fig. 2-7a, each magenta hyperbola connects all events having some fixed spacelike separation from the origin, while the green hyperbolae connect events of equal timelike separation.
The magenta hyperbolae, which cross the x axis, are timelike curves, which is to say that these hyperbolae represent actual paths that can be traversed by (constantly accelerating) particles in spacetime: between any two events on one hyperbola a causality relation is possible, because the inverse of the slope (representing the necessary speed) for all secants is less than $c$. On the other hand, the green hyperbolae, which cross the ct axis, are spacelike curves because all intervals along these hyperbolae are spacelike intervals: no causality is possible between any two points on one of these hyperbolae, because all secants represent speeds larger than $c$.
Fig. 2-7b reflects the situation in Minkowski spacetime (one temporal and two spatial dimensions) with the corresponding hyperboloids. The invariant hyperbolae displaced by spacelike intervals from the origin generate hyperboloids of one sheet, while the invariant hyperbolae displaced by timelike intervals from the origin generate hyperboloids of two sheets.
The (1+2)-dimensional boundary between space- and time-like hyperboloids, established by the events forming a zero spacetime interval to the origin, is made up by degenerating the hyperboloids to the light cone. In (1+1)-dimensions the hyperbolae degenerate to the two grey 45°-lines depicted in Fig. 2-7a.
Time dilation and length contraction
Fig. 2-8 illustrates the invariant hyperbola for all events that can be reached from the origin in a proper time of 5 meters (approximately 16.7 nanoseconds). Different world lines represent clocks moving at different speeds. A clock that is stationary with respect to the observer has a world line that is vertical, and the elapsed time measured by the observer is the same as the proper time. For a clock traveling at 0.3 c, the elapsed time measured by the observer is 5.24 meters (17.5 ns), while for a clock traveling at 0.7 c, the elapsed time measured by the observer is 7.00 meters (23.3 ns).
This illustrates the phenomenon known as time dilation. Clocks that travel faster take longer (in the observer frame) to tick out the same amount of proper time, and they travel further along the x–axis within that proper time than they would have without time dilation. The measurement of time dilation by two observers in different inertial reference frames is mutual. If observer O measures the clocks of observer O′ as running slower in his frame, observer O′ in turn will measure the clocks of observer O as running slower.
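The numbers quoted above can be checked with a short calculation (a Python sketch, not from the source, assuming that the elapsed coordinate time equals γ times the proper time, which is what the invariant hyperbola construction expresses):

```python
# Elapsed time (in metres of light-travel time) measured by the stationary
# observer for a clock that ticks out 5 metres of proper time, at various speeds.
from math import sqrt

proper_time = 5.0  # metres (proper time along the moving clock's world line)

for beta in (0.0, 0.3, 0.7):
    gamma = 1.0 / sqrt(1.0 - beta ** 2)   # Lorentz factor
    elapsed = gamma * proper_time          # coordinate time in the observer's frame
    print(f"v = {beta:.1f} c: elapsed time = {elapsed:.2f} m")

# Prints 5.00 m, 5.24 m and 7.00 m, matching the values read off the
# invariant hyperbola in Fig. 2-8.
```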
Length contraction, like time dilation, is a manifestation of the relativity of simultaneity. Measurement of length requires measurement of the spacetime interval between two events that are simultaneous in one's frame of reference. But events that are simultaneous in one frame of reference are, in general, not simultaneous in other frames of reference.
Fig. 2-9 illustrates the motions of a 1 m rod that is traveling at 0.5 c along the x axis. The edges of the blue band represent the world lines of the rod's two endpoints. The invariant hyperbola illustrates events separated from the origin by a spacelike interval of 1 m. The endpoints O and B measured when t′ = 0 are simultaneous events in the S′ frame. But to an observer in frame S, events O and B are not simultaneous. To measure length, the observer in frame S measures the endpoints of the rod as projected onto the x-axis along their world lines. The projection of the rod's world sheet onto the x axis yields the foreshortened length OC.
(not illustrated) Drawing a vertical line through A so that it intersects the x′ axis demonstrates that, even as OB is foreshortened from the point of view of observer O, OA is likewise foreshortened from the point of view of observer O′. In the same way that each observer measures the other's clocks as running slow, each observer measures the other's rulers as being contracted.
In regards to mutual length contraction, Fig. 2-9 illustrates that the primed and unprimed frames are mutually rotated by a hyperbolic angle (analogous to ordinary angles in Euclidean geometry). Because of this rotation, the projection of a primed meter-stick onto the unprimed x-axis is foreshortened, while the projection of an unprimed meter-stick onto the primed x′-axis is likewise foreshortened.
Mutual time dilation and the twin paradox
Mutual time dilation
Mutual time dilation and length contraction tend to strike beginners as inherently self-contradictory concepts. If an observer in frame S measures a clock, at rest in frame S′, as running slower than his own, while S′ is moving at speed v in S, then the principle of relativity requires that an observer in frame S′ likewise measures a clock in frame S, moving at speed −v in S′, as running slower than hers. How two clocks can each run slower than the other is an important question that "goes to the heart of understanding special relativity."
This apparent contradiction stems from not correctly taking into account the different settings of the necessary, related measurements. These settings allow for a consistent explanation of the only apparent contradiction. It is not about the abstract ticking of two identical clocks, but about how to measure in one frame the temporal distance of two ticks of a moving clock. It turns out that in mutually observing the duration between ticks of clocks, each moving in the respective frame, different sets of clocks must be involved. In order to measure in frame S the tick duration of a moving clock W′ (at rest in S′), one uses two additional, synchronized clocks W1 and W2 at rest in two arbitrarily fixed points in S with the spatial distance d.
Two events can be defined by the condition "two clocks are simultaneously at one place", i.e., when W′ passes each W1 and W2. For both events the two readings of the collocated clocks are recorded. The difference of the two readings of W1 and W2 is the temporal distance of the two events in S, and their spatial distance is d. The difference of the two readings of W′ is the temporal distance of the two events in S′. In S′ these events are only separated in time, they happen at the same place in S′. Because of the invariance of the spacetime interval spanned by these two events, and the nonzero spatial separation d in S, the temporal distance in S′ must be smaller than the one in S: the smaller temporal distance between the two events, resulting from the readings of the moving clock W′, belongs to the slower running clock W′.
Conversely, for judging in frame S′ the temporal distance of two events on a moving clock W (at rest in S), one needs two clocks at rest in S′.
In this comparison the clock W is moving with velocity −v. Recording again the four readings for the events, defined by "two clocks simultaneously at one place", results in the analogous temporal distances of the two events, now temporally and spatially separated in S′, and only temporally separated but collocated in S. To keep the spacetime interval invariant, the temporal distance in S must be smaller than in S′, because of the spatial separation of the events in S′: now clock W is observed to run slower.
The necessary recordings for the two judgements, with "one moving clock" and "two clocks at rest" in respectively S or S′, involve two different sets, each with three clocks. Since there are different sets of clocks involved in the measurements, there is no inherent necessity that the measurements be reciprocally "consistent" such that, if one observer measures the moving clock to be slow, the other observer measures the first observer's clock to be fast.
Fig. 2-10 illustrates the previous discussion of mutual time dilation with Minkowski diagrams. The upper picture reflects the measurements as seen from frame S "at rest" with unprimed, rectangular axes, and frame S′ "moving with v > 0", coordinatized by primed, oblique axes, slanted to the right; the lower picture shows frame S′ "at rest" with primed, rectangular coordinates, and frame S "moving with −v < 0", with unprimed, oblique axes, slanted to the left.
Each line drawn parallel to a spatial axis (x, x′) represents a line of simultaneity. All events on such a line have the same time value (ct, ct′). Likewise, each line drawn parallel to a temporal axis (ct, ct′) represents a line of equal spatial coordinate values (x, x′).
One may designate in both pictures the origin O as the event where the respective "moving clock" is collocated with the "first clock at rest" in both comparisons. Obviously, for this event the readings on both clocks in both comparisons are zero. As a consequence, the worldlines of the moving clocks are the ct′-axis slanted to the right (upper picture, clock W′) and the ct-axis slanted to the left (lower picture, clock W). The worldlines of W1 and W′1 are the corresponding vertical time axes (ct in the upper picture, and ct′ in the lower picture).
In the upper picture the place for W2 is taken to be Ax > 0, and thus the worldline (not shown in the pictures) of this clock intersects the worldline of the moving clock (the ct′-axis) in the event labelled A, where "two clocks are simultaneously at one place". In the lower picture the place for W′2 is taken to be Cx′ < 0, and so in this measurement the moving clock W passes W′2 in the event C.
In the upper picture the ct-coordinate At of the event A (the reading of W2) is labeled B, thus giving the elapsed time between the two events, measured with W1 and W2, as OB. For a comparison, the length of the time interval OA, measured with W′, must be transformed to the scale of the ct-axis. This is done by the invariant hyperbola (see also Fig. 2-8) through A, connecting all events with the same spacetime interval from the origin as A. This yields the event C on the ct-axis, and obviously: OC < OB, the "moving" clock W′ runs slower.
To show the mutual time dilation immediately in the upper picture, the event D may be constructed as the event at x′ = 0 (the location of clock W′ in S′), that is simultaneous to C (OC has equal spacetime interval as OA) in S′. This shows that the time interval OD is longer than OA, showing that the "moving" clock runs slower.
In the lower picture the frame S is moving with velocity −v in the frame S′ at rest. The worldline of clock W is the ct-axis (slanted to the left), the worldline of W′1 is the vertical ct′-axis, and the worldline of W′2 is the vertical through event C, with ct′-coordinate D. The invariant hyperbola through event C scales the time interval OC to OA, which is shorter than OD; also, B is constructed (similar to D in the upper pictures) as simultaneous to A in S, at x = 0. The result OB > OC corresponds again to above.
The word "measure" is important. In classical physics an observer cannot affect an observed object, but the object's state of motion can affect the observer's observations of the object.
Twin paradox
Many introductions to special relativity illustrate the differences between Galilean relativity and special relativity by posing a series of "paradoxes". These paradoxes are, in fact, ill-posed problems, resulting from our unfamiliarity with velocities comparable to the speed of light. The remedy is to solve many problems in special relativity and to become familiar with its so-called counter-intuitive predictions. The geometrical approach to studying spacetime is considered one of the best methods for developing a modern intuition.
The twin paradox is a thought experiment involving identical twins, one of whom makes a journey into space in a high-speed rocket, returning home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin observes the other twin as moving, and so at first glance, it would appear that each should find the other to have aged less. The twin paradox sidesteps the justification for mutual time dilation presented above by avoiding the requirement for a third clock. Nevertheless, the twin paradox is not a true paradox because it is easily understood within the context of special relativity.
The impression that a paradox exists stems from a misunderstanding of what special relativity states. Special relativity does not declare all frames of reference to be equivalent, only inertial frames. The traveling twin's frame is not inertial during periods when she is accelerating. Furthermore, the difference between the twins is observationally detectable: the traveling twin needs to fire her rockets to be able to return home, while the stay-at-home twin does not. Even with no acceleration or deceleration, i.e. using one inertial frame O for the constant, high-velocity outward journey and another inertial frame I for the constant, high-velocity inward journey, the sum of the elapsed time in those frames (O and I) is shorter than the elapsed time in the stationary inertial frame S. Thus acceleration and deceleration are not the cause of the shorter elapsed time during the outward and inward journey. Instead, the use of two different constant, high-velocity inertial frames for the outward and inward journeys is the real cause of the shorter total elapsed time. Granted, if the same twin has to travel the outward and inward legs of the journey and safely switch from the outward to the inward leg, acceleration and deceleration are required. If the travelling twin could ride the high-velocity outward inertial frame and instantaneously switch to the high-velocity inward inertial frame, the example would still work. The point is that the real reason should be stated clearly: the asymmetry is due to the comparison of the sum of elapsed times in two different inertial frames (O and I) to the elapsed time in a single inertial frame S.
These distinctions should result in a difference in the twins' ages. The spacetime diagram of Fig. 2-11 presents the simple case of a twin going straight out along the x axis and immediately turning back. From the standpoint of the stay-at-home twin, there is nothing puzzling about the twin paradox at all. The proper time measured along the traveling twin's world line from O to C, plus the proper time measured from C to B, is less than the stay-at-home twin's proper time measured from O to A to B. More complex trajectories require integrating the proper time between the respective events along the curve (i.e. the path integral) to calculate the total amount of proper time experienced by the traveling twin.
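For such curved trajectories, the traveling twin's proper time can be obtained numerically (a minimal Python sketch, not from the source; the velocity profile is an invented example) by integrating dτ = dt·√(1 − v(t)²/c²) along the world line:

```python
# Numerically integrate proper time along a world line with a given speed profile.
# The velocity profile v(t) below is an invented example, not from the article.
from math import sqrt

C = 1.0                 # work in units where c = 1
T_TOTAL = 10.0          # coordinate time for the whole trip (in the stay-at-home frame)
N = 100_000             # integration steps

def v(t):
    """Speed of the traveling twin at coordinate time t: constant 0.8 c here."""
    return 0.8 * C

dt = T_TOTAL / N
tau = 0.0
for i in range(N):
    t = (i + 0.5) * dt
    tau += dt * sqrt(1.0 - (v(t) / C) ** 2)   # d(tau) = dt * sqrt(1 - v^2/c^2)

print(f"stay-at-home twin ages {T_TOTAL:.1f} units; traveller ages {tau:.2f} units")
# With a constant 0.8 c this reproduces the closed form T * sqrt(1 - 0.64) = 6.0.
```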
Complications arise if the twin paradox is analyzed from the traveling twin's point of view.
Weiss's nomenclature, designating the stay-at-home twin as Terence and the traveling twin as Stella, is hereafter used.
Stella is not in an inertial frame. Given this fact, it is sometimes incorrectly stated that full resolution of the twin paradox requires general relativity:
Although general relativity is not required to analyze the twin paradox, application of the Equivalence Principle of general relativity does provide some additional insight into the subject. Stella is not stationary in an inertial frame. Analyzed in Stella's rest frame, she is motionless for the entire trip. When she is coasting her rest frame is inertial, and Terence's clock will appear to run slow. But when she fires her rockets for the turnaround, her rest frame is an accelerated frame and she experiences a force which is pushing her as if she were in a gravitational field. Terence will appear to be high up in that field and because of gravitational time dilation, his clock will appear to run fast, so much so that the net result will be that Terence has aged more than Stella when they are back together. The theoretical arguments predicting gravitational time dilation are not exclusive to general relativity. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence, including Newton's theory.
Gravitation
This introductory section has focused on the spacetime of special relativity, since it is the easiest to describe. Minkowski spacetime is flat, takes no account of gravity, is uniform throughout, and serves as nothing more than a static background for the events that take place in it. The presence of gravity greatly complicates the description of spacetime. In general relativity, spacetime is no longer a static background, but actively interacts with the physical systems that it contains. Spacetime curves in the presence of matter, propagates waves, bends light, and exhibits a host of other phenomena. A few of these phenomena are described in the later sections of this article.
Basic mathematics of spacetime
Galilean transformations
A basic goal is to be able to compare measurements made by observers in relative motion. Consider an observer O in frame S who has measured the time and space coordinates of an event, assigning this event three Cartesian coordinates (x, y, z) and the time t as measured on his lattice of synchronized clocks (see Fig. 1-1). A second observer O′ in a different frame S′ measures the same event in her coordinate system and her lattice of synchronized clocks, obtaining (x′, y′, z′, t′). With inertial frames, neither observer is under acceleration, and a simple set of equations allows us to relate the coordinates (x, y, z, t) to (x′, y′, z′, t′). Given that the two coordinate systems are in standard configuration, meaning that they are aligned with parallel axes and that x = x′ = 0 when t = t′ = 0, the coordinate transformation is as follows:
$x' = x - vt,\quad y' = y,\quad z' = z,\quad t' = t.$
Fig. 3-1 illustrates that in Newton's theory, time is universal, not the velocity of light. Consider the following thought experiment: The red arrow illustrates a train that is moving at 0.4 c with respect to the platform. Within the train, a passenger shoots a bullet with a speed of 0.4 c in the frame of the train. The blue arrow illustrates that a person standing on the train tracks measures the bullet as traveling at 0.8 c. This is in accordance with our naive expectations.
More generally, assuming that frame S′ is moving at velocity v with respect to frame S, then within frame S′, observer O′ measures an object moving with velocity $u'$. Its velocity u with respect to frame S, since x = x′ + vt, x′ = u′t′, and t = t′, can be written as x = u′t′ + vt = (u′ + v)t. This leads to u = x/t and ultimately
$u = u' + v,$
which is the common-sense Galilean law for the addition of velocities.
Relativistic composition of velocities
The composition of velocities is quite different in relativistic spacetime. To reduce the complexity of the equations slightly, we introduce a common shorthand for the ratio of the speed of an object relative to light, $\beta = v/c$.
Fig. 3-2a illustrates a red train that is moving forward at a speed given by $\beta = v/c$. From the primed frame of the train, a passenger shoots a bullet with a speed given by $\beta' = u'/c$, where the distance is measured along a line parallel to the red $x'$ axis rather than parallel to the black x axis. What is the composite velocity u of the bullet relative to the platform, as represented by the blue arrow? Referring to Fig. 3-2b:
From the platform, the composite speed of the bullet is read off the diagram as the total distance covered divided by the total elapsed time.
The two yellow triangles are similar because they are right triangles that share a common angle α. In the large yellow triangle, the ratio of the legs is $\frac{v}{c} = \beta$.
The ratios of corresponding sides of the two yellow triangles are constant, so the legs b and r of the small triangle can be expressed in terms of the legs of the large triangle and $\beta' = u'/c$.
Substituting the expressions for b and r into the expression for u in the first step yields Einstein's formula for the addition of velocities:
$u = \frac{v + u'}{1 + \dfrac{v\,u'}{c^2}}.$
The relativistic formula for addition of velocities presented above exhibits several important features (a brief numerical check follows this list):
If $u'$ and v are both very small compared with the speed of light, then the product $u'v/c^2$ becomes vanishingly small, and the overall result becomes indistinguishable from the Galilean formula (Newton's formula) for the addition of velocities: $u = u' + v$. The Galilean formula is a special case of the relativistic formula applicable to low velocities.
If $u'$ is set equal to c, then the formula yields u = c regardless of the starting value of v. The velocity of light is the same for all observers regardless of their motions relative to the emitting source.
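A quick numerical check (a Python sketch, not from the source) contrasts the Galilean and relativistic sums for the train-and-bullet example of Fig. 3-1 and confirms the two limiting cases listed above:

```python
# Compare Galilean and relativistic velocity addition, with all speeds given
# as fractions of c. The 0.4 + 0.4 case corresponds to the train-and-bullet example.
def galilean(u_prime, v):
    return u_prime + v

def einstein(u_prime, v):
    return (u_prime + v) / (1.0 + u_prime * v)   # speeds in units of c

print(einstein(0.4, 0.4))                               # ~0.690 c, not the naive 0.8 c
print(einstein(0.001, 0.002), galilean(0.001, 0.002))   # low speeds: nearly equal
print(einstein(1.0, 0.4))                               # 1.0 c: light speed for all observers
```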
Time dilation and length contraction revisited
It is straightforward to obtain quantitative expressions for time dilation and length contraction. Fig. 3-3 is a composite image containing individual frames taken from two previous animations, simplified and relabeled for the purposes of this section.
To reduce the complexity of the equations slightly, there are a variety of different shorthand notations for ct:
$w = ct$ and $x^0 = ct$ are common.
One also sees very frequently the use of the convention $c = 1$.
In Fig. 3-3a, segments OA and OK represent equal spacetime intervals. Time dilation is represented by the ratio OB/OK. The invariant hyperbola has the equation $w = \sqrt{x^2 + k^2}$ where k = OK, and the red line representing the world line of a particle in motion has the equation w = x/β = xc/v. A bit of algebraic manipulation yields
$OB = \frac{OK}{\sqrt{1 - v^2/c^2}}.$
The expression involving the square root symbol appears very frequently in relativity, and one over the expression is called the Lorentz factor, denoted by the Greek letter gamma $\gamma$:
$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - \beta^2}}.$
If v is greater than or equal to c, the expression for $\gamma$ becomes physically meaningless, implying that c is the maximum possible speed in nature. For any v greater than zero, the Lorentz factor will be greater than one, although the shape of the curve is such that for low speeds, the Lorentz factor is extremely close to one.
In Fig. 3-3b, segments OA and OK represent equal spacetime intervals. Length contraction is represented by the ratio OB/OK. The invariant hyperbola has the equation $x = \sqrt{w^2 + k^2}$, where k = OK, and the edges of the blue band representing the world lines of the endpoints of a rod in motion have slope 1/β = c/v. Event A has coordinates
(x, w) = (γk, γβk). Since the tangent line through A and B has the equation w = (x − OB)/β, we have γβk = (γk − OB)/β and
$OB = \frac{OK}{\gamma} = OK\,\sqrt{1 - v^2/c^2}.$
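As a small numerical sketch (Python, not from the source), the Lorentz factor obtained above gives the dilated time OB = γ·OK of Fig. 3-3a and the contracted length OB = OK/γ for the 1 m rod of Fig. 2-9 moving at 0.5 c:

```python
# Lorentz factor, time dilation and length contraction for a given speed.
from math import sqrt

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / sqrt(1.0 - beta ** 2)

beta = 0.5
k = 1.0                 # OK: proper time interval / proper length of the rod (1 m)

g = gamma(beta)
print(f"gamma at 0.5 c                    = {g:.4f}")        # about 1.1547
print(f"time dilation:  OB = gamma * OK   = {g * k:.4f}")    # moving clock seen dilated
print(f"length contraction: OB = OK/gamma = {k / g:.4f} m")  # 1 m rod measures about 0.866 m
```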
Lorentz transformations
The Galilean transformations and their consequent commonsense law of addition of velocities work well in our ordinary low-speed world of planes, cars and balls. Beginning in the mid-1800s, however, sensitive scientific instrumentation began finding anomalies that did not fit well with the ordinary addition of velocities.
Lorentz transformations are used to transform the coordinates of an event from one frame to another in special relativity.
The Lorentz factor appears in the Lorentz transformations:
$t' = \gamma\left(t - \frac{vx}{c^2}\right),\quad x' = \gamma(x - vt),\quad y' = y,\quad z' = z.$
The inverse Lorentz transformations are:
$t = \gamma\left(t' + \frac{vx'}{c^2}\right),\quad x = \gamma(x' + vt'),\quad y = y',\quad z = z'.$
When v ≪ c and x is small enough, the v2/c2 and vx/c2 terms approach zero, and the Lorentz transformations approximate to the Galilean transformations.
x, t, x′, t′, etc. most often really mean Δx, Δt, Δx′, Δt′, etc. Although for brevity the Lorentz transformation equations are written without deltas, x means Δx, etc. We are, in general, always concerned with the space and time differences between events.
Calling one set of transformations the normal Lorentz transformations and the other the inverse transformations is misleading, since there is no intrinsic difference between the frames. Different authors call one or the other set of transformations the "inverse" set. The forwards and inverse transformations are trivially related to each other, since the S frame can only be moving forwards or in reverse with respect to S′. So inverting the equations simply entails switching the primed and unprimed variables and replacing v with −v.
Example: Terence and Stella are at an Earth-to-Mars space race. Terence is an official at the starting line, while Stella is a participant. At time t = t′ = 0, Stella's spaceship accelerates instantaneously to a speed of 0.5 c. The distance from Earth to Mars is 300 light-seconds. Terence observes Stella crossing the finish-line clock at t = 600.00 s. But Stella observes the time on her ship chronometer to be t′ = 519.62 s as she passes the finish line, and she calculates the distance between the starting and finish lines, as measured in her frame, to be 259.81 light-seconds.
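The numbers in this example follow directly from the Lorentz transformations (a Python sketch, not from the source, working in units where c = 1 and distances are in light-seconds):

```python
# Verify the Earth-to-Mars race numbers with the Lorentz transformations.
from math import sqrt

beta = 0.5                 # Stella's speed as a fraction of c
L = 300.0                  # Earth-Mars distance in light-seconds (Terence's frame)
gamma = 1.0 / sqrt(1.0 - beta ** 2)

t_finish = L / beta                      # Terence's time at the finish line (s)
t_prime = gamma * (t_finish - beta * L)  # Stella's chronometer: t' = gamma (t - v x / c^2)
L_prime = L / gamma                      # start-to-finish distance in Stella's frame

print(f"Terence's clock at finish:  {t_finish:.2f} s")            # 600.00 s
print(f"Stella's chronometer:       {t_prime:.2f} s")             # 519.62 s
print(f"Distance in Stella's frame: {L_prime:.2f} light-seconds") # 259.81
```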
Deriving the Lorentz transformations
There have been many dozens of derivations of the Lorentz transformations since Einstein's original work in 1905, each with its particular focus. Although Einstein's derivation was based on the invariance of the speed of light, there are other physical principles that may serve as starting points. Ultimately, these alternative starting points can be considered different expressions of the underlying principle of locality, which states that the influence that one particle exerts on another can not be transmitted instantaneously.
The derivation given here and illustrated in Fig. 3-5 is based on one presented by Bais and makes use of previous results from the Relativistic Composition of Velocities, Time Dilation, and Length Contraction sections. Event P has coordinates (w, x) in the black "rest system" and coordinates (w′, x′) in the red frame that is moving with velocity parameter β = v/c. To determine w′ and x′ in terms of w and x (or the other way around) it is easier at first to derive the inverse Lorentz transformation.
There can be no such thing as length expansion/contraction in the transverse directions. y′ must equal y and z′ must equal z, otherwise whether a fast moving 1 m ball could fit through a 1 m circular hole would depend on the observer. The first postulate of relativity states that all inertial frames are equivalent, and transverse expansion/contraction would violate this law.
From the drawing, w = a + b and x = r + s.
From previous results using similar triangles, we know that $\frac{s}{a} = \frac{b}{r} = \frac{v}{c} = \beta$.
Because of time dilation, $a = \gamma w'$.
Substituting this result and $b = \beta r$ into w = a + b yields $w = \gamma w' + \beta r$.
Length contraction and similar triangles give us $r = \gamma x'$ and $s = \beta a = \beta\gamma w'$.
Substituting the expressions for s, a, r and b into the equations in the first step immediately yields
$w = \gamma(w' + \beta x'),\qquad x = \gamma(x' + \beta w').$
The above equations are alternate expressions for the t and x equations of the inverse Lorentz transformation, as can be seen by substituting ct for w, ct′ for w′, and v/c for β. From the inverse transformation, the equations of the forwards transformation can be derived by solving for $t'$ and $x'$.
Linearity of the Lorentz transformations
The Lorentz transformations have a mathematical property called linearity, since and are obtained as linear combinations of x and t, with no higher powers involved. The linearity of the transformation reflects a fundamental property of spacetime that was tacitly assumed in the derivation, namely, that the properties of inertial frames of reference are independent of location and time. In the absence of gravity, spacetime looks the same everywhere. All inertial observers will agree on what constitutes accelerating and non-accelerating motion. Any one observer can use her own measurements of space and time, but there is nothing absolute about them. Another observer's conventions will do just as well.
A result of linearity is that if two Lorentz transformations are applied sequentially, the result is also a Lorentz transformation.
Example: Terence observes Stella speeding away from him at 0.500 c, and he can use the Lorentz transformations with β = 0.500 to relate Stella's measurements to his own. Stella, in her frame, observes Ursula traveling away from her at 0.250 c, and she can use the Lorentz transformations with β = 0.250 to relate Ursula's measurements with her own. Because of the linearity of the transformations and the relativistic composition of velocities, Terence can use the Lorentz transformations with β = 0.666 to relate Ursula's measurements with his own.
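The composition property can be checked numerically (a Python sketch, not from the source): multiplying the two boost matrices reproduces a single boost at the relativistically composed velocity.

```python
# Two successive Lorentz boosts along x equal a single boost at the composed velocity.
from math import sqrt

def boost(beta):
    """2x2 Lorentz boost matrix acting on (ct, x) for speed beta = v/c."""
    g = 1.0 / sqrt(1.0 - beta ** 2)
    return [[g, -g * beta],
            [-g * beta, g]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

b1, b2 = 0.500, 0.250
b_comp = (b1 + b2) / (1.0 + b1 * b2)          # relativistic composition: ~0.667
print(f"composed velocity parameter: {b_comp:.3f} c")

combined = matmul(boost(b2), boost(b1))
direct = boost(b_comp)
print("max difference between matrices:",
      max(abs(combined[i][j] - direct[i][j]) for i in range(2) for j in range(2)))
```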
Doppler effect
The Doppler effect is the change in frequency or wavelength of a wave for a receiver and source in relative motion. For simplicity, we consider here two basic scenarios: (1) The motions of the source and/or receiver are exactly along the line connecting them (longitudinal Doppler effect), and (2) the motions are at right angles to the said line (transverse Doppler effect). We are ignoring scenarios where they move along intermediate angles.
Longitudinal Doppler effect
The classical Doppler analysis deals with waves that are propagating in a medium, such as sound waves or water ripples, and which are transmitted between sources and receivers that are moving towards or away from each other. The analysis of such waves depends on whether the source, the receiver, or both are moving relative to the medium. Given the scenario where the receiver is stationary with respect to the medium, and the source is moving directly away from the receiver at a speed of vs for a velocity parameter of βs, the wavelength is increased, and the observed frequency f is given by
On the other hand, given the scenario where the source is stationary, and the receiver is moving directly away from the source at a speed of vr for a velocity parameter of βr, the wavelength is not changed, but the transmission velocity of the waves relative to the receiver is decreased, and the observed frequency f is reduced by the factor (1 − βr).
Light, unlike sound or water ripples, does not propagate through a medium, and there is no distinction between a source moving away from the receiver or a receiver moving away from the source. Fig. 3-6 illustrates a relativistic spacetime diagram showing a source separating from the receiver with a velocity parameter β. Because of time dilation, the source's clock runs slow as measured by the receiver, and since the slope of the green light ray is −1, the interval between successive received wave crests is stretched by a further factor of (1 + β). Hence, the relativistic Doppler effect for a receding source reduces the received frequency by the factor √((1 − β)/(1 + β)).
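The three receding-case factors quoted above can be compared numerically. The sketch below (the function names are ours) shows that for the same velocity parameter the relativistic result lies between the two classical results; in fact it is their geometric mean, reflecting the absence of any preferred medium.

```python
import math

def classical_source_receding(f_src, beta_s):
    # Receiver at rest in the medium, source receding: wavelength is stretched.
    return f_src / (1.0 + beta_s)

def classical_receiver_receding(f_src, beta_r):
    # Source at rest in the medium, receiver receding: wave speed relative to the receiver drops.
    return f_src * (1.0 - beta_r)

def relativistic_doppler(f_src, beta):
    # No medium, so only the relative velocity of source and receiver matters.
    return f_src * math.sqrt((1.0 - beta) / (1.0 + beta))

f0, beta = 1.0, 0.5
print(classical_source_receding(f0, beta))    # 0.667
print(relativistic_doppler(f0, beta))         # 0.577, the geometric mean of the two classical values
print(classical_receiver_receding(f0, beta))  # 0.500
```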
Transverse Doppler effect
Suppose that a source and a receiver, both approaching each other in uniform inertial motion along non-intersecting lines, are at their closest approach to each other. It would appear that the classical analysis predicts that the receiver detects no Doppler shift. Due to subtleties in the analysis, that expectation is not necessarily true. Nevertheless, when appropriately defined, transverse Doppler shift is a relativistic effect that has no classical analog. The subtleties are these:
In scenario (a), the point of closest approach is frame-independent and represents the moment where there is no change in distance versus time (i.e. dr/dt = 0 where r is the distance between receiver and source) and hence no longitudinal Doppler shift. The source observes the receiver as being illuminated by light of frequency , but also observes the receiver as having a time-dilated clock. In frame S, the receiver is therefore illuminated by blueshifted light of frequency
In scenario (b) the illustration shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on. Because the source's clocks are time dilated as measured in frame S, and since dr/dt was equal to zero at this point, the light from the source, emitted from this closest point, is redshifted with frequency
Scenarios (c) and (d) can be analyzed by simple time dilation arguments. In (c), the receiver observes light from the source as being blueshifted by a factor of γ, and in (d), the light is redshifted. The only seeming complication is that the orbiting objects are in accelerated motion. However, if an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation. (The converse, however, is not true.) Most reports of transverse Doppler shift refer to the effect as a redshift and analyze the effect in terms of scenarios (b) or (d).
Energy and momentum
Extending momentum to four dimensions
In classical mechanics, the state of motion of a particle is characterized by its mass and its velocity. Linear momentum, the product of a particle's mass and velocity, is a vector quantity, possessing the same direction as the velocity: p = mv. It is a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change.
In relativistic mechanics, the momentum vector is extended to four dimensions. Added to the momentum vector is a time component that allows the spacetime momentum vector to transform like the spacetime position vector (ct, x). In exploring the properties of the spacetime momentum, we start, in Fig. 3-8a, by examining what a particle looks like at rest. In the rest frame, the spatial component of the momentum is zero, i.e. p = 0, but the time component equals mc.
We can obtain the transformed components of this vector in the moving frame by using the Lorentz transformations, or we can read it directly from the figure because we know that and , since the red axes are rescaled by gamma. Fig. 3-8b illustrates the situation as it appears in the moving frame. It is apparent that the space and time components of the four-momentum go to infinity as the velocity of the moving frame approaches c.
We will use this information shortly to obtain an expression for the four-momentum.
Momentum of light
Light particles, or photons, travel at the speed of c, the constant that is conventionally known as the speed of light. This statement is not a tautology, since many modern formulations of relativity do not start with constant speed of light as a postulate. Photons therefore propagate along a lightlike world line and, in appropriate units, have equal space and time components for every observer.
A consequence of Maxwell's theory of electromagnetism is that light carries energy and momentum, and that their ratio is a constant: E/p = c. Rearranging, p = E/c, and since for photons the space and time components are equal, E/c must therefore be equated with the time component of the spacetime momentum vector.
Photons travel at the speed of light, yet have finite momentum and energy. For this to be so, the mass term in γmc must be zero, meaning that photons are massless particles. Infinity times zero is an ill-defined quantity, but E/c is well-defined.
By this analysis, if the energy of a photon equals E in the rest frame, it equals (1 − β)γE in a frame receding with velocity parameter β. This result can be derived by inspection of Fig. 3-9 or by application of the Lorentz transformations, and is consistent with the analysis of Doppler effect given previously.
Mass–energy relationship
Consideration of the interrelationships between the various components of the relativistic momentum vector led Einstein to several important conclusions.
In the low speed limit as β approaches zero, γ approaches 1, so the spatial component of the relativistic momentum γmv approaches mv, the classical term for momentum. Following this perspective, γm can be interpreted as a relativistic generalization of m. Einstein proposed that the relativistic mass of an object increases with velocity according to the formula mrel = γm.
Likewise, comparing the time component of the relativistic momentum with that of the photon, γmc = E/c, so that Einstein arrived at the relationship E = γmc². Simplified to the case of zero velocity, this is Einstein's famous equation E = mc² relating energy and mass.
Another way of looking at the relationship between mass and energy is to consider a series expansion of γmc² at low velocity: γmc² ≈ mc² + (1/2)mv² + higher-order terms.
The second term is just an expression for the kinetic energy of the particle. Mass indeed appears to be another form of energy.
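A quick numerical illustration of how accurate this low-speed expansion is (the rest mass and speed chosen below are arbitrary):

```python
import math

c = 299_792_458.0   # speed of light, m/s
m = 1.0             # rest mass, kg
v = 3.0e6           # about 1% of c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
exact = gamma * m * c**2
approx = m * c**2 + 0.5 * m * v**2        # rest energy plus Newtonian kinetic energy

print(exact - m * c**2)            # relativistic kinetic energy
print(approx - m * c**2)           # 0.5 * m * v**2
print((exact - approx) / exact)    # fractional error of order beta**4, a few parts per billion here
```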
The concept of relativistic mass that Einstein introduced in 1905, mrel, although amply validated every day in particle accelerators around the globe (or indeed in any instrumentation whose use depends on high velocity particles, such as electron microscopes, old-fashioned color television sets, etc.), has nevertheless not proven to be a fruitful concept in physics in the sense that it is not a concept that has served as a basis for other theoretical development. Relativistic mass, for instance, plays no role in general relativity.
For this reason, as well as for pedagogical concerns, most physicists currently prefer a different terminology when referring to the relationship between mass and energy. "Relativistic mass" is a deprecated term. The term "mass" by itself refers to the rest mass or invariant mass, and is equal to the invariant length of the relativistic momentum vector. Expressed as a formula, E² = (pc)² + (mrest c²)².
This formula applies to all particles, massless as well as massive. For photons where mrest equals zero, it yields E = pc.
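A minimal sketch of this invariant-mass relation (the numerical values are chosen only so the arithmetic comes out exact):

```python
import math

def rest_energy(E, pc):
    """Rest energy m*c**2 recovered from total energy E and momentum times c, same units."""
    return math.sqrt(E**2 - pc**2)

# A massive particle: a 3-4-5 triangle, so E**2 - (pc)**2 = (m c**2)**2 exactly.
print(rest_energy(E=5.0, pc=4.0))   # 3.0

# A photon: E = pc, so the invariant mass is zero.
print(rest_energy(E=2.0, pc=2.0))   # 0.0
```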
Four-momentum
Because of the close relationship between mass and energy, the four-momentum (also called 4-momentum) is also called the energy–momentum 4-vector. Using an uppercase P to represent the four-momentum and a lowercase p to denote the spatial momentum, the four-momentum may be written as
P = (E/c, px, py, pz)
or alternatively,
P = (E, px, py, pz),
using the convention that c = 1.
Conservation laws
In physics, conservation laws state that certain particular measurable properties of an isolated physical system do not change as the system evolves over time. In 1915, Emmy Noether discovered that underlying each conservation law is a fundamental symmetry of nature. The fact that physical processes do not care where in space they take place (space translation symmetry) yields conservation of momentum, the fact that such processes do not care when they take place (time translation symmetry) yields conservation of energy, and so on. In this section, we examine the Newtonian views of conservation of mass, momentum and energy from a relativistic perspective.
Total momentum
To understand how the Newtonian view of conservation of momentum needs to be modified in a relativistic context, we examine the problem of two colliding bodies limited to a single dimension.
In Newtonian mechanics, two extreme cases of this problem may be distinguished yielding mathematics of minimum complexity:
(1) The two bodies rebound from each other in a completely elastic collision.
(2) The two bodies stick together and continue moving as a single particle. This second case is the case of completely inelastic collision.
For both cases (1) and (2), momentum, mass, and total energy are conserved. However, kinetic energy is not conserved in cases of inelastic collision. A certain fraction of the initial kinetic energy is converted to heat.
In case (2), two masses with momenta p1 = m1v1 and p2 = m2v2 collide to produce a single particle of conserved mass m = m1 + m2 traveling at the center of mass velocity of the original system, vcm = (m1v1 + m2v2)/(m1 + m2). The total momentum is conserved.
Fig. 3-10 illustrates the inelastic collision of two particles from a relativistic perspective. The time components E1/c and E2/c add up to the total E/c of the resultant vector, meaning that energy is conserved. Likewise, the space components p1 and p2 add up to form the p of the resultant vector. The four-momentum is, as expected, a conserved quantity. However, the invariant mass of the fused particle, given by the point where the invariant hyperbola of the total momentum intersects the energy axis, is not equal to the sum of the invariant masses of the individual particles that collided. Indeed, it is larger than the sum of the individual masses: M > m1 + m2.
Looking at the events of this scenario in reverse sequence, we see that non-conservation of mass is a common occurrence: when an unstable elementary particle spontaneously decays into two lighter particles, total energy is conserved, but the mass is not. Part of the mass is converted into kinetic energy.
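A short numerical sketch of such a collision (or, read in reverse, such a decay), in units where c = 1 and with invented masses and speeds: the four-momenta are added component by component, and the invariant mass of the fused particle exceeds the sum of the rest masses, the excess coming from the incoming kinetic energy.

```python
import math

def four_momentum(m, v):
    """(E, p) of a particle of rest mass m moving with speed v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * m, gamma * m * v

# Two particles of rest mass 1 approach each other at 0.6 c (zero-momentum frame).
E1, p1 = four_momentum(1.0, +0.6)
E2, p2 = four_momentum(1.0, -0.6)

E_tot, p_tot = E1 + E2, p1 + p2        # energy and momentum are conserved in the collision
M = math.sqrt(E_tot**2 - p_tot**2)     # invariant mass of the fused particle

print(p_tot)   # 0.0: this is the zero-momentum frame
print(M)       # 2.5, larger than 1.0 + 1.0: kinetic energy has become rest mass
```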
Choice of reference frames
The freedom to choose any frame in which to perform an analysis allows us to pick one which may be particularly convenient. For analysis of momentum and energy problems, the most convenient frame is usually the "center-of-momentum frame" (also called the zero-momentum frame, or COM frame). This is the frame in which the space component of the system's total momentum is zero. Fig. 3-11 illustrates the breakup of a high speed particle into two daughter particles. In the lab frame, the daughter particles are preferentially emitted in a direction oriented along the original particle's trajectory. In the COM frame, however, the two daughter particles are emitted in opposite directions, although their masses and the magnitude of their velocities are generally not the same.
Energy and momentum conservation
In a Newtonian analysis of interacting particles, transformation between frames is simple because all that is necessary is to apply the Galilean transformation to all velocities. Since p = mv, the momentum of a particle in a frame moving with relative velocity u is simply p′ = m(v − u) = p − mu. If the total momentum of an interacting system of particles is observed to be conserved in one frame, it will likewise be observed to be conserved in any other frame.
Conservation of momentum in the COM frame amounts to the requirement that p = 0 both before and after collision. In the Newtonian analysis, conservation of mass dictates that the total mass m1 + m2 is unchanged. In the simplified, one-dimensional scenarios that we have been considering, only one additional constraint is necessary before the outgoing momenta of the particles can be determined—an energy condition. In the one-dimensional case of a completely elastic collision with no loss of kinetic energy, the outgoing velocities of the rebounding particles in the COM frame will be precisely equal and opposite to their incoming velocities. In the case of a completely inelastic collision with total loss of kinetic energy, the outgoing velocities of the rebounding particles will be zero.
Newtonian momenta, calculated as p = mv, fail to behave properly under Lorentzian transformation. The linear transformation of velocities v′ = v + u is replaced by the highly nonlinear v′ = (v + u)/(1 + vu/c²),
so that a calculation demonstrating conservation of momentum in one frame will be invalid in other frames. Einstein was faced with either having to give up conservation of momentum, or to change the definition of momentum. This second option was what he chose.
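The point can be made concrete with a completely inelastic collision viewed from two frames (a Python sketch with invented numbers; `add_velocities`, `p_newton` and `p_rel` are our own names). Newtonian momentum mv balances in the zero-momentum frame but not in a boosted frame, whereas the relativistic momentum γmv balances in both.

```python
import math

def add_velocities(v, u):
    """Relativistic composition of collinear velocities, in units where c = 1."""
    return (v + u) / (1.0 + v * u)

def p_newton(m, v):
    return m * v

def p_rel(m, v):
    return m * v / math.sqrt(1.0 - v**2)

# Completely inelastic collision in the zero-momentum frame: two unit rest masses
# at +/- 0.6 c fuse into one particle at rest. Its invariant mass equals the total
# energy there, 2 * gamma * m = 2.5 (not 2.0).
m, v1, v2 = 1.0, 0.6, -0.6
M_fused, v_fused = 2.5, 0.0

# Describe the same collision from a frame boosted by -0.5 c.
u = -0.5
w1, w2, w_fused = (add_velocities(v, u) for v in (v1, v2, v_fused))

print(p_newton(m, w1) + p_newton(m, w2), "->", p_newton(2.0 * m, w_fused))
# about -0.703 -> -1.0 : Newtonian momentum is NOT conserved in this frame
print(p_rel(m, w1) + p_rel(m, w2), "->", p_rel(M_fused, w_fused))
# about -1.443 -> -1.443 : relativistic momentum is conserved
```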
The relativistic conservation law for energy and momentum replaces the three classical conservation laws for energy, momentum and mass. Mass is no longer conserved independently, because it has been subsumed into the total relativistic energy. This makes the relativistic conservation of energy a simpler concept than in nonrelativistic mechanics, because the total energy is conserved without any qualifications. Kinetic energy converted into heat or internal potential energy shows up as an increase in mass.
Introduction to curved spacetime
Technical topics
Is spacetime really curved?
In Poincaré's conventionalist views, the essential criteria according to which one should select a Euclidean versus non-Euclidean geometry would be economy and simplicity. A realist would say that Einstein discovered spacetime to be non-Euclidean. A conventionalist would say that Einstein merely found it more convenient to use non-Euclidean geometry. The conventionalist would maintain that Einstein's analysis said nothing about what the geometry of spacetime really is.
That said, two questions can be asked:
Is it possible to represent general relativity in terms of flat spacetime?
Are there any situations where a flat spacetime interpretation of general relativity may be more convenient than the usual curved spacetime interpretation?
In response to the first question, a number of authors including Deser, Grishchuk, Rosen, Weinberg, etc. have provided various formulations of gravitation as a field in a flat manifold. Those theories are variously called "bimetric gravity", the "field-theoretical approach to general relativity", and so forth. Kip Thorne has provided a popular review of these theories.
The flat spacetime paradigm posits that matter creates a gravitational field that causes rulers to shrink when they are turned from circumferential orientation to radial, and that causes the ticking rates of clocks to dilate. The flat spacetime paradigm is fully equivalent to the curved spacetime paradigm in that they both represent the same physical phenomena. However, their mathematical formulations are entirely different. Working physicists routinely switch between using curved and flat spacetime techniques depending on the requirements of the problem. The flat spacetime paradigm is convenient when performing approximate calculations in weak fields. Hence, flat spacetime techniques tend to be used when solving gravitational wave problems, while curved spacetime techniques tend to be used in the analysis of black holes.
Asymptotic symmetries
The spacetime symmetry group for Special Relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries if any might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at lightlike infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields.
What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances.
Riemannian geometry
Curved manifolds
For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold; that is, it is equipped with a smooth metric of Lorentzian signature. The metric determines the geometry of spacetime, as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light is equal to 1.
A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event p. Another reference frame may be identified by a second coordinate chart about p. Two observers (one in each reference frame) may describe the same event p but obtain different descriptions.
Usually, many overlapping coordinate charts are needed to cover a manifold. Given two coordinate charts, one containing p (representing an observer) and another also containing p (representing another observer), the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data—locally.
For example, two observers, one of whom is on Earth and the other on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event p). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples (as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold. In fact, relativity theory requires more than this in the sense that it stipulates these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented.
Geodesics are said to be timelike, null, or spacelike if the tangent vector to one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by timelike and null (lightlike) geodesics, respectively.
Privileged character of 3+1 spacetime
| Physical sciences | Basics_6 | null |
28767 | https://en.wikipedia.org/wiki/Statics | Statics | Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment.
If F is the total of the forces acting on the system, m is the mass of the system and a is the acceleration of the system, Newton's second law states that F = ma (F and a are vector quantities, i.e. they have both magnitude and direction). If F = 0, then a = 0. As the acceleration of a system in static equilibrium is zero, the system is either at rest, or its center of mass moves at constant velocity.
The application of the assumption of zero acceleration to the summation of moments acting on the system leads to M = Iα = 0, where M is the summation of all moments acting on the system, I is the moment of inertia of the mass and α is the angular acceleration of the system. For a system where M = 0, it is also true that α = 0.
Together, the equations F = 0 (the 'first condition for equilibrium') and M = 0 (the 'second condition for equilibrium') can be used to solve for unknown quantities acting on the system.
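As a minimal worked example of the two conditions (the simply supported beam and its numbers are our own illustration, not taken from the text), the reactions at the two supports of a beam carrying a single point load follow from one moment equation and one force equation:

```python
# Simply supported beam of length L with a downward point load P at distance a from support A.
# Unknowns: the vertical reactions R_A and R_B.
L, a, P = 6.0, 2.0, 10.0   # metres, metres, kilonewtons (illustrative values)

# Second condition (sum of moments about A is zero):  R_B * L - P * a = 0
R_B = P * a / L
# First condition (sum of vertical forces is zero):   R_A + R_B - P = 0
R_A = P - R_B

print(R_A, R_B)   # 6.667 and 3.333 kN, which together balance the applied 10 kN
```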
History
Archimedes (c. 287–c. 212 BC) did pioneering work in statics.
Later developments in the field of statics are found in works of Thebit.
Background
Force
Force is the action of one body on another. A force is either a push or a pull, and it tends to move a body in the direction of its action. The action of a force is characterized by its magnitude, by the direction of its action, and by its point of application (or point of contact). Thus, force is a vector quantity, because its effect depends on the direction as well as on the magnitude of the action.
Forces are classified as either contact or body forces. A contact force is produced by direct physical contact; an example is the force exerted on a body by a supporting surface. A body force is generated by virtue of the position of a body within a force field such as a gravitational, electric, or magnetic field and is independent of contact with any other body; an example of a body force is the weight of a body in the Earth's gravitational field.
Moment of a force
In addition to the tendency to move a body in the direction of its application, a force can also tend to rotate a body about an axis. The axis may be any line which neither intersects nor is parallel to the line of action of the force. This rotational tendency is known as moment of force (M). Moment is also referred to as torque.
Moment about a point
The magnitude of the moment of a force about a point O is equal to the perpendicular distance from O to the line of action of F, multiplied by the magnitude of the force: M = F · d, where
F = the force applied
d = the perpendicular distance from the axis to the line of action of the force. This perpendicular distance is called the moment arm.
The direction of the moment is given by the right hand rule, where counter clockwise (CCW) is out of the page, and clockwise (CW) is into the page. The moment direction may be accounted for by using a stated sign convention, such as a plus sign (+) for counterclockwise moments and a minus sign (−) for clockwise moments, or vice versa. Moments can be added together as vectors.
In vector format, the moment can be defined as the cross product between the radius vector, r (the vector from point O to the line of action), and the force vector, F: M = r × F
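A small numerical illustration of this cross-product definition, with arbitrarily chosen vectors:

```python
import numpy as np

r = np.array([2.0, 0.0, 0.0])   # metres, from O to the point of application
F = np.array([0.0, 5.0, 0.0])   # newtons

M = np.cross(r, F)
print(M)                   # [0. 0. 10.] N*m, counterclockwise about the z-axis (out of the page)
print(np.linalg.norm(M))   # 10.0 = F * d, with moment arm d = 2 m
```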
Varignon's theorem
Varignon's theorem states that the moment of a force about any point is equal to the sum of the moments of the components of the force about the same point.
Equilibrium equations
The static equilibrium of a particle is an important concept in statics. A particle is in equilibrium only if the resultant of all forces acting on the particle is equal to zero. In a rectangular coordinate system the equilibrium equations can be represented by three scalar equations, where the sums of forces in all three directions are equal to zero. An engineering application of this concept is determining the tensions of up to three cables under load, for example the forces exerted on each cable of a hoist lifting an object or of guy wires restraining a hot air balloon to the ground.
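A sketch of that cable-tension application with made-up geometry: writing the vector equilibrium of the ring as three scalar equations gives a small linear system for the three tensions.

```python
import numpy as np

# Unit vectors along the three cables, pointing from the ring toward the anchor points
# (illustrative geometry only).
u1 = np.array([-0.5,   0.5,   0.707])
u2 = np.array([ 0.5,   0.5,   0.707])
u3 = np.array([ 0.0,  -0.707, 0.707])
W  = np.array([0.0, 0.0, -1000.0])     # weight of the suspended object, newtons

# Equilibrium of the ring: T1*u1 + T2*u2 + T3*u3 + W = 0
A = np.column_stack([u1, u2, u3])
T = np.linalg.solve(A, -W)
print(T)   # tensions in newtons; all positive for a sensible geometry
```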
Moment of inertia
In classical mechanics, moment of inertia, also called mass moment, rotational inertia, polar moment of inertia of mass, or the angular mass (SI units kg·m²), is a measure of an object's resistance to changes to its rotation. It is the inertia of a rotating body with respect to its rotation. The moment of inertia plays much the same role in rotational dynamics as mass does in linear dynamics, describing the relationship between angular momentum and angular velocity, torque and angular acceleration, and several other quantities. The symbols I and J are usually used to refer to the moment of inertia or polar moment of inertia.
While a simple scalar treatment of the moment of inertia suffices for many situations, a more advanced tensor treatment allows the analysis of such complicated systems as spinning tops and gyroscopic motion.
The concept was introduced by Leonhard Euler in his 1765 book Theoria motus corporum solidorum seu rigidorum; he discussed the moment of inertia and many related concepts, such as the principal axis of inertia.
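As a simple illustration of the scalar treatment (the masses and geometry are invented), the moment of inertia of a set of point masses about an axis is the sum of m·r², and standard closed forms such as that of a uniform thin rod follow from the same definition by integration:

```python
# Moment of inertia of point masses about an axis: I = sum of m * r**2.
masses = [(2.0, 0.5), (1.0, 1.0), (0.5, 2.0)]   # (mass in kg, distance from axis in m)
I_points = sum(m * r**2 for m, r in masses)
print(I_points)    # 3.5 kg*m^2

# Uniform thin rod of mass M and length L about a perpendicular axis through its centre.
M, L = 3.0, 2.0
I_rod = M * L**2 / 12.0
print(I_rod)       # 1.0 kg*m^2
```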
Applications
Solids
Statics is used in the analysis of structures, for instance in architectural and structural engineering. Strength of materials is a related field of mechanics that relies heavily on the application of static equilibrium. A key concept is the center of gravity of a body at rest: it represents an imaginary point at which all the mass of a body resides. The position of the point relative to the foundations on which a body lies determines its stability in response to external forces. If the center of gravity exists outside the foundations, then the body is unstable because there is a torque acting: any small disturbance will cause the body to fall or topple. If the center of gravity exists within the foundations, the body is stable since no net torque acts on the body. If the center of gravity coincides with the foundations, then the body is said to be metastable.
Fluids
Hydrostatics, also known as fluid statics, is the study of fluids at rest (i.e. in static equilibrium). The characteristic of any fluid at rest is that the force exerted on any particle of the fluid is the same at all points at the same depth (or altitude) within the fluid. If the net force is greater than zero the fluid will move in the direction of the resulting force. This concept was first formulated in a slightly extended form by French mathematician and philosopher Blaise Pascal in 1647 and became known as Pascal's Law. It has many important applications in hydraulics. Archimedes, Abū Rayhān al-Bīrūnī, Al-Khazini and Galileo Galilei were also major figures in the development of hydrostatics.
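A minimal sketch of the depth dependence behind this statement, using the usual hydrostatic relation p = p0 + ρgh with illustrative fluid properties:

```python
def hydrostatic_pressure(depth_m, rho=1000.0, g=9.81, p0=101_325.0):
    """Absolute pressure (Pa) at a given depth in a fluid at rest."""
    return p0 + rho * g * depth_m

print(hydrostatic_pressure(0.0))    # 101325 Pa at the surface
print(hydrostatic_pressure(10.0))   # ~199425 Pa: roughly one extra atmosphere per 10 m of water
```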
| Physical sciences | Basics_10 | null |
28769 | https://en.wikipedia.org/wiki/Maritime%20transport | Maritime transport | Maritime transport (or ocean transport) or more generally waterborne transport, is the transport of people (passengers) or goods (cargo) via waterways. Freight transport by sea has been widely used throughout recorded history. The advent of aviation has diminished the importance of sea travel for passengers, though it is still popular for short trips and pleasure cruises. Transport by water is cheaper than transport by air or ground, but significantly slower for longer distances. Maritime transport accounts for roughly 80% of international trade, according to UNCTAD in 2020.
Maritime transport can be realized over any distance by boat, ship, sailboat or barge, over oceans and lakes, through canals or along rivers. Shipping may be for commerce, recreation, or military purposes. While extensive inland shipping is less critical today, the major waterways of the world including many canals are still very important and are integral parts of worldwide economies. Virtually any material can be moved by water; however, water transport becomes impractical when delivery is time-critical, as with various types of perishable produce. Still, water transport is highly cost effective with regular schedulable cargoes, such as trans-oceanic shipping of consumer products – and especially for heavy loads or bulk cargos, such as coal, coke, ores, or grains. Arguably, the Industrial Revolution had its first impacts where cheap water transport by canal, navigations, or shipping by all types of watercraft on natural waterways supported cost-effective bulk transport.
Containerization revolutionized maritime transport starting in the 1970s. "General cargo" includes goods packaged in boxes, cases, pallets, and barrels. When a cargo is carried in more than one mode, it is intermodal or co-modal.
Description
A nation's shipping fleet (variously called merchant navy, merchant marine, or merchant fleet) consists of the ships operated by civilian crews to transport passengers or cargo from one place to another. Merchant shipping also includes water transport over the river and canal systems connecting inland destinations, large and small. For example, during the early modern era, cities in the Hanseatic League began taming Northern Europe's rivers and harbors. Similarly, the Saint Lawrence Seaway connects the port cities on the Great Lakes in Canada and the United States with the Atlantic Ocean shipping routes, while the various Illinois canals connect the Great Lakes and Canada with New Orleans. Ores, coal, and grains can travel along the rivers of the American Midwest to Pittsburgh or to Birmingham, Alabama. Professional mariners are known as merchant seamen, merchant sailors, and merchant mariners, or simply seamen, sailors, or mariners. The terms "seaman" or "sailor" may also refer to a member of a country's martial navy.
According to the 2005 CIA World Factbook, the total number of merchant ships of at least 1,000 gross register tons in the world was 30,936. In 2010, it was 38,988, an increase of 26%, across many countries. A quarter of all merchant mariners were born in the Philippines.
Liners and tramps
A ship may also be categorized as to how it is operated.
A liner will have a regular run and operate to a schedule. The scheduled operation requires that such ships are better equipped to deal with causes of potential delay such as bad weather. They are generally higher powered than tramp ships with better seakeeping qualities, thus they are significantly more expensive to build. Liners are typically built for passenger and container operation though past common uses also included mail and general cargo.
A tramp (trader) has no fixed run but will go wherever a suitable cargo takes it. Thus a ship and crew may be chartered from the ship owner to fetch a cargo of grain from Canada to Latvia; the ship may then be required to carry a cargo of coal from Britain to Melanesia. Bulk carriers and some cruise ships are examples of ships built to operate in this manner.
Ships and watercraft
Ships and other watercraft are used for maritime transport. Types can be distinguished by propulsion, size or cargo type. Recreational or educational craft still use wind power, while some smaller craft use internal combustion engines to drive one or more propellers, or in the case of jet boats, an inboard water jet. In shallow-draft areas, such as the Everglades, some craft, such as the hovercraft, are propelled by large pusher-prop fans.
Most modern merchant ships can be placed in one of a few categories, such as:
Typical in-transit times
A cargo ship sailing from a European port to a US one will typically take 10–12 days depending on water currents and other factors. In order to make container ship transport more economical, ship operators sometimes reduce cruising speed, thereby increasing transit time, to reduce fuel consumption, a strategy referred to as "slow steaming".
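A rough sketch of the slow-steaming trade-off. It relies on the commonly used rule of thumb that propulsive power grows roughly with the cube of speed, so that fuel burned per voyage grows roughly with its square; the distance, speeds, and baseline consumption below are invented for illustration rather than taken from any source.

```python
def voyage_estimate(distance_nm, speed_kn, base_speed_kn=24.0, base_tonnes_per_day=150.0):
    """Transit time and fuel for a voyage, assuming daily fuel burn scales with speed**3."""
    days = distance_nm / (speed_kn * 24.0)
    tonnes_per_day = base_tonnes_per_day * (speed_kn / base_speed_kn) ** 3
    return days, days * tonnes_per_day

for speed in (24.0, 18.0):
    days, fuel = voyage_estimate(4000.0, speed)
    print(f"{speed:>4} kn: {days:.1f} days, {fuel:.0f} tonnes of fuel")
# Slowing from 24 kn to 18 kn lengthens the voyage by roughly 2.3 days
# but cuts fuel use by about 44% under this assumption.
```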
History
Professional mariners
A ship's complement can be divided into four categories:
The deck department
The engine department
The steward's department
Other.
Deck department
Officer positions in the deck department include but are not limited to the Master and his Chief, Second, and Third officers. The official classifications for unlicensed members of the deck department are Able Seaman and Ordinary Seaman.
A common deck crew for a ship includes:
(1) Chief Officer/Chief Mate
(1) Second Officer/Second Mate
(1) Third Officer/Third Mate
(1) Boatswain
(2–6) Able Seamen
(0–2) Ordinary Seamen
A deck cadet is a person who is carrying out mandatory sea time to achieve their officer of the watch certificate. Their time on board is spent learning the operations and tasks of everyday life on a merchant vessel.
Engine department
A ship's engine department consists of the members of a ship's crew that operate and maintain the propulsion and other systems on board the vessel. Engine staff also deal with the "Hotel" facilities on board, notably the sewage, lighting, air conditioning and water systems. They deal with bulk fuel transfers, and require training in firefighting and first aid, as well as in dealing with the ship's boats and other nautical tasks, especially with cargo loading/discharging gear and safety systems, though the specific cargo discharge function remains the responsibility of deck officers and deck workers. On LPG and LNG tankers, however, a cargo engineer works with the deck department during cargo operations, as well as being a watchkeeping engineer.
A common engine crew for a ship includes:
(1) Chief engineer
(1) Second engineer / first assistant engineer
(1) Third engineer / second assistant engineer
(1 or 2) Fourth engineer / third assistant engineer
(0–2) Fifth engineer / junior engineer
(1–3) Oiler (unlicensed qualified rating)
(0–3) Greaser (unlicensed qualified rating)
(1–5) Entry-level rating (such as wiper, utilityman, etc.)
Many American ships also carry a motorman. Other possible positions include machinist, electrician, refrigeration engineer, and tankerman. Engine cadets are engineer trainees who are completing sea time necessary before they can obtain a watchkeeping license.
Steward's department
A typical steward's department for a cargo ship would be composed of a Chief Steward, a chief cook, and a Steward's Assistant. All three positions are typically filled by unlicensed personnel. The chief steward directs, instructs, and assigns personnel performing such functions as preparing and serving meals; cleaning and maintaining officers' quarters and steward department areas; and receiving, issuing, and inventorying stores. On large passenger vessels, the Catering Department is headed by the Chief Purser and managed by Assistant Pursers. Although they enjoy the benefits of having officer rank, they generally progress through the ranks to become pursers. Under the pursers are the department heads – such as chief cook, head waiter, head barman etc. They are responsible for the administration of their own areas.
The chief steward also plans menus and compiles supply, overtime, and cost control records. They may requisition or purchase stores and equipment. They may bake bread, rolls, cakes, pies, and pastries. A chief steward's duties may overlap with those of the Steward's Assistant, the chief cook, and other Steward's Department crewmembers.
In the United States Merchant Marine, a chief steward must have a Merchant Mariner's Document issued by the United States Coast Guard. Because of international law, conventions, and agreements, all chief cooks who sail internationally are similarly documented by their respective countries.
Other departments
Staff officer positions on a ship, including Junior Assistant Purser, Senior Assistant Purser, Purser, Chief Purser, Medical Doctor, Professional Nurse, Marine Physician Assistant, and hospital corpsman, are considered administrative positions and are therefore regulated by Certificates of Registry issued by the United States Coast Guard. Pilots are also merchant marine officers and are licensed by the Coast Guard. Formerly, there was also a radio department, headed by a chief radio officer and supported by a number of radio officers. Since the introduction of GMDSS (Satellite communications) and the subsequent exemptions from carrying radio officers if the vessel is so equipped, this department has fallen away, although many ships do still carry specialist radio officers, particularly passenger vessels. Many radio officers became 'electro-technical officers', and transferred into the engine department.
Life at sea
Mariners spend much of their life beyond the reach of land. They sometimes face dangerous conditions at sea or on lakes – the fishing port of Gloucester, Massachusetts has a seaside memorial listing over 10,000 fishermen who lost their lives to the sea, and the Great Lakes have seen over 10,000 lost vessels since the 1800s, yet men and women still go to sea. For some, the attraction is a life unencumbered with the restraints of life ashore. Seagoing adventure and a chance to see the world also appeal to many seafarers. Whatever the calling, those who live and work at sea invariably confront social isolation.
Findings by the Seafarer's International Research Center indicate a leading cause of mariners leaving the industry is "almost invariably because they want to be with their families." U.S. merchant ships typically do not allow family members to accompany seafarers on voyages. Industry experts increasingly recognize isolation, stress, and fatigue as occupational hazards. Advocacy groups such as International Labour Organization, a United Nations agency, and the Nautical Institute are seeking improved international standards for mariners. Satellite phones have improved communication and efficiency aboard sea-faring ships. This technology has contributed to crew welfare, although both equipment and fees are expensive.
Ocean voyages are steeped in routine. Maritime tradition dictates that each day be divided into six four-hour periods. Three groups of watch keepers from the engine and deck departments work four hours on then have eight hours off watch keeping. However, there are many overtime jobs to be done daily. This cycle repeats endlessly, 24 hours a day while the ship is at sea. Members of the steward department typically are day workers who put in at least eight-hour shifts. Operations at sea, including repairs, safeguarding against piracy, securing cargo, underway replenishment, and other duties provide opportunities for overtime work. Service aboard ships typically extends for months at a time, followed by protracted shore leave. However, some seamen secure jobs on ships they like and stay aboard for years.
The quick turnaround of many modern ships, spending only a few hours in port, limits a seafarer's free-time ashore. Moreover, some foreign seamen entering U.S. ports from a watch list of 25 countries face restrictions on shore leave due to maritime security concerns. However, shore leave restrictions while in U.S. ports impact American seamen as well. For example, the International Organization of Masters, Mates & Pilots notes a trend of U.S. shipping terminal operators restricting seamen from traveling from the ship to the terminal gate. Furthermore, in cases where transit is allowed, special "security fees" are at times assessed.
Such restrictions on shore leave, coupled with reduced time in port, translate into longer periods at sea. Mariners report that extended periods at sea living and working with shipmates, who for the most part are strangers, takes getting used to. At the same time, there is an opportunity to meet people from other ethnic and cultural backgrounds. Recreational opportunities have improved aboard some U.S. ships, which may feature gyms and day rooms for watching movies, swapping sea stories, and other activities. And in some cases, especially tankers, it is possible for a mariner to be accompanied by members of his family. However, a mariner's off-duty time is largely a solitary affair, pursuing hobbies, reading, writing letters, and sleeping.
On modern ocean-going vessels, typically registered with a flag of convenience, life has changed immensely in the last 20 years. Most large vessels include a gym and often a swimming pool for use by the crew. Since the Exxon Valdez incident, the focus of leisure time activity has shifted from having officer and crew bars, to simply having lounge-style areas where officers or crew can sit to watch movies. With many companies now providing TVs and DVD players in cabins, and enforcing strict smoking policies, it is not surprising that the bar is now a much quieter place on most ships. In some instances games consoles are provided for the officers and crew. The officers enjoy a much higher standard of living on board ocean-going vessels.
Crews are generally poorly paid, poorly qualified and have to complete contracts of approximately 9 months before returning home on leave. They often come from countries where the average industrial wage is still very low, such as the Philippines or India. Officers however, come from all over the world and it is not uncommon to mix the nationality of the officers on board ships. Officers are often the recipients of university degrees and have completed vast amounts of training in order to reach their rank. Officers benefit e.g. by having larger, more comfortable cabins and table service for their meals.
Contracts for officers average around four months, with generous leave. Most ocean-going vessels now operate an unmanned engine room system allowing engineers to work days only. The engine room is computer controlled by night, although the duty engineer will make inspections during unmanned operation. Engineers work in a hot, humid, noisy atmosphere. Communication in the engine room is therefore by hand signals and lip-reading, and good teamwork often stands in place of any communication at all.
Environmental impact
The environmental impact of shipping includes greenhouse gas emissions, acoustic, and oil pollution. The International Maritime Organization (IMO) estimates that Carbon dioxide emissions from shipping were equal to 2.2% of the global human-made emissions in 2012 and expects them to rise 50 to 250 percent by 2050 if no action is taken. The IEA forecasts that ammonia will meet approximately 45% of shipping fuel demands by 2050.
The 2021 European Maritime Transport Environmental Report assessed the sector's environmental impact. Shipping contributes 13.5% of EU transport emissions, trailing road transport (71%) and aviation (14.4%). SO2 emissions from ships have declined due to stricter regulations. Maritime activities have significantly increased underwater noise and contributed to the spread of non-indigenous species. Despite increased oil transport, only eight major oil spills occurred in EU waters in the past decade.
Infrastructure
For a port to efficiently send and receive cargo, it requires infrastructure: docks, bollards, pilings, cranes, bulk cargo handling equipment, and so on – equipment and organization supporting the role of the facilities. From pier to pier these may differ, one dock handling intermodal transport needs (container-ships linked to rail by cranes); another bulk handling capabilities (such as conveyors, elevators, tanks, pumps) for loading and unloading bulk cargoes like grain, coal, or fuels. Others may be outfitted as passenger terminals or for mixed mode operations.
Generally, harbors, seaports and marinas all host watercraft, and consist of components such as piers, wharfs, docks and roadsteads.
| Technology | Maritime transport | null |
28825 | https://en.wikipedia.org/wiki/Submarine | Submarine | A submarine (or sub) is a watercraft capable of independent operation underwater. (It differs from a submersible, which has more limited underwater capability.) The term “submarine” is also sometimes used historically or informally to refer to remotely operated vehicles and robots, or to medium-sized or smaller vessels (such as the midget submarine and the wet sub). Submarines are referred to as boats rather than ships regardless of their size.
Although experimental submarines had been built earlier, submarine design took off during the 19th century, and submarines were adopted by several navies. They were first used widely during World War I (1914–1918), and are now used in many navies, large and small. Their military uses include: attacking enemy surface ships (merchant and military) or other submarines; aircraft carrier protection; blockade running; nuclear deterrence; stealth operations in denied areas when gathering intelligence and doing reconnaissance; denying or influencing enemy movements; conventional land attacks (for example, launching a cruise missile); and covert insertion of frogmen or special forces. Their civilian uses include: marine science; salvage; exploration; and facility inspection and maintenance. Submarines can be modified for specialized functions such as search-and-rescue missions and undersea cable repair. They are also used in the tourism industry and in undersea archaeology. Modern deep-diving submarines derive from the bathyscaphe, which evolved from the diving bell.
Most large submarines consist of a cylindrical body with hemispherical (or conical) ends and a vertical structure, usually located amidships, which houses communications and sensing devices as well as periscopes. In modern submarines, this structure is called the "sail" in American usage and "fin" in European usage. A feature of earlier designs was the "conning tower": a separate pressure hull above the main body of the boat that enabled the use of shorter periscopes. There is a propeller (or pump jet) at the rear, and various hydrodynamic control fins. Smaller, deep-diving, and specialty submarines may deviate significantly from this traditional design. Submarines dive and resurface by using diving planes and by changing the amount of water and air in ballast tanks to affect their buoyancy.
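A minimal sketch of the ballast-tank idea (all masses and volumes are invented): the buoyant force is fixed by the hull's displaced volume, so flooding or blowing the tanks changes only the boat's weight and thereby the sign of the net vertical force.

```python
RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

hull_volume_m3 = 5000.0   # displaced volume of the submerged hull (illustrative)
dry_mass_kg = 4.9e6       # boat with ballast tanks full of air (illustrative)

def net_force(ballast_water_kg):
    """Positive means upward: buoyancy is fixed by displaced volume, weight grows as tanks flood."""
    buoyancy = RHO_SEAWATER * hull_volume_m3 * G
    weight = (dry_mass_kg + ballast_water_kg) * G
    return buoyancy - weight

print(net_force(0.0) > 0)      # True: tanks empty, the boat tends to rise
print(net_force(3.0e5) < 0)    # True: ~300 t of water aboard, the boat tends to sink
```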
Submarines encompass a wide range of types and capabilities. They range from small, autonomous examples, such as one- or two-person subs that operate for a few hours, to vessels that can remain submerged for six months, such as the Russian Typhoon class (the biggest submarines ever built). Submarines can work at depths that are greater than what is practicable (or even survivable) for human divers.
History
Etymology
The word submarine means 'underwater' or 'under-sea' (as in submarine canyon, submarine pipeline), though as a noun it generally refers to a vessel that can travel underwater. The term is a contraction of submarine boat, and occurs as such in several languages, e.g. French and Spanish, although others retain the original term, such as Dutch, German, Swedish, and Russian, all of which use words meaning 'submarine boat'. By naval tradition, submarines are usually referred to as boats rather than as ships, regardless of their size. Although referred to informally as boats, U.S. submarines employ the designation USS (United States Ship) at the beginning of their names. In the Royal Navy, the designation HMS can refer to "His Majesty's Ship" or "His Majesty's Submarine", though the latter is sometimes rendered "HMS/m" and submarines are generally referred to as boats rather than ships.
Early human-powered submersibles
16th and 17th centuries
According to a report in Opusculum Taisnieri published in 1562:
In 1578, the English mathematician William Bourne recorded in his book Inventions or Devises one of the first plans for an underwater navigation vehicle. A few years later the Scottish mathematician and theologian John Napier wrote in his Secret Inventions (1596) that "These inventions besides devises of sayling under water with divers other devises and strategems for harming of the enemyes by the Grace of God and worke of expert Craftsmen I hope to perform." It is unclear whether he carried out his idea.
Jerónimo de Ayanz y Beaumont (1553–1613) created detailed designs for two types of air-renovated submersible vehicles. They were equipped with oars, autonomous floating snorkels worked by inner pumps, portholes and gloves used for the crew to manipulate underwater objects. Ayanz planned to use them for warfare, using them to approach enemy ships undetected and set up timed gunpowder charges on their hulls.
The first submersible of whose construction there exists reliable information was designed and built in 1620 by Cornelis Drebbel, a Dutchman in the service of James I of England. It was propelled by means of oars.
18th century
By the mid-18th century, over a dozen patents for submarines/submersible boats had been granted in England. In 1747, Nathaniel Symons patented and built the first known working example of the use of a ballast tank for submersion. His design used leather bags that could fill with water to submerge the craft. A mechanism was used to twist the water out of the bags and cause the boat to resurface. In 1749, the Gentlemen's Magazine reported that a similar design had initially been proposed by Giovanni Borelli in 1680. Further design improvement stagnated for over a century, until application of new technologies for propulsion and stability.
The first military submersible was Turtle (1775), a hand-powered acorn-shaped device designed by the American David Bushnell to accommodate a single person. It was the first verified submarine capable of independent underwater operation and movement, and the first to use screws for propulsion.
19th century
In 1800, France built Nautilus, a human-powered submarine designed by American Robert Fulton. They gave up on the experiment in 1804, as did the British, when they reconsidered Fulton's submarine design.
In 1850, Wilhelm Bauer's Brandtaucher was built in Germany. It remains the oldest known surviving submarine in the world.
In 1864, late in the American Civil War, the Confederate navy's H. L. Hunley became the first military submarine to sink an enemy vessel, the Union sloop-of-war USS Housatonic, using a gunpowder-filled keg on a spar as a torpedo charge. The Hunley also sank. The explosion's shock waves may have killed its crew instantly, preventing them from pumping the bilge or propelling the submarine.
In 1866, Sub Marine Explorer was the first submarine to successfully dive, cruise underwater, and resurface under the crew's control. The design by German American Julius H. Kroehl (in German, Kröhl) incorporated elements that are still used in modern submarines.
In 1866, Flach was built at the Chilean government's request by Karl Flach, a German engineer and immigrant. It was the fifth submarine built in the world and, along with a second submarine, was intended to defend the port of Valparaiso against attack by the Spanish Navy during the Chincha Islands War.
Mechanically powered submarines
Submarines could not be put into widespread or routine service use by navies until suitable engines were developed. The era from 1863 to 1904 marked a pivotal time in submarine development, and several important technologies appeared. A number of nations built and used submarines. Diesel electric propulsion became the dominant power system and equipment such as the periscope became standardized. Countries conducted many experiments on effective tactics and weapons for submarines, which led to their large impact in World War I.
1863–1904
The first submarine not relying on human power for propulsion was the French Plongeur (Diver), launched in 1863, which used compressed air at . Narcís Monturiol designed the first air-independent and combustion-powered submarine, Ictíneo II, which was launched in Barcelona, Spain in 1864.
The submarine became feasible as potential weapon with the development of the Whitehead torpedo, designed in 1866 by British engineer Robert Whitehead, the first practical self-propelled torpedo. The spar torpedo that had been developed earlier by the Confederate States Navy was considered to be impracticable, as it was believed to have sunk both its intended target, and H. L. Hunley, the submarine that deployed it.
The Irish inventor John Philip Holland built a model submarine in 1876 and in 1878 demonstrated the Holland I prototype. This was followed by a number of unsuccessful designs. In 1896, he designed the Holland Type VI submarine, which used internal combustion engine power on the surface and electric battery power underwater. Launched on 17 May 1897 at Navy Lt. Lewis Nixon's Crescent Shipyard in Elizabeth, New Jersey, Holland VI was purchased by the United States Navy on 11 April 1900, becoming the Navy's first commissioned submarine, christened USS Holland.
Discussions between the English clergyman and inventor George Garrett and the Swedish industrialist Thorsten Nordenfelt led to the first practical steam-powered submarines, armed with torpedoes and ready for military use. The first was Nordenfelt I, a 56-tonne vessel similar to Garrett's ill-fated Resurgam (1879), with a range of , armed with a single torpedo, in 1885.
A reliable means of propulsion for the submerged vessel was only made possible in the 1880s with the advent of the necessary electric battery technology. The first electrically powered boats were built by Isaac Peral y Caballero in Spain (who built Peral), Dupuy de Lôme (who built Gymnote) and Gustave Zédé (who built Sirène) in France, and James Franklin Waddington (who built Porpoise) in England. Peral's design featured torpedoes and other systems that later became standard in submarines.
Commissioned in June 1900, the French steam and electric Narval employed the now typical double-hull design, with a pressure hull inside the outer shell. These 200-ton ships had a range of over underwater. The French submarine Aigrette in 1904 further improved the concept by using a diesel rather than a gasoline engine for surface power. Large numbers of these submarines were built, with seventy-six completed before 1914.
The Royal Navy commissioned five s from Vickers, Barrow-in-Furness, under licence from the Holland Torpedo Boat Company from 1901 to 1903. Construction of the boats took longer than anticipated, with the first only ready for a diving trial at sea on 6 April 1902. Although the design had been purchased entirely from the US company, the actual design used was an untested improvement to the original Holland design using a new petrol engine.
These types of submarines were first used during the Russo-Japanese War of 1904–05. Due to the blockade at Port Arthur, the Russians sent their submarines to Vladivostok, where by 1 January 1905 there were seven boats, enough to create the world's first "operational submarine fleet". The new submarine fleet began patrols on 14 February, usually lasting for about 24 hours each. The first confrontation with Japanese warships occurred on 29 April 1905 when the Russian submarine Som was fired upon by Japanese torpedo boats, but then withdrew.
World War I
Military submarines first made a significant impact in World War I. Forces such as the U-boats of Germany saw action in the First Battle of the Atlantic, and were responsible for sinking RMS Lusitania, which was sunk as a result of unrestricted submarine warfare and is often cited among the reasons for the entry of the United States into the war.
At the outbreak of the war, Germany had only twenty submarines available for combat, although these included vessels of the diesel-engined U-19 class, which had a sufficient range of and speed of to allow them to operate effectively around the entire British coast. By contrast, the Royal Navy had a total of 74 submarines, though of mixed effectiveness. In August 1914, a flotilla of ten U-boats sailed from their base in Heligoland to attack Royal Navy warships in the North Sea in the first submarine war patrol in history.
The U-boats' ability to function as practical war machines relied on new tactics, their numbers, and submarine technologies such as combination diesel–electric power system developed in the preceding years. More submersibles than true submarines, U-boats operated primarily on the surface using regular engines, submerging occasionally to attack under battery power. They were roughly triangular in cross-section, with a distinct keel to control rolling while surfaced, and a distinct bow. During World War I more than 5,000 Allied ships were sunk by U-boats.
The British responded to the German developments in submarine technology with the creation of the K-class submarines. However, these submarines were notoriously dangerous to operate due to their various design flaws and poor maneuverability.
World War II
During World War II, Germany used submarines to devastating effect in the Battle of the Atlantic, where it attempted to cut Britain's supply routes by sinking more merchant ships than Britain could replace. These merchant ships were vital to supply Britain's population with food, industry with raw material, and armed forces with fuel and armaments. Although the U-boats had been updated in the interwar years, the major innovation was improved communications, encrypted using the Enigma cipher machine. This allowed for mass-attack naval tactics (Rudeltaktik, commonly known as "wolfpack"), which ultimately ceased to be effective when the U-boat's Enigma was cracked. By the end of the war, almost 3,000 Allied ships (175 warships, 2,825 merchantmen) had been sunk by U-boats. Although successful early in the war, Germany's U-boat fleet suffered heavy casualties, losing 793 U-boats and about 28,000 submariners out of 41,000, a casualty rate of about 70%.
The Imperial Japanese Navy operated the most varied fleet of submarines of any navy, including Kaiten crewed torpedoes, midget submarines ( and es), medium-range submarines, purpose-built supply submarines and long-range fleet submarines. They also had submarines with the highest submerged speeds during World War II (s) and submarines that could carry multiple aircraft (s). They were also equipped with one of the most advanced torpedoes of the conflict, the oxygen-propelled Type 95. Despite this technical prowess, however, Japan chose to use its submarines for fleet warfare, and they were consequently relatively unsuccessful, as warships were fast, maneuverable and well-defended compared to merchant ships.
The submarine force was the most effective anti-ship weapon in the American arsenal. Submarines, though only about 2 percent of the U.S. Navy, destroyed over 30 percent of the Japanese Navy, including 8 aircraft carriers, 1 battleship and 11 cruisers. US submarines also destroyed over 60 percent of the Japanese merchant fleet, crippling Japan's ability to supply its military forces and industrial war effort. Allied submarines in the Pacific War destroyed more Japanese shipping than all other weapons combined. This feat was considerably aided by the Imperial Japanese Navy's failure to provide adequate escort forces for the nation's merchant fleet.
During World War II, 314 submarines served in the US Navy, of which nearly 260 were deployed to the Pacific. When the Japanese attacked Hawaii in December 1941, 111 boats were in commission; 203 submarines from the , , and es were commissioned during the war. During the war, 52 US submarines were lost to all causes, with 48 directly due to hostilities. US submarines sank 1,560 enemy vessels, a total tonnage of 5.3 million tons (55% of the total sunk).
The Royal Navy Submarine Service was used primarily in the classic Axis blockade. Its major operating areas were around Norway, in the Mediterranean (against the Axis supply routes to North Africa), and in the Far East. In that war, British submarines sank 2 million tons of enemy shipping and 57 major warships, the latter including 35 submarines. Among these is the only documented instance of a submarine sinking another submarine while both were submerged. This occurred when engaged ; the Venturer crew manually computed a successful firing solution against a three-dimensionally maneuvering target using techniques which became the basis of modern torpedo computer targeting systems. Seventy-four British submarines were lost, the majority, forty-two, in the Mediterranean.
Cold-War military models
The first launch of a cruise missile (SSM-N-8 Regulus) from a submarine occurred in July 1953, from the deck of , a World War II fleet boat modified to carry the missile with a nuclear warhead. Tunny and its sister boat, , were the United States' first nuclear deterrent patrol submarines. In the 1950s, nuclear power partially replaced diesel–electric propulsion. Equipment was also developed to extract oxygen from sea water. These two innovations gave submarines the ability to remain submerged for weeks or months. Most of the naval submarines built since that time in the US, the Soviet Union (now Russia), the UK, and France have been powered by a nuclear reactor.
In 1959–1960, the first ballistic missile submarines were put into service by both the United States () and the Soviet Union () as part of the Cold War nuclear deterrent strategy.
During the Cold War, the US and the Soviet Union maintained large submarine fleets that engaged in cat-and-mouse games. The Soviet Union lost at least four submarines during this period: was lost in 1968 (a part of which the CIA retrieved from the ocean floor with the Howard Hughes-designed ship Glomar Explorer), in 1970, in 1986, and in 1989 (which held a depth record among military submarines—). Many other Soviet subs, such as (the first Soviet nuclear submarine, and the first Soviet sub to reach the North Pole) were badly damaged by fire or radiation leaks. The US lost two nuclear submarines during this time: due to equipment failure during a test dive while at its operational limit, and due to unknown causes.
During the Indo-Pakistani War of 1971, the Pakistan Navy's sank the Indian frigate . This was the first sinking by a submarine since World War II. During the same war, , a Tench-class submarine on loan to Pakistan from the US, was sunk by the Indian Navy. It was the first submarine combat loss since World War II. In 1982 during the Falklands War, the Argentine cruiser was sunk by the British submarine , the first sinking by a nuclear-powered submarine in war. Some weeks later, on 16 June, during the Lebanon War, an unnamed Israeli submarine torpedoed and sank the Lebanese coaster Transit, which was carrying 56 Palestinian refugees to Cyprus, in the belief that the vessel was evacuating anti-Israeli militias. The ship was hit by two torpedoes, managed to run aground but eventually sank. There were 25 dead, including her captain. The Israeli Navy disclosed the incident in November 2018.
Usage
Military
Before and during World War II, the primary role of the submarine was anti-surface ship warfare. Submarines would attack either on the surface using deck guns, or submerged using torpedoes. They were particularly effective in sinking Allied transatlantic shipping in both World Wars, and in disrupting Japanese supply routes and naval operations in the Pacific in World War II.
Mine-laying submarines were developed in the early part of the 20th century. This capability was used in both World Wars. Submarines were also used for inserting and removing covert agents and military forces in special operations, for intelligence gathering, and to rescue aircrew during air attacks on islands, where the airmen would be told of safe places to crash-land so the submarines could rescue them. Submarines could carry cargo through hostile waters or act as supply vessels for other submarines.
Submarines could usually locate and attack other submarines only on the surface, although managed to sink with a four-torpedo spread while both were submerged. The British developed a specialized anti-submarine submarine in WWI, the R class. After WWII, with the development of the homing torpedo, better sonar systems, and nuclear propulsion, submarines also became able to hunt each other effectively.
The development of submarine-launched ballistic missiles and submarine-launched cruise missiles gave submarines a substantial and long-ranged ability to attack both land and sea targets with a variety of weapons ranging from cluster bombs to nuclear weapons.
The primary defense of a submarine lies in its ability to remain concealed in the depths of the ocean. Early submarines could be detected by the sound they made. Water is an excellent conductor of sound (much better than air), and submarines can detect and track comparatively noisy surface ships from long distances. Modern submarines are built with an emphasis on stealth. Advanced propeller designs, extensive sound-reducing insulation, and special machinery help a submarine remain as quiet as ambient ocean noise, making them difficult to detect. It takes specialized technology to find and attack modern submarines.
Active sonar uses the reflection of sound emitted from the search equipment to detect submarines. It has been used since WWII by surface ships, submarines and aircraft (via dropped buoys and helicopter "dipping" arrays), but it reveals the emitter's position, and is susceptible to counter-measures.
A concealed military submarine is a real threat, and because of its stealth, can force an enemy navy to waste resources searching large areas of ocean and protecting ships against attack. This advantage was vividly demonstrated in the 1982 Falklands War when the British nuclear-powered submarine sank the Argentine cruiser . After the sinking the Argentine Navy recognized that they had no effective defense against submarine attack, and the Argentine surface fleet withdrew to port for the remainder of the war. An Argentine submarine remained at sea, however.
Civilian
Although the majority of the world's submarines are military, there are some civilian submarines, which are used for tourism, exploration, oil and gas platform inspections, and pipeline surveys. Some are also used in illegal activities.
The Submarine Voyage ride opened at Disneyland in 1959, but although it ran under water, it was not a true submarine, as it ran on tracks and was open to the atmosphere. The first tourist submarine was , which went into service in 1964 at Expo64. By 1997, there were 45 tourist submarines operating around the world. Submarines with a crush depth in the range of are operated in several areas worldwide, typically with bottom depths around , with a carrying capacity of 50 to 100 passengers.
In a typical operation a surface vessel carries passengers to an offshore operating area and loads them into the submarine. The submarine then visits underwater points of interest such as natural or artificial reef structures. To surface safely without danger of collision, the location of the submarine is marked with an air release, and movement to the surface is coordinated by an observer in a support craft.
A recent development is the deployment of so-called narco-submarines by South American drug smugglers to evade law enforcement detection. Although they occasionally deploy true submarines, most are self-propelled semi-submersibles, where a portion of the craft remains above water at all times. In September 2011, Colombian authorities seized a 16-meter-long submersible that could hold a crew of 5, costing about $2 million. The vessel belonged to FARC rebels and had the capacity to carry at least 7 tonnes of drugs.
Polar operations
1903 – Simon Lake submarine Protector surfaced through ice off Newport, Rhode Island.
1930 – operated under ice near Spitsbergen.
1937 – Soviet submarine Krasnogvardeyets operated under ice in the Denmark Strait.
1941–45 – German U-boats operated under ice from the Barents Sea to the Laptev Sea.
1946 – used upward-beamed fathometer in Operation Nanook in the Davis Strait.
1946–47 – used under-ice sonar in Operation High Jump in the Antarctic.
1947 – used upward-beamed echo sounder under pack ice in the Chukchi Sea.
1948 – developed techniques for making vertical ascents and descents through polynyas in the Chukchi Sea.
1952 – used an expanded upward-beamed sounder array in the Beaufort Sea.
1957 – reached 87 degrees north near Spitsbergen.
3 August 1958 – Nautilus used an inertial navigation system to reach the North Pole.
17 March 1959 – surfaced through the ice at the north pole.
1960 – transited under ice over the shallow ( deep) Bering-Chukchi shelf.
1960 – transited the Northwest Passage under ice.
1962 – Soviet reached the north pole.
1970 – carried out an extensive undersea mapping survey of the Siberian continental shelf.
1971 – reached the North Pole.
conducted three Polar Exercises: 1976 (with US actor Charlton Heston aboard); 1984 joint operations with ; and 1990 joint exercises with .
6 May 1986 – , and met and surfaced together at the Geographic North Pole, the first three-submarine surfacing at the Pole.
19 May 1987 – joined and at the North Pole.
March 2007 – participated in the Joint US Navy/Royal Navy Ice Exercise 2007 (ICEX-2007) in the Arctic Ocean with the .
March 2009 – took part in Ice Exercise 2009 to test submarine operability and war-fighting capability in Arctic conditions.
Technology
Buoyancy and trim
All surface ships, as well as surfaced submarines, are in a positively buoyant condition, weighing less than the water they would displace if fully submerged. To submerge hydrostatically, a ship must have negative buoyancy, either by increasing its own weight or decreasing its displacement of water. To control their displacement and weight, submarines have ballast tanks, which can hold varying amounts of water and air.
For general submersion or surfacing, submarines use the main ballast tanks (MBTs), which are ambient pressure tanks, filled with water to submerge or with air to surface. While submerged, MBTs generally remain flooded, which simplifies their design, and on many submarines, these tanks are a section of the space between the light hull and the pressure hull. For more precise control of depth, submarines use smaller depth control tanks (DCTs)—also called hard tanks (due to their ability to withstand higher pressure) or trim tanks. These are variable buoyancy pressure vessels, a type of buoyancy control device. The amount of water in depth control tanks can be adjusted to hydrostatically change depth or to maintain a constant depth as outside conditions (mainly water density) change. Depth control tanks may be located either near the submarine's center of gravity, to minimise the effect on trim, or separated along the length of the hull so they can also be used to adjust static trim by transfer of water between them.
When submerged, the water pressure on a submarine's hull can reach for steel submarines and up to for titanium submarines like , while interior pressure remains relatively unchanged. This difference results in hull compression, which decreases displacement. Water density also marginally increases with depth, as the salinity and pressure are higher. This change in density incompletely compensates for hull compression, so buoyancy decreases as depth increases. A submerged submarine is in an unstable equilibrium, having a tendency to either sink or float to the surface. Keeping a constant depth requires continual operation of either the depth control tanks or control surfaces.
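This instability can be made explicit with a schematic force balance (an illustrative formulation, not a design calculation). Writing m for the boat's mass, V(z) for its displaced volume and \rho(z) for the local water density at depth z, the net upward force is

\[ F(z) = g\left[\rho(z)\,V(z) - m\right]. \]

Because hull compression reduces V(z) faster than the slight rise in \rho(z) can compensate, the product \rho(z)V(z) falls with increasing depth. A boat trimmed to F = 0 that is nudged slightly deeper therefore feels a net downward force and keeps sinking unless the depth control tanks or planes intervene; the same argument in reverse applies to an upward disturbance.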
Submarines in a neutral buoyancy condition are not intrinsically trim-stable. To maintain desired longitudinal trim, submarines use forward and aft trim tanks. Pumps move water between the tanks, changing weight distribution and pitching the sub up or down. A similar system may be used to maintain transverse trim.
Control surfaces
The hydrostatic effect of variable ballast tanks is not the only way to control the submarine underwater. Hydrodynamic maneuvering is done by several control surfaces, collectively known as diving planes or hydroplanes, which can be moved to create hydrodynamic forces when a submarine moves longitudinally at sufficient speed. In the classic cruciform stern configuration, the horizontal stern planes serve the same purpose as the trim tanks, controlling the trim. Most submarines additionally have forward horizontal planes, normally placed on the bow until the 1960s but often on the sail on later designs, where they are closer to the center of gravity and can control depth with less effect on the trim.
An obvious way to configure the control surfaces at the stern of a submarine is to use vertical planes to control yaw and horizontal planes to control pitch, which gives them the shape of a cross when seen from astern of the vessel. In this configuration, which long remained the dominant one, the horizontal planes are used to control the trim and depth and the vertical planes to control sideways maneuvers, like the rudder of a surface ship.
Alternatively, the rear control surfaces can be combined into what has become known as an X-stern or an X-form rudder. Although less intuitive, such a configuration has turned out to have several advantages over the traditional cruciform arrangement. First, it improves maneuverability, horizontally as well as vertically. Second, the control surfaces are less likely to get damaged when landing on, or departing from, the seabed as well as when mooring and unmooring alongside. Finally, it is safer in that one of the two diagonal lines can counteract the other with respect to vertical as well as horizontal motion if one of them accidentally gets stuck.
The X-stern was first tried in practice in the early 1960s on the USS Albacore, an experimental submarine of the US Navy. While the arrangement was found to be advantageous, it was not used on the US production submarines that followed, because it requires the use of a computer to manipulate the control surfaces to the desired effect (a simplified illustration of such control allocation is sketched below). Instead, the first to use an X-stern in standard operations was the Swedish Navy with its Sjöormen class, the lead submarine of which was launched in 1967, before the Albacore had even finished her test runs. Since it turned out to work very well in practice, all subsequent classes of Swedish submarines (Näcken, Västergötland, Gotland, and Blekinge) have been, or will be, fitted with an X-rudder.
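The need for computer assistance follows from the geometry: on an X-stern no single plane corresponds to "pitch" or "yaw", so every command has to be distributed across all four surfaces. The following sketch shows one minimal way such control allocation could look; the mixing matrix, sign conventions and least-squares allocation are illustrative assumptions, not the scheme used on any actual boat.

import numpy as np

# Idealized X-stern control allocation (illustrative only; the geometry,
# gains and sign conventions below are assumptions, not data from a real
# submarine). Each column is one of the four diagonal planes; each row is
# the pitch, yaw or roll moment produced by a unit deflection of that plane.
B = np.array([
    [ 1.0,  1.0, -1.0, -1.0],   # pitch: net vertical force at the stern
    [-1.0,  1.0,  1.0, -1.0],   # yaw:   net horizontal force at the stern
    [ 1.0,  1.0,  1.0,  1.0],   # roll:  net tangential force about the hull axis
])

def allocate(pitch_cmd, yaw_cmd, roll_cmd=0.0):
    """Return the four plane deflections that best realize the commanded
    pitch/yaw/roll moments in a least-squares sense."""
    cmd = np.array([pitch_cmd, yaw_cmd, roll_cmd])
    deltas, *_ = np.linalg.lstsq(B, cmd, rcond=None)
    return deltas

# Example: pitch the bow down while turning to starboard, with no roll command.
print(allocate(pitch_cmd=-1.0, yaw_cmd=0.5))

On a cruciform stern the corresponding matrix would be essentially diagonal, stern planes for pitch and rudder for yaw, which is why that arrangement can be steered directly by a helmsman without electronic assistance.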
The Kockums shipyard responsible for the design of the X-stern on Swedish submarines eventually exported it to Australia with the Collins class and to Japan with the Sōryū class. With the introduction of the Type 212, the German and Italian navies came to feature it as well. The US Navy with its Columbia class, the Royal Navy with its Dreadnought class, and the French Navy with its Barracuda class are all about to join the X-stern family. As of the early 2020s, the X-stern therefore appears set to become the dominant configuration.
When a submarine performs an emergency surfacing, all depth and trim control methods are used simultaneously, together with propelling the boat upwards. Such surfacing is very quick, so the vessel may even partially jump out of the water, potentially damaging submarine systems.
Hull
Overview
Modern submarines are cigar-shaped. This design, also used in very early submarines, is sometimes called a "teardrop hull". It reduces hydrodynamic drag when the sub is submerged, but decreases the sea-keeping capabilities and increases drag while surfaced. Since the limitations of the propulsion systems of early submarines forced them to operate surfaced most of the time, their hull designs were a compromise. Because of the slow submerged speeds of those subs, usually well below 10 kt (18 km/h), the increased drag for underwater travel was acceptable. Late in World War II, when technology allowed faster and longer submerged operation and increased aircraft surveillance forced submarines to stay submerged, hull designs became teardrop-shaped again to reduce drag and noise. was a unique research submarine that pioneered the American version of the teardrop hull form (sometimes referred to as an "Albacore hull") of modern submarines. On modern military submarines the outer hull is covered with a layer of sound-absorbing rubber, or anechoic plating, to reduce detection.
The occupied pressure hulls of deep-diving submarines such as are spherical instead of cylindrical. This allows a more even distribution of stress and efficient use of materials to withstand external pressure as it gives the most internal volume for structural weight and is the most efficient shape to avoid buckling instability in compression. A frame is usually affixed to the outside of the pressure hull, providing attachment for ballast and trim systems, scientific instrumentation, battery packs, syntactic flotation foam, and lighting.
A raised tower on top of a standard submarine accommodates the periscope and electronics masts, which can include radio, radar, electronic warfare, and other systems. It might also include a snorkel mast. In many early classes of submarines (see history), the control room, or "conn", was located inside this tower, which was known as the "conning tower". Since then, the conn has been located within the hull of the submarine, and the tower is now called the "sail" or "fin". The conn is distinct from the "bridge", a small open platform in the top of the sail, used for observation during surface operation.
"Bathtubs" are related to conning towers but are used on smaller submarines. The bathtub is a metal cylinder surrounding the hatch that prevents waves from breaking directly into the cabin. It is needed because surfaced submarines have limited freeboard, that is, they lie low in the water. Bathtubs help prevent swamping the vessel.
Single and double hulls
Modern submarines and submersibles usually have, as did the earliest models, a single hull. Large submarines generally have an additional hull or hull sections outside. This external hull, which actually forms the shape of the submarine, is called the outer hull (casing in the Royal Navy) or light hull, as it does not have to withstand a pressure difference. Inside the outer hull there is a strong hull, or pressure hull, which withstands sea pressure and has normal atmospheric pressure inside.
As early as World War I, it was realized that the optimal shape for withstanding pressure conflicted with the optimal shape for seakeeping and minimal drag at the surface, and construction difficulties further complicated the problem. This was solved either by a compromise shape, or by using two layered hulls: the internal strength hull for withstanding pressure, and an external fairing for hydrodynamic shape. Until the end of World War II, most submarines had an additional partial casing on the top, bow and stern, built of thinner metal, which was flooded when submerged. Germany went further with the Type XXI, a general predecessor of modern submarines, in which the pressure hull was fully enclosed inside the light hull, but optimized for submerged navigation, unlike earlier designs that were optimized for surface operation.
After World War II, approaches split. The Soviet Union changed its designs, basing them on German developments. All post-World War II heavy Soviet and Russian submarines are built with a double hull structure. American and most other Western submarines switched to a primarily single-hull approach. They still have light hull sections in the bow and stern, which house main ballast tanks and provide a hydrodynamically optimized shape, but the main cylindrical hull section has only a single plating layer. Double hulls are being considered for future submarines in the United States to improve payload capacity, stealth and range.
Pressure hull
The pressure hull is generally constructed of thick high-strength steel with a complex structure and high strength reserve, and is separated by watertight bulkheads into several compartments. There are also examples of more than two hulls in a submarine, like the , which has two main pressure hulls and three smaller ones for control room, torpedoes and steering gear, with the missile launch system between the main hulls, all surrounded and supported by the outer light hydrodynamic hull. When submerged the pressure hull provides most of the buoyancy for the whole vessel.
The dive depth cannot be increased easily. Simply making the hull thicker increases the structural weight and requires reduction of onboard equipment weight, and increasing the diameter requires a proportional increase in thickness for the same material and architecture, ultimately resulting in a pressure hull that does not have sufficient buoyancy to support its own weight, as in a bathyscaphe. This is acceptable for civilian research submersibles, but not military submarines, which need to carry a large equipment, crew, and weapons load to fulfill their function. Construction materials with greater specific strength and specific modulus are needed.
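The proportionality between diameter and required thickness can be seen from the thin-walled cylinder approximation (an idealization for illustration; real pressure hulls are ring-stiffened and sized against buckling as well as yield). For an external pressure P acting on a cylinder of radius r and wall thickness t, the compressive hoop stress is approximately

\[ \sigma \approx \frac{P\,r}{t}, \qquad\text{so}\qquad t \approx \frac{P\,r}{\sigma_\text{allow}}. \]

For a fixed material and design depth, doubling the radius therefore doubles the required thickness. Likewise, for a fixed radius, the required thickness grows in proportion to the design pressure, so hull weight per unit length (roughly proportional to r t) rises linearly with depth while the buoyancy of the enclosed volume (proportional to r squared) stays the same; beyond some depth the shell can no longer float its own weight, which is the bathyscaphe problem noted above.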
WWI submarines had hulls of carbon steel, with a maximum depth. During WWII, high-strength alloyed steel was introduced, allowing depths. High-strength alloy steel remains the primary material for submarines today, with depths, which cannot be exceeded on a military submarine without design compromises. To exceed that limit, a few submarines were built with titanium hulls. Titanium alloys can be stronger than steel, lighter, and most importantly, have higher immersed specific strength and specific modulus. Titanium is also not ferromagnetic, which is important for stealth. Titanium submarines were built by the Soviet Union, which developed specialized high-strength alloys and produced several types of titanium boats. Titanium alloys allow a major increase in depth, but other systems must be redesigned to cope, so test depth was limited to for the , the deepest-diving combat submarine. An may have successfully operated at , though continuous operation at such depths would produce excessive stress on many submarine systems. Titanium does not flex as readily as steel, and may become brittle after many dive cycles. Despite its benefits, the high cost of titanium construction led to the abandonment of titanium submarines as the Cold War ended. Deep-diving civilian submarines have used thick acrylic pressure hulls. Although the specific strength and specific modulus of acrylic are not very high, its density is only 1.18 g/cm³, so it is only very slightly denser than water, and the buoyancy penalty of increased thickness is correspondingly low.
The deepest deep-submergence vehicle (DSV) to date is Trieste. On 5 October 1959, Trieste departed San Diego for Guam aboard the freighter Santa Maria to participate in Project Nekton, a series of very deep dives in the Mariana Trench. On 23 January 1960, Trieste reached the ocean floor in the Challenger Deep (the deepest southern part of the Mariana Trench), carrying Jacques Piccard (son of Auguste) and Lieutenant Don Walsh, USN. This was the first time a vessel, crewed or uncrewed, had reached the deepest point in the Earth's oceans. The onboard systems indicated a depth of , although this was later revised to and more accurate measurements made in 1995 have found the Challenger Deep slightly shallower, at .
Building a pressure hull is difficult, as it must withstand pressures at its required diving depth. When the hull is perfectly round in cross-section, the pressure is evenly distributed, and causes only hull compression. If the shape is not perfect, the hull deflects more in some places, and buckling instability is the usual failure mode. Inevitable minor deviations are resisted by stiffener rings, but even a one-inch (25 mm) deviation from roundness results in a decrease of over 30 percent in the maximal hydrostatic load and consequently dive depth. The hull must therefore be constructed with high precision. All hull parts must be welded without defects, and all joints are checked multiple times with different methods, contributing to the high cost of modern submarines. (For example, each attack submarine costs US$2.6 billion, over US$200,000 per ton of displacement.)
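For orientation only, the classical result for an idealized, unstiffened thin ring of radius R, wall thickness t and elastic modulus E under uniform external pressure gives a collapse pressure of

\[ p_\text{cr} = \frac{3EI}{R^{3}} = \frac{E\,t^{3}}{4R^{3}}, \qquad I = \frac{t^{3}}{12} \ \text{per unit length}. \]

Real hulls are ring-stiffened and fail by more complicated modes, but the cubic dependence on t/R illustrates why elastic buckling, rather than simple compressive yield, tends to set the limit, and why small departures from circularity that reduce the effective stiffness are so damaging.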
Propulsion
The first submarines were propelled by humans. The first mechanically driven submarine was the 1863 French , which used compressed air for propulsion. Anaerobic propulsion was first employed by the Spanish Ictineo II in 1864, which used a solution of zinc, manganese dioxide, and potassium chlorate to generate sufficient heat to power a steam engine, while also providing oxygen for the crew. A similar system was not employed again until 1940 when the German Navy tested a hydrogen peroxide-based system, the Walter turbine, on the experimental V-80 submarine and later on the naval and type XVII submarines; the system was further developed for the British , completed in 1958.
Until the advent of nuclear marine propulsion, most 20th-century submarines used electric motors and batteries for running underwater and combustion engines on the surface, and for battery recharging. Early submarines used gasoline (petrol) engines but this quickly gave way to kerosene (paraffin) and then diesel engines because of reduced flammability and, with diesel, improved fuel-efficiency and thus also greater range. A combination of diesel and electric propulsion became the norm.
Initially, the combustion engine and the electric motor were in most cases connected to the same shaft so that both could directly drive the propeller. The combustion engine was placed at the front end of the stern section with the electric motor behind it followed by the propeller shaft. The engine was connected to the motor by a clutch and the motor in turn connected to the propeller shaft by another clutch.
With only the rear clutch engaged, the electric motor could drive the propeller, as required for fully submerged operation. With both clutches engaged, the combustion engine could drive the propeller, as was possible when operating on the surface or, at a later stage, when snorkeling. The electric motor would in this case serve as a generator to charge the batteries or, if no charging was needed, be allowed to rotate freely. With only the front clutch engaged, the combustion engine could drive the electric motor as a generator for charging the batteries without simultaneously forcing the propeller to move.
The motor could have multiple armatures on the shaft, which could be electrically coupled in series for slow speed and in parallel for high speed (these connections were called "group down" and "group up", respectively).
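The effect of the grouping follows from the basic DC machine relation that back-EMF is proportional to speed (an idealized picture that ignores resistive drops and field weakening). With two identical armatures on one shaft and a battery voltage V_b,

\[ \text{series ("group down"): } V_\text{arm} \approx \tfrac{1}{2}V_b, \qquad \text{parallel ("group up"): } V_\text{arm} \approx V_b, \]

so the shaft turns at roughly half the speed in series as in parallel, while drawing correspondingly less current from the battery for quiet, economical running.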
Diesel–electric transmission
While most early submarines used a direct mechanical connection between the combustion engine and the propeller, an alternative solution was considered as well as implemented at a very early stage. That solution consists in first converting the work of the combustion engine into electric energy via a dedicated generator. This energy is then used to drive the propeller via the electric motor and, to the extent required, for charging the batteries. In this configuration, the electric motor is thus responsible for driving the propeller at all times, regardless of whether air is available so that the combustion engine can also be used or not.
Among the pioneers of this alternative solution was the very first submarine of the Swedish Navy, (later renamed Ub no 1), launched in 1904. While its design was generally inspired by the first submarine commissioned by the US Navy, USS Holland, it deviated from the latter in at least three significant ways: by adding a periscope, by replacing the gasoline engine by a semidiesel engine (a hot-bulb engine primarily meant to be fueled by kerosene, later replaced by a true diesel engine) and by severing the mechanical link between the combustion engine and the propeller by instead letting the former drive a dedicated generator. By so doing, it took three significant steps toward what was eventually to become the dominant technology for conventional (i.e., non-nuclear) submarines.
In the following years, the Swedish Navy added another seven submarines in three different classes (, , and ) using the same propulsion technology but fitted with true diesel engines rather than semidiesels from the outset. Since by that time, the technology was usually based on the diesel engine rather than some other type of combustion engine, it eventually came to be known as diesel–electric transmission.
Like many other early submarines, those initially designed in Sweden were quite small (less than 200 tonnes) and thus confined to littoral operation. When the Swedish Navy wanted to add larger vessels, capable of operating further from the shore, their designs were purchased from companies abroad that already had the required experience: first Italian (Fiat-Laurenti) and later German (A.G. Weser and IvS). As a side-effect, the diesel–electric transmission was temporarily abandoned.
However, diesel–electric transmission was immediately reintroduced when Sweden began designing its own submarines again in the mid-1930s. From that point onwards, it has been consistently used for all new classes of Swedish submarines, albeit supplemented by air-independent propulsion (AIP) as provided by Stirling engines beginning with HMS Näcken in 1988.
Another early adopter of diesel–electric transmission was the US Navy, whose Bureau of Engineering proposed its use in 1928. It was subsequently tried in the S-class submarines , , and before being put into production with the Porpoise class of the 1930s. From that point onwards, it continued to be used on most US conventional submarines.
Apart from the British U-class and some submarines of the Imperial Japanese Navy that used separate diesel generators for low speed running, few navies other than those of Sweden and the US made much use of diesel–electric transmission before 1945. After World War II, by contrast, it gradually became the dominant mode of propulsion for conventional submarines. However, its adoption was not always swift. Notably, the Soviet Navy did not introduce diesel–electric transmission on its conventional submarines until 1980 with its Paltus class.
If diesel–electric transmission had only brought advantages and no disadvantages in comparison with a system that mechanically connects the diesel engine to the propeller, it would undoubtedly have become dominant much earlier. The disadvantages include the following:
It entails a loss of fuel-efficiency as well as power by converting the output of the diesel engine into electricity. While both generators and electric motors are known to be very efficient, their efficiency nevertheless falls short of 100 percent.
It requires an additional component in the form of a dedicated generator. Since the electric motor is always used to drive the propeller it can no longer step in to take on generator service as well.
It does not allow the diesel engine and the electrical motor to join forces by simultaneously driving the propeller mechanically for maximum speed when the submarine is surfaced or snorkeling. This may, however, be of little practical importance inasmuch as the option it prevents is one that would leave the submarine at a risk of having to dive with its batteries at least partly depleted.
The reason why diesel–electric transmission has become the dominant alternative in spite of these disadvantages is of course that it also comes with many advantages and that, on balance, these have eventually been found to be more important. The advantages include the following:
It reduces external noise by severing the direct and rigid mechanical link between the relatively noisy diesel engine(s) on the one hand and the propeller shaft(s) and hull on the other. With stealth being of paramount importance to submarines, this is a very significant advantage.
It increases the readiness to dive, which is of course of vital importance for a submarine. The only thing required from a propulsion point of view is to shut down the diesel(s).
It makes the speed of the diesel engine(s) temporarily independent of the speed of the submarine. This in turn often makes it possible to run the diesel(s) at close to optimal speed from a fuel-efficiency as well as durability point of view. It also makes it possible to reduce the time spent surfaced or snorkeling by running the diesel(s) at maximum speed without affecting the speed of the submarine itself.
It eliminates the clutches otherwise required to connect the diesel engine, the electric motor, and the propeller shaft. This in turn saves space, increases reliability and reduces maintenance costs.
It increases flexibility with regard to how the driveline components are configured, positioned, and maintained. For example, the diesel no longer has to be aligned with the electric motor and propeller shaft, two diesels can be used to power a single propeller (or vice versa), and one diesel can be turned off for maintenance as long as a second is available to provide the required amount of electricity.
It facilitates the integration of additional primary sources of energy, beside the diesel engine(s), such as various kinds of air-independent power (AIP) systems. With one or more electric motors always driving the propeller(s), such systems can easily be introduced as yet another source of electric energy in addition to the diesel engine(s) and the batteries.
Snorkel
During World War II the Germans experimented with the idea of the schnorchel (snorkel) from captured Dutch submarines but did not see the need for them until rather late in the war. The schnorchel is a retractable pipe that supplies air to the diesel engines while submerged at periscope depth, allowing the boat to cruise and recharge its batteries while maintaining a degree of stealth.
Especially as first implemented, however, it turned out to be far from a perfect solution. There were problems with the device's valve sticking shut or closing as it dunked in rough weather. Since the system used the entire pressure hull as a buffer, the diesels would instantaneously suck huge volumes of air from the boat's compartments, and the crew often suffered painful ear injuries. Speed was limited to , lest the device snap from stress. The schnorchel also created noise that made the boat easier to detect with sonar, yet more difficult for the on-board sonar to detect signals from other vessels. Finally, Allied radar eventually became sufficiently advanced that the schnorchel mast could be detected beyond visual range.
The snorkel thus renders a submarine far less detectable, but it is not perfect. In clear weather, diesel exhausts can be seen on the surface to a distance of about three miles, while "periscope feather" (the wave created by the snorkel or periscope moving through the water) is visible from far off in calm sea conditions. Modern radar is also capable of detecting a snorkel in calm sea conditions.
The problem of the diesels causing a vacuum in the submarine when the head valve is submerged still exists in later model diesel submarines but is mitigated by high-vacuum cut-off sensors that shut down the engines when the vacuum in the ship reaches a pre-set point. Modern snorkel induction masts have a fail-safe design using compressed air, controlled by a simple electrical circuit, to hold the "head valve" open against the pull of a powerful spring. Seawater washing over the mast shorts out exposed electrodes on top, breaking the control, and shutting the "head valve" while it is submerged. US submarines did not adopt the use of snorkels until after WWII.
Air-independent propulsion
During World War II, German Type XXI submarines (also known as "Elektroboote") were the first submarines designed to operate submerged for extended periods. Initially they were to carry hydrogen peroxide for long-term, fast air-independent propulsion, but were ultimately built with very large batteries instead. At the end of the War, the British and Soviets experimented with hydrogen peroxide/kerosene (paraffin) engines that could run surfaced and submerged. The results were not encouraging. Though the Soviet Union deployed a class of submarines with this engine type (codenamed by NATO), they were considered unsuccessful.
The United States also used hydrogen peroxide in an experimental midget submarine, X-1. It was originally powered by a hydrogen peroxide/diesel engine and battery system until an explosion of her hydrogen peroxide supply on 20 May 1957. X-1 was later converted to use diesel–electric drive.
Today several navies use air-independent propulsion. Notably, Sweden uses Stirling technology on the and s. The Stirling engine is heated by burning diesel fuel with liquid oxygen from cryogenic tanks. A newer development in air-independent propulsion is hydrogen fuel cells, first used on the German Type 212 submarine, with nine 34 kW or two 120 kW cells. Fuel cells are also used in the new Spanish s, although with the fuel stored as ethanol and then converted into hydrogen before use.
One new technology being introduced, starting with the Japanese Navy's eleventh Sōryū-class submarine (JS Ōryū), is the lithium-ion battery. These batteries store roughly twice as much energy as traditional lead-acid batteries. By replacing the lead-acid batteries in their normal storage areas and filling the large hull space normally devoted to the AIP engine and fuel tanks with many tons of lithium-ion cells, modern submarines can return to a "pure" diesel–electric configuration yet retain the added underwater range and power normally associated with AIP-equipped submarines.
Nuclear power
Steam power was resurrected in the 1950s with a nuclear-powered steam turbine driving a generator. By eliminating the need for atmospheric oxygen, the time that a submarine could remain submerged was limited only by its food stores, as breathing air was recycled and fresh water distilled from seawater. More importantly, a nuclear submarine has unlimited range at top speed. This allows it to travel from its operating base to the combat zone in a much shorter time and makes it a far more difficult target for most anti-submarine weapons. Nuclear-powered submarines have a relatively small battery and diesel engine/generator powerplant for emergency use if the reactors must be shut down.
Nuclear power is now used in all large submarines, but due to the high cost and large size of nuclear reactors, smaller submarines still use diesel–electric propulsion. The ratio of larger to smaller submarines depends on strategic needs. The US Navy, French Navy, and the British Royal Navy operate only nuclear submarines, which is explained by the need for distant operations. Other major operators rely on a mix of nuclear submarines for strategic purposes and diesel–electric submarines for defense. Most fleets have no nuclear submarines, due to the limited availability of nuclear power and submarine technology.
Diesel–electric submarines have a stealth advantage over their nuclear counterparts. Nuclear submarines generate noise from coolant pumps and turbo-machinery needed to operate the reactor, even at low power levels. Some nuclear submarines such as the American can operate with their reactor coolant pumps secured, making them quieter than electric subs. A conventional submarine operating on batteries is almost completely silent, the only noise coming from the shaft bearings, propeller, and flow noise around the hull, all of which stops when the sub hovers in mid-water to listen, leaving only the noise from crew activity. Commercial submarines usually rely only on batteries, since they operate in conjunction with a mother ship.
Several serious nuclear and radiation accidents have involved nuclear submarine mishaps. The reactor accident in 1961 resulted in 8 deaths and more than 30 other people were over-exposed to radiation. The reactor accident in 1968 resulted in 9 fatalities and 83 other injuries. The accident in 1985 resulted in 10 fatalities and 49 other radiation injuries.
Alternative
Oil-fired steam turbines powered the British K-class submarines, built during World War I and later, to give them the surface speed to keep up with the battle fleet. The K-class subs were not very successful, however.
Toward the end of the 20th century, some submarines—such as the British Vanguard class—began to be fitted with pump-jet propulsors instead of propellers. Though these are heavier, more expensive, and less efficient than a propeller, they are significantly quieter, providing an important tactical advantage.
Armament
The success of the submarine is inextricably linked to the development of the torpedo, invented by Robert Whitehead in 1866. His invention, essentially the same now as it was 140 years ago, allowed the submarine to make the leap from novelty to a weapon of war. Prior to the development and miniaturization of sonar sensitive enough to track a submerged submarine, attacks were restricted to ships and submarines operating near or at the surface. Targeting of unguided torpedoes was initially done by eye, but by World War II analog targeting computers began to proliferate, able to calculate basic firing solutions. Nonetheless, multiple "straight-running" torpedoes could be required to ensure a target was hit. With at most 20 to 25 torpedoes stored on board, the number of attacks a submarine could make was limited. To increase combat endurance, starting in World War I submarines also functioned as submersible gunboats, using their deck guns against unarmed targets and diving to escape and engage enemy warships. The initial importance of these deck guns encouraged the development of the unsuccessful submarine cruiser, such as the French and the Royal Navy's and M-class submarines. With the arrival of anti-submarine warfare (ASW) aircraft, guns became more for defense than attack. A more practical method of increasing combat endurance was the external torpedo tube, loaded only in port.
The ability of submarines to approach enemy harbours covertly led to their use as minelayers. Minelaying submarines of World War I and World War II were specially built for that purpose. Modern submarine-laid mines, such as the British Mark 5 Stonefish and Mark 6 Sea Urchin, can be deployed from a submarine's torpedo tubes.
After World War II, both the US and the USSR experimented with submarine-launched cruise missiles such as the SSM-N-8 Regulus and P-5 Pyatyorka. Such missiles required the submarine to surface before firing. They were the forerunners of modern submarine-launched cruise missiles, which can be fired from the torpedo tubes of submerged submarines, for example, the US BGM-109 Tomahawk and Russian RPK-2 Viyuga and versions of surface-to-surface anti-ship missiles such as the Exocet and Harpoon, encapsulated for submarine launch. Ballistic missiles can also be fired from a submarine's torpedo tubes, for example the anti-submarine SUBROC. With internal volume as limited as ever and the desire to carry heavier warloads, the idea of the external launch tube was revived, usually for encapsulated missiles, with such tubes being placed between the internal pressure and outer streamlined hulls. Guided torpedoes also proliferated extensively during and after World War II, even further increasing the combat endurance and lethality of submarines and allowing them to engage other submarines at depth (with the latter now being one of the primary missions of the modern attack submarine).
The strategic mission of the SSM-N-8 and the P-5 was taken up by submarine-launched ballistic missiles beginning with the US Navy's Polaris missile, and subsequently the Poseidon and Trident missiles.
Germany is working on the torpedo tube-launched short-range IDAS missile, which can be used against ASW helicopters, as well as surface ships and coastal targets.
Sensors
A submarine can have a variety of sensors, depending on its missions. Modern military submarines rely almost entirely on a suite of passive and active sonars to locate targets. Active sonar relies on an audible "ping" to generate echoes to reveal objects around the submarine. Active systems are rarely used, as doing so reveals the sub's presence. Passive sonar is a set of sensitive hydrophones set into the hull or trailed in a towed array, normally trailing several hundred feet behind the sub. The towed array is the mainstay of NATO submarine detection systems, as it reduces the flow noise heard by operators. Hull-mounted sonar is employed in addition to the towed array, as the towed array cannot work at shallow depth or during maneuvering. In addition, sonar has a blind spot "through" the submarine, so a system on both the front and back works to eliminate that problem. As the towed array trails behind and below the submarine, it also allows the submarine to have a system both above and below the thermocline at the proper depth; sound passing through the thermocline is distorted, resulting in a lower detection range.
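The distortion at the thermocline is essentially refraction. For sound crossing a boundary between layers with sound speeds c_1 and c_2, Snell's law (with angles measured from the vertical) gives

\[ \frac{\sin\theta_1}{c_1} = \frac{\sin\theta_2}{c_2}, \]

so rays bend toward the layer with the lower sound speed. This bending can create shadow zones on the far side of the thermocline, which in simplified terms is why a sensor kept on only one side of it suffers a reduced detection range.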
Submarines also carry radar equipment to detect surface ships and aircraft. Submarine captains are more likely to use radar detection gear than active radar to detect targets, as radar can be detected far beyond its own return range, revealing the submarine. Periscopes are rarely used, except for position fixes and to verify a contact's identity.
Civilian submarines, such as the or the Russian Mir submersibles, rely on small active sonar sets and viewing ports to navigate. The human eye cannot detect sunlight below about underwater, so high intensity lights are used to illuminate the viewing area.
Navigation
Early submarines had few navigation aids, but modern subs have a variety of navigation systems. Modern military submarines use an inertial guidance system for navigation while submerged, but drift error unavoidably builds over time. To counter this, the crew occasionally uses the Global Positioning System to obtain an accurate position. The periscope—a retractable tube with a prism system that provides a view of the surface—is only used occasionally in modern submarines, since the visibility range is short. The and s use photonics masts rather than hull-penetrating optical periscopes. These masts must still be deployed above the surface, and use electronic sensors for visible light, infrared, laser range-finding, and electromagnetic surveillance. One benefit to hoisting the mast above the surface is that while the mast is above the water the entire sub is still below the water and is much harder to detect visually or by radar.
Communication
Military submarines use several systems to communicate with distant command centers or other ships. One is VLF (very low frequency) radio, which can reach a submarine either on the surface or submerged to a fairly shallow depth, usually less than . ELF (extremely low frequency) can reach a submarine at greater depths, but has a very low bandwidth and is generally used to call a submerged sub to a shallower depth where VLF signals can reach. A submarine also has the option of floating a long, buoyant wire antenna to a shallower depth, allowing VLF transmissions by a deeply submerged boat.
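The difference between VLF and ELF penetration can be illustrated with the electromagnetic skin depth of seawater (an order-of-magnitude sketch; the conductivity value is an assumed typical figure, and practical reception depths also depend on transmitter power and receiver sensitivity):

\[ \delta = \sqrt{\frac{2}{\omega\mu_0\sigma}} = \frac{1}{\sqrt{\pi f \mu_0 \sigma}}, \]

where the field decays by a factor of e for every depth \delta. Taking \sigma of roughly 4 S/m, this gives \delta on the order of a couple of metres at VLF frequencies (tens of kilohertz) but tens of metres at ELF frequencies (tens of hertz), which is why ELF can reach far deeper even though its bandwidth is tiny.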
By extending a radio mast, a submarine can also use a "burst transmission" technique. A burst transmission takes only a fraction of a second, minimizing a submarine's risk of detection.
To communicate with other submarines, a system known as Gertrude is used. Gertrude is basically a sonar telephone. Voice communication from one submarine is transmitted by low power speakers into the water, where it is detected by passive sonars on the receiving submarine. The range of this system is probably very short, and using it radiates sound into the water, which can be heard by the enemy.
Civilian submarines can use similar, albeit less powerful systems to communicate with support ships or other submersibles in the area.
Life support systems
With nuclear power or air-independent propulsion, submarines can remain submerged for months at a time. Conventional diesel submarines must periodically resurface or run on snorkel to recharge their batteries. Most modern military submarines generate breathing oxygen by electrolysis of fresh water (using a device called an "Electrolytic Oxygen Generator"). Emergency oxygen can be produced by burning sodium chlorate candles. Atmosphere control equipment includes a carbon dioxide scrubber, which uses a spray of monoethanolamine (MEA) absorbent to remove the gas from the air, after which the MEA is heated in a boiler to release the CO2, which is then pumped overboard. Emergency scrubbing can also be done with lithium hydroxide, which is consumable. A machine that uses a catalyst to convert carbon monoxide into carbon dioxide (removed by the scrubber) and to bond hydrogen produced from the ship's storage battery with oxygen in the atmosphere to produce water is also used. An atmosphere monitoring system samples the air from different areas of the ship for nitrogen, oxygen, hydrogen, R-12 and R-114 refrigerants, carbon dioxide, carbon monoxide, and other gases. Poisonous gases are removed, and oxygen is replenished by use of an oxygen bank located in a main ballast tank. Some heavier submarines have two oxygen bleed stations (forward and aft). The oxygen in the air is sometimes kept a few percent less than atmospheric concentration to reduce fire risk.
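The chemistry behind these systems is straightforward (reactions shown for illustration; the exact hardware varies by class). Electrolysis splits distilled water, chlorate candles release oxygen when ignited, and the MEA scrubber relies on a reversible, temperature-dependent absorption of carbon dioxide:

\[ 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \]
\[ 2\,\mathrm{NaClO_3} \rightarrow 2\,\mathrm{NaCl} + 3\,\mathrm{O_2} \]
\[ \mathrm{CO_2} + 2\,\mathrm{HOCH_2CH_2NH_2} \rightleftharpoons \mathrm{HOCH_2CH_2NHCOO^-} + \mathrm{HOCH_2CH_2NH_3^+} \]

The last reaction is driven back to the left when the amine is heated in the boiler, releasing the captured CO2 for discharge overboard.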
Fresh water is produced by either an evaporator or a reverse osmosis unit. The primary use for fresh water is to provide feedwater for the reactor and steam propulsion plants. It is also available for showers, sinks, cooking and cleaning once propulsion plant needs have been met. Seawater is used to flush toilets, and the resulting "blackwater" is stored in a sanitary tank until it is blown overboard using pressurized air or pumped overboard by using a special sanitary pump. The blackwater-discharge system requires skill to operate, and isolation valves must be closed before discharge. The German Type VIIC boat was lost with casualties because of human error while using this system. Water from showers and sinks is stored separately in "grey water" tanks and discharged overboard using drain pumps.
Trash on modern large submarines is usually disposed of using a tube called a Trash Disposal Unit (TDU), where it is compacted into a galvanized steel can. At the bottom of the TDU is a large ball valve. An ice plug is set on top of the ball valve to protect it, the cans atop the ice plug. The top breech door is shut, and the TDU is flooded and equalized with sea pressure, the ball valve is opened and the cans fall out assisted by scrap iron weights in the cans. The TDU is also flushed with seawater to ensure it is completely empty and the ball valve is clear before closing the valve.
Crew
A typical nuclear submarine has a crew of over 80; conventional boats typically have fewer than 40. The conditions on a submarine can be difficult because crew members must work in isolation for long periods of time, without family contact, and in cramped conditions. Submarines normally maintain radio silence to avoid detection. Operating a submarine is dangerous, even in peacetime, and many submarines have been lost in accidents.
Women
Most navies prohibited women from serving on submarines, even after they had been permitted to serve on surface warships. The Royal Norwegian Navy became the first navy to allow women on its submarine crews in 1985. The Royal Danish Navy allowed female submariners in 1988. Others followed suit including the Swedish Navy (1989), the Royal Australian Navy (1998), the Spanish Navy (1999), the German Navy (2001) and the Canadian Navy (2002). In 1995, Solveig Krey of the Royal Norwegian Navy became the first female officer to assume command on a military submarine, HNoMS Kobben.
On 8 December 2011, British Defence Secretary Philip Hammond announced that the UK's ban on women in submarines was to be lifted from 2013. Previously there had been fears that women were more at risk from a build-up of carbon dioxide in the submarine, but a study showed no medical reason to exclude women, though pregnant women would still be excluded. Similar dangers to the pregnant woman and her fetus barred women from submarine service in Sweden in 1983, when all other positions in the Swedish Navy were opened to them. Today, pregnant women are still not allowed to serve on submarines in Sweden. However, policymakers considered a general ban discriminatory and demanded that women be judged on their individual merits, with their suitability evaluated and compared against other candidates; they further noted that a woman meeting such high demands would be unlikely to become pregnant. In May 2014, three women became the RN's first female submariners.
Women have served on US Navy surface ships since 1993, and in began serving on submarines for the first time. Until then, the Navy had allowed only three exceptions to women being on board military submarines: female civilian technicians for a few days at most, women midshipmen on an overnight stay during summer training for Navy ROTC and the Naval Academy, and family members for one-day dependent cruises. In 2009, senior officials, including then-Secretary of the Navy Ray Mabus, Chairman of the Joint Chiefs of Staff Admiral Michael Mullen, and Chief of Naval Operations Admiral Gary Roughead, began the process of finding a way to bring women onto submarines. The US Navy rescinded its "no women on subs" policy in 2010.
Both the US and British navies operate nuclear-powered submarines that deploy for periods of six months or longer. Other navies that permit women to serve on submarines operate conventionally powered submarines, which deploy for much shorter periods—usually only for a few months. Prior to the change by the US, no nation using nuclear submarines permitted women to serve on board.
In 2011, the first class of female submarine officers graduated from Naval Submarine School's Submarine Officer Basic Course (SOBC) at the Naval Submarine Base New London. Additionally, more senior and experienced female supply officers from the surface warfare specialty attended SOBC as well, proceeding to fleet ballistic missile (SSBN) and guided missile (SSGN) submarines along with the new female submarine line officers beginning in late 2011. By late 2011, several women had been assigned to an Ohio-class ballistic missile submarine. On 15 October 2013, the US Navy announced that two of the smaller Virginia-class attack submarines would have female crew members by January 2015.
In 2020, Japan's national naval submarine academy accepted its first female candidate.
Abandoning the vessel
In an emergency, submarines can contact other ships to assist in rescue, and pick up the crew when they abandon ship. The crew can use escape sets such as the Submarine Escape Immersion Equipment to abandon the submarine via an escape trunk, which is a small airlock compartment that provides a route for crew to escape from a downed submarine at ambient pressure in small groups, while minimising the amount of water admitted to the submarine. The crew can avoid pulmonary barotrauma, a lung injury caused by over-expansion of air in the lungs as the surrounding pressure falls, by maintaining an open airway and exhaling during the ascent. Following escape from a pressurized submarine, in which the air pressure is higher than atmospheric due to water ingress or other reasons, the crew is at risk of developing decompression sickness on return to surface pressure.
An alternative means of escape is via a deep-submergence rescue vehicle that can dock onto the disabled submarine, establish a seal around the escape hatch, and transfer personnel at the same pressure as the interior of the submarine. If the submarine has been pressurised, the survivors can lock into a decompression chamber on the submarine rescue ship and transfer under pressure for safe surface decompression.
| Technology | Naval warfare | null |
28837 | https://en.wikipedia.org/wiki/Siege%20tower | Siege tower | A siege tower or breaching tower (or, in the Middle Ages, a belfry) is a specialized siege engine constructed to protect assailants and ladders while approaching the defensive walls of a fortification. The tower was often rectangular, with four wheels, and its height was roughly equal to that of the wall or sometimes higher, allowing archers or crossbowmen to stand on top of the tower and shoot arrows or quarrels into the fortification. Because the towers were wooden and thus flammable, they had to have some non-flammable covering of iron or fresh animal skins.
Evidence for use of siege towers in Ancient Egypt and Anatolia dates to the Bronze Age. They were used extensively in warfare of the ancient Near East after the Late Bronze Age collapse, and in Egypt by Kushites from Sudan who founded the 25th dynasty. During classical antiquity they were common among Hellenistic Greek armies of the 4th century BC and later Roman armies of Europe and the Mediterranean, while also seeing use in ancient China during the Warring States Period and Han dynasty. Siege towers were of unwieldy dimensions and, like trebuchets, were therefore mostly constructed at the site of the siege. Taking considerable time to construct, siege towers were mainly built if the defense of the opposing fortification could not be overcome by ladder assault ("escalade"), by mining, or by breaking walls or gates with tools such as battering rams.
The siege tower sometimes housed spearmen, pikemen, and swordsmen or archers and crossbowmen, who shot arrows and quarrels at the defenders. Because of the size of the tower it would often be the first target of large stone catapults, but it had its own projectiles with which to retaliate.
Siege towers were used to get troops over an enemy curtain wall. When a siege tower was near a wall, it would drop a gangplank between it and the wall. Troops could then rush onto the walls and into the castle or city. Some siege towers also had battering rams which they used to bash down the defensive walls around a city or a castle gate.
Ancient use
In the First Intermediate Period tomb of General Intef at Thebes (modern Luxor, Egypt), a mobile siege tower is shown in the battle scenes. In modern Harpoot, Turkey, a carved stone relief in the Akkadian artistic style, dated to circa 2000 BC, was found depicting a siege tower, the earliest known visual depiction from Anatolia (although siege towers were later described in Hittite cuneiform writing).
Siege towers were used by the armies of the Neo-Assyrian Empire in the 9th century BC, under Ashurnasirpal II (r. 884 BC – 859 BC). Reliefs from his reign, and subsequent reigns, depict siege towers in use with a number of other siege works, including ramps and battering rams.
Centuries after they were employed in Assyria, the use of the siege tower spread throughout the Mediterranean. During the siege of Memphis in the 8th century BC, siege towers were built by Kush for the army led by Piye (founder of the Nubian 25th dynasty), in order to enhance the efficiency of Kushite archers and slingers.
After leaving Thebes, Piye's first objective was besieging Ashmunein. Dissatisfied with his army's lack of success so far, the King then undertook the personal supervision of operations, including the erection of a siege tower from which Kushite archers could fire down into the city.
During the siege of Syracuse in 413 BC, the Athenians erected a siege tower on a ship's hull. Alexander did the same at Tyre (332 BC), as did Marcellus at Syracuse (214 BC). Towers were used against both land and naval targets. By the time of Agrippa, shipborne towers were built with a lighter, collapsible design that could be stowed flat on the deck when not in use, lowering the center of gravity.
The biggest siege towers of antiquity, such as the Hellenistic Greek Helepolis (meaning "The Taker of Cities" in Greek) used at the siege of Rhodes in 305 BC by Demetrius I of Macedon, could be exceptionally tall and wide. Such large engines required a rack and pinion to be moved effectively. The Helepolis was manned by 200 soldiers and was divided into nine stories; the different levels housed various types of catapults and ballistae. Subsequent siege towers down through the centuries often had similar engines.
However, large siege towers could be defeated by the defenders by flooding the ground in front of the wall, creating a moat that caused the tower to become bogged down in the mud. The siege of Rhodes illustrates the important point that the larger siege towers needed level ground. Many castles and hill-top towns and forts were virtually invulnerable to siege tower attack simply due to topography. Smaller siege towers might be used on top of siege-mounds made of earth, rubble and timber in order to overtop a defensive wall. For example, the remains of such a siege-ramp at Masada, Israel, built by the Romans during the siege of Masada (72–73 AD), have survived and can still be seen today.
On the other hand, almost all the largest cities were on large rivers, or the coast, and so did have part of their circuit wall vulnerable to these towers. Furthermore, the tower for such a target might be prefabricated elsewhere and brought dismantled to the target city by water. In some rare circumstances, such towers were mounted on ships to assault the coastal wall of a city: at the Roman siege of Cyzicus during the Third Mithridatic War, for example, towers were used in conjunction with more conventional siege weapons.
One of the oldest references to the mobile siege tower in Ancient China was a written dialogue primarily discussing naval warfare. In the Chinese Yuejueshu (Lost Records of the State of Yue), written by the Later Han dynasty author Yuan Kang in the year 52 AD, Wu Zixu (526 BC – 484 BC) purportedly discussed different ship types with King Helü of Wu (r. 514 BC – 496 BC) while explaining military preparedness. Before labeling the types of warships used, Wu likened them to familiar types of land-army equipment, including mobile assault towers.
Medieval and later use
With the collapse of the Western Roman Empire into independent states, and the Eastern Roman Empire on the defensive, the use of siege towers reached its height during the medieval period. Siege towers were used when the Avars laid siege unsuccessfully to Constantinople in 626, as the Chronicon Paschale recounts.
At this siege, the attackers also made use of mobile armoured shelters known as sows or cats, which were used throughout the medieval period and allowed workers to fill in moats with protection from the defenders (thus levelling the ground for the siege towers to be moved to the walls). However, the construction of a sloping talus at the base of a castle wall (as was common in crusader fortification) could have reduced the effectiveness of this tactic to an extent.
Siege towers also became more elaborate during the medieval period; at the siege of Kenilworth in 1266, for example, 200 archers and 11 catapults operated from a single tower. Even then, the siege lasted almost a year, making it the longest siege in all of English history. They were not invulnerable either, as during the Fall of Constantinople in 1453, Ottoman siege towers were sprayed by the defenders with Greek fire.
Siege towers became vulnerable and obsolete with the development of large cannon. They had only ever existed to get assaulting troops over high walls and towers, and large cannon also made high walls obsolete as fortification took a new direction. However, later constructions known as battery towers took on a similar role in the gunpowder age; like siege towers, these were built out of wood on-site for mounting siege artillery. One of these was built by the Russian military engineer Ivan Vyrodkov during the siege of Kazan in 1552 (as part of the Russo-Kazan Wars) and could hold ten large-calibre cannon and fifty lighter cannons. It was likely a development of the gulyay-gorod (a mobile fortification assembled on wagons or sleds from prefabricated wall-sized shields with holes for cannons). Later battery towers were often used by the Ukrainian Cossacks.
During the Imjin War, the Japanese utilized siege towers to scale the walls of Jinju but were beaten back several times by Korean cannons. In the early 19th century, the Joseon Army utilized siege towers to lay siege to Jeongju, where the last of Hong Gyeong-rae's rebels made their stand, but was beaten back several times by the rebels.
Modern parallels
In modern warfare, some vehicles used by police tactical units, counterterrorists, and special forces can be fitted with mechanical assault ladders with ramps. These are essentially modernized siege towers with elements of escalade ladders, and are used to raid a structure through its upper levels. These assault ladders are not as large or as tall as their predecessors, and are typically only capable of reaching roughly the third or fourth floor of a structure.
On 1 March 2007, police officers entered Ungdomshuset in Copenhagen, Denmark using boom cranes in a manner similar to siege towers. The officers were placed in containers that the crane operators raised and placed against the structure's windows, from which the officers then entered.
| Technology | Military technology: General | null |
28848 | https://en.wikipedia.org/wiki/Scabies | Scabies | Scabies (also sometimes known as the seven-year itch) is a contagious human skin infestation by the tiny (0.2–0.45 mm) mite Sarcoptes scabiei, variety hominis. The word is from the Latin scabere, "to scratch". The most common symptoms are severe itchiness and a pimple-like rash. Occasionally, tiny burrows may appear on the skin. In a first-ever infection, the infected person usually develops symptoms within two to six weeks. During a second infection, symptoms may begin within 24 hours. These symptoms can be present across most of the body or just in certain areas such as the wrists, between fingers, or along the waistline. The head may be affected, but this is typically only in young children. The itch is often worse at night. Scratching may cause skin breakdown and an additional bacterial infection in the skin.
Scabies is caused by infection with the female mite Sarcoptes scabiei var. hominis, an ectoparasite. The mites burrow into the skin to live and deposit eggs. The symptoms of scabies are due to an allergic reaction to the mites. Often, only between 10 and 15 mites are involved in an infection. Scabies most often spreads during a relatively long period of direct skin contact with an infected person (at least 10 minutes) such as that which may occur during sexual activity or living together. Spread of the disease may occur even if the person has not developed symptoms yet. Crowded living conditions, such as those found in child-care facilities, group homes, and prisons, increase the risk of spread. Areas with a lack of access to water also have higher rates of disease. Crusted scabies is a more severe form of the disease, not essentially different but an infestation by huge numbers of mites that typically only affects those with a poor immune system; the number of mites also makes them much more contagious. In these cases, the spread of infection may occur during brief contact or by contaminated objects. The mite is tiny and at the limit of detection with the human eye. It is not readily obvious; factors that aid in detection are good lighting, magnification, and knowing what to look for. Diagnosis is based either on detecting the mite (confirmed scabies), detecting typical lesions in a typical distribution with typical historical features (clinical scabies), or detecting atypical lesions or atypical distribution of lesions with only some historical features present (suspected scabies).
Several medications are available to treat those infected, including oral and topical ivermectin, permethrin, crotamiton, and lindane creams. Sexual contacts within the last month and people who live in the same house should also be treated at the same time. Bedding and clothing used in the last three days should be washed in hot water and dried in a hot dryer. As the mite does not live for more than three days away from human skin, more washing is not needed. Symptoms may continue for two to four weeks following treatment. If after this time symptoms continue, retreatment may be needed.
Scabies is one of the three most common skin disorders in children, along with ringworm and bacterial skin infections. As of 2015, it affects about 204 million people (2.8% of the world population). It is equally common in both sexes. The young and the old are more commonly affected. It also occurs more commonly in the developing world and tropical climates. Other animals do not spread human scabies; similar infection in other animals is known as sarcoptic mange, and is typically caused by slightly different but related mites.
Signs and symptoms
The characteristic symptoms of a scabies infection include intense itching and superficial burrows. Because the host develops the symptoms as a reaction to the mites' presence over time, typically a delay of four to six weeks occurs between the onset of infestation and the onset of itching. Similarly, symptoms often persist for one to several weeks after successful eradication of the mites. As noted, those re-exposed to scabies after successful treatment may exhibit symptoms of the new infestation in a much shorter period—as little as one to four days.
Itching
In the classic scenario, the itch is made worse by warmth and is usually experienced as being worse at night, possibly because distractions are fewer. As a symptom, it is less common in the elderly.
Rash
The superficial burrows of scabies usually occur in the area of the finger webs, feet, ventral wrists, elbows, back, buttocks, and external genitals. Except in infants and the immunosuppressed, infection generally does not occur in the skin of the face or scalp. The burrows are created by the excavation of the adult mite in the epidermis. Acropustulosis, or blisters and pustules on the palms and soles of the feet, is a characteristic symptom of scabies in infants.
In most people, the trails of the burrowing mites are linear or S-shaped tracks in the skin, often accompanied by rows of small, pimple-like mosquito or insect bites. Lesions are symmetrical and mainly affect the hands, wrists, axillae, thighs, buttocks, waist, soles of the feet, areola, and vulva in females, and penis and scrotum in males. The neck and above are usually not affected, except in cases of crusted scabies and infestations of infants, the elderly, and the immunocompromised. Symptoms typically appear two to six weeks after infestation for individuals never before exposed to scabies. For those having been previously exposed, the symptoms can appear within several days after infestation. However, symptoms may appear after several months or years.
Crusted scabies
The elderly, disabled, and people with impaired immune systems, such as those with HIV/AIDS, cancer, or those on immunosuppressive medications, are susceptible to crusted scabies (also called Norwegian scabies). On those with weaker immune systems, the host becomes a more fertile breeding ground for the mites, which spread over the host's body, except the face. The mites in crusted scabies are not more virulent than in noncrusted scabies but are much more numerous, sometimes up to two million. People with crusted scabies exhibit scaly rashes, slight itching, and thick crusts of skin that contain large numbers of scabies mites. For this reason, persons with crusted scabies are more contagious to others than those with typical scabies. Such areas make eradication of mites particularly difficult, as the crusts protect the mites from topical miticides/scabicides, necessitating prolonged treatment of these areas.
Cause
Scabies mite
In the late 17th century, Italian biologists Giovanni Cosimo Bonomo and Diacinto Cestoni (1637–1718) described the mite now called Sarcoptes scabiei, variety hominis, as the cause of scabies. Sarcoptes is a genus of skin parasites and part of the larger family of mites collectively known as scab mites. These organisms have eight legs as adults and are placed in the same phylogenetic class (Arachnida) as spiders and ticks.
S. scabiei mites are under 0.5 mm in size; they are sometimes visible as pinpoints of white. Gravid females tunnel into the dead, outermost layer (stratum corneum) of a host's skin and deposit eggs in the shallow burrows. The eggs hatch into larvae in three to ten days. These young mites move about on the skin and molt into a "nymphal" stage, before maturing as adults, which live three to four weeks in the host's skin. Males roam on top of the skin, occasionally burrowing into the skin. In general, the total number of adult mites infesting a healthy hygienic person with non-crusted scabies is small, about 11 females in burrows, on average.
The movement of mites within and on the skin produces an intense itch, which has the characteristics of a delayed cell-mediated inflammatory response to allergens. IgE antibodies are present in the serum and the site of infection, which react to multiple protein allergens in the body of the mite. Some of these cross-react to allergens from house dust mites. Immediate antibody-mediated allergic reactions (wheals) have been elicited in infected persons, but not in those not infected; immediate hypersensitivity of this type is thought to explain the observed far more rapid allergic skin response to reinfection seen in persons who have been infected previously, especially within the previous year or two.
Transmission
Scabies is contagious and can be contracted through prolonged physical contact with an infested person. This includes sexual intercourse, although a majority of cases are acquired through other forms of skin-to-skin contact. Less commonly, scabies infestation can happen through the sharing of clothes, towels, and bedding, but this is not a major mode of transmission; individual mites can survive for only two to three days, at most, away from human skin at room temperature. As with lice, a latex condom is ineffective against scabies transmission during intercourse, because mites typically migrate from one individual to the next at sites other than the sex organs.
Healthcare workers are at risk of contracting scabies from patients, because they may be in extended contact with them.
Pathophysiology
The symptoms are caused by an allergic reaction of the host's body to mite proteins, though exactly which proteins remains a topic of study. The mite proteins are also present in the gut, and in mite feces, which are deposited under the skin. The allergic reaction is both of the delayed (cell-mediated) and immediate (antibody-mediated) type, and involves IgE (antibodies are presumed to mediate the very rapid symptoms on reinfection). The allergy-type symptoms (itching) continue for some days, and even several weeks, after all mites are killed. New lesions may appear for a few days after mites are eradicated. Nodular lesions from scabies may continue to be symptomatic for weeks after the mites have been killed.
Rates of scabies are negatively related to temperature and positively related to humidity.
Diagnosis
Scabies may be diagnosed clinically in geographical areas where it is common when diffuse itching presents along with either a lesion in two typical spots or itchiness is present in another household member. The classical sign of scabies is the burrow made by a mite within the skin. To detect the burrow, the suspected area is rubbed with ink from a fountain pen or a topical tetracycline solution, which glows under a special light. The skin is then wiped with an alcohol pad. If the person is infected with scabies, the characteristic zigzag or S pattern of the burrow will appear across the skin; however, interpreting this test may be difficult, as the burrows are scarce and may be obscured by scratch marks. A definitive diagnosis is made by finding either the scabies mites or their eggs and fecal pellets. Searches for these signs involve either scraping a suspected area, mounting the sample in potassium hydroxide and examining it under a microscope, or using dermoscopy to examine the skin directly.
Differential diagnosis
Symptoms of early scabies infestation mirror other skin diseases, including dermatitis, syphilis, erythema multiforme, various urticaria-related syndromes, allergic reactions, ringworm-related diseases, and other ectoparasites such as lice and fleas.
Prevention of passing on scabies to other people
Mass-treatment programs that use topical permethrin or oral ivermectin have been effective in reducing the prevalence of scabies in several populations. No vaccine is available for scabies. The simultaneous treatment of all close contacts is recommended, even if they show no symptoms of infection (asymptomatic), to reduce rates of recurrence. Since mites can survive for only two to three days without a host, other objects in the environment pose little risk of transmission except in the case of crusted scabies. Therefore, cleaning is of little importance. Rooms used by those with crusted scabies require thorough cleaning.
Management
Treatment
Several medications are effective in treating scabies. Treatment should involve the entire household and any others who have had recent, prolonged contact with the infested individual. In addition to treating the infestation, options to control itchiness include antihistamines and prescription anti-inflammatory agents. Bedding, clothing and towels used during the previous three days should be washed in hot water and dried in a hot dryer.
Treatment protocols for crusted scabies are significantly more intense than for common scabies.
Permethrin
Permethrin, a pyrethroid insecticide, is the most effective treatment for scabies, and remains the treatment of choice. It is applied from the neck down, usually before sleep, and left on for about 8 to 14 hours, then washed off in the morning. Care should be taken to coat the entire skin surface, not just symptomatic areas; any patch of skin left untreated can provide a "safe haven" for one or more mites to survive. One application is normally sufficient, as permethrin kills eggs, hatchlings, and adult mites, though many physicians recommend a second application three to seven days later as a precaution. Crusted scabies may require multiple applications or supplemental treatment with oral ivermectin (below). Permethrin may cause slight irritation of the skin that is usually tolerable. In recent years, concern is growing about permethrin-resistant scabies, although some researchers refer to this as pseudo-resistance.
Ivermectin
Oral ivermectin is effective in eradicating scabies, often in a single dose. It is the treatment of choice for crusted scabies, and is sometimes prescribed in combination with a topical agent. It has not been tested on infants, and is not recommended for children under six years of age.
Topical ivermectin preparations are effective for scabies in adults. It has also been useful for sarcoptic mange, the veterinary analog of human scabies.
One review found that the efficacy of permethrin is similar to that of systemic or topical ivermectin. A separate review found that although oral ivermectin is usually effective for the treatment of scabies, it does have a higher treatment failure rate than topical permethrin. Another review found that oral ivermectin provided a reasonable balance between efficacy and safety. A study has demonstrated that scabies is markedly reduced in populations taking ivermectin regularly; the drug is widely used for treating scabies and other parasitic diseases, particularly among the poor and disadvantaged in the tropics, beginning with the developer Merck providing the drug at no cost to treat onchocerciasis from 1987.
Others
Other treatments include lindane, benzyl benzoate, crotamiton, malathion, and sulfur preparations. Lindane is effective, but concerns over potential neurotoxicity have limited its availability in many countries. It is banned in California, but may be used in other states as a second-line treatment. Sulfur ointments or benzyl benzoate are often used in the developing world due to their low cost; some 10% sulfur solutions have been shown to be effective, and sulfur ointments are typically used for at least a week, though many people find the odor of sulfur products unpleasant. Crotamiton has been found to be less effective than permethrin in limited studies. Crotamiton or sulfur preparations are sometimes recommended instead of permethrin for children, due to concerns over dermal absorption of permethrin.
Communities
Scabies is endemic in many developing countries, where it tends to be particularly problematic in rural and remote areas. In such settings, community-wide control strategies are required to reduce the rate of disease, as treatment of only individuals is ineffective due to the high rate of reinfection. Large-scale mass drug administration strategies may be required where coordinated interventions aim to treat whole communities in one concerted effort. Although such strategies have been shown to reduce the burden of scabies in these kinds of communities, debate remains about the best strategy to adopt, including the choice of drug.
The resources required to implement such large-scale interventions in a cost-effective and sustainable way are significant. Furthermore, since endemic scabies is largely restricted to poor and remote areas, it is a public health issue that has not attracted much attention from policymakers and international donors.
Epidemiology
Scabies is one of the three most common skin disorders in children, along with tinea and pyoderma. As of 2010, it affects about 100 million people (1.5% of the population) and its frequency is not related to gender. The mites are distributed around the world and equally infect all ages, races, and socioeconomic classes in different climates. Scabies is more often seen in crowded areas with unhygienic living conditions. Globally as of 2009, an estimated 300 million cases of scabies occur each year, although various parties claim the figure is either over- or underestimated. About 1–10% of the global population is estimated to be infected with scabies, but in certain populations, the infection rate may be as high as 50–80%.
History
Scabies has been observed in humans since ancient times. Archeological evidence from Egypt and the Middle East suggests scabies was present as early as 494 BC. In the fourth century BC, Aristotle reported on "lice" that "escape from little pimples if they are pricked" – a description consistent with scabies. Arab physician Ibn Zuhr is believed to have been the first to provide a clinical description of the scabies mites.
Roman encyclopedist and medical writer Aulus Cornelius Celsus (circa 25 BC – 50 AD) is credited with naming the disease "scabies" and describing its characteristic features. The parasitic etiology of scabies was documented by Italian physician Giovanni Cosimo Bonomo (1663–1696) in his 1687 letter, "Observations concerning the fleshworms of the human body". Bonomo's description established scabies as one of the first human diseases with a well-understood cause.
In Europe in the late 19th through mid-20th centuries, a sulfur-bearing ointment called by the medical eponym of Wilkinson's ointment was widely used for topical treatment of scabies. The contents and origins of several versions of the ointment were detailed in correspondence published in the British Medical Journal in 1945.
In the 1995 documentary Anne Frank Remembered, Bloeme Evers-Emden told of how she was selected from Auschwitz and sent to a work camp where conditions were sufficiently improved that she was able to survive until the liberation. She said Anne Frank was rejected for this transfer because she had contracted scabies.
Scabies in animals
Scabies may occur in some domestic and wild animals; the mites that cause these infestations are of different subspecies from the one typically causing the human form. These subspecies can infest animals that are not their usual hosts, but such infections do not last long. Scabies-infected animals experience severe itching and secondary skin infections. They often lose weight and become frail.
The most frequently diagnosed form of scabies in domestic animals is sarcoptic mange, caused by the subspecies Sarcoptes scabiei canis, most commonly in dogs and cats. Sarcoptic mange is transmissible to humans who come into prolonged contact with infested animals, and is distinguished from human scabies by its distribution on skin surfaces covered by clothing. Scabies-infected domestic fowl develop what is known as "scaly leg". Domestic animals that have gone feral and have no veterinary care are frequently affected by scabies. Nondomestic animals have also been observed to develop scabies. Gorillas, for instance, are known to be susceptible to infection by contact with items used by humans, and it is a fatal disease of wombats.
Scabies is also a concern for cattle.
Society and culture
The International Alliance for the Control of Scabies was started in 2012, and brings together over 150 researchers, clinicians, and public-health experts from more than 15 countries. It has managed to bring the global health implications of scabies to the attention of the World Health Organization (WHO). Consequently, the WHO has included scabies on its official list of neglected tropical diseases and other neglected conditions.
Research
Moxidectin is being evaluated as a treatment for scabies. It is established in veterinary medicine to treat a range of parasites, including sarcoptic mange. Its advantage over ivermectin is its longer half-life in humans and thus its potentially longer duration of action.
Tea tree oil (TTO) exhibits scabicidal action in a laboratory setting.
| Biology and health sciences | Infectious disease | null |
28852 | https://en.wikipedia.org/wiki/Syphilis | Syphilis | Syphilis () is a sexually transmitted infection caused by the bacterium Treponema pallidum subspecies pallidum. The signs and symptoms depend on the stage it presents: primary, secondary, latent or tertiary. The primary stage classically presents with a single chancre (a firm, painless, non-itchy skin ulceration usually between 1 cm and 2 cm in diameter), though there may be multiple sores. In secondary syphilis, a diffuse rash occurs, which frequently involves the palms of the hands and soles of the feet. There may also be sores in the mouth or vagina. Latent syphilis has no symptoms and can last years. In tertiary syphilis, there are gummas (soft, non-cancerous growths), neurological problems, or heart symptoms. Syphilis has been known as "the great imitator", because it may cause symptoms similar to many other diseases.
Syphilis is most commonly spread through sexual activity. It may also be transmitted from mother to baby during pregnancy or at birth, resulting in congenital syphilis. Other diseases caused by Treponema bacteria include yaws (T. pallidum subspecies pertenue), pinta (T. carateum), and nonvenereal endemic syphilis (T. pallidum subspecies endemicum). These three diseases are not typically sexually transmitted. Diagnosis is usually made by using blood tests; the bacteria can also be detected using dark field microscopy. The Centers for Disease Control and Prevention (U.S.) recommends that all pregnant women be tested.
The risk of sexual transmission of syphilis can be reduced by using a latex or polyurethane condom. Syphilis can be effectively treated with antibiotics. The preferred antibiotic for most cases is benzathine benzylpenicillin injected into a muscle. In those who have a severe penicillin allergy, doxycycline or tetracycline may be used. In those with neurosyphilis, intravenous benzylpenicillin or ceftriaxone is recommended. During treatment, people may develop fever, headache, and muscle pains, a reaction known as Jarisch–Herxheimer.
In 2015, about 45.4 million people had syphilis infections, of which six million were new cases. During 2015, it caused about 107,000 deaths, down from 202,000 in 1990. After decreasing dramatically with the availability of penicillin in the 1940s, rates of infection have increased since the turn of the millennium in many countries, often in combination with human immunodeficiency virus (HIV). This is believed to be partly due to unsafe drug use, increased prostitution, and decreased use of condoms.
Signs and symptoms
Syphilis can present in one of four different stages: primary, secondary, latent, and tertiary, and may also occur congenitally. There may be no symptoms. It was referred to as "the great imitator" by Sir William Osler due to its varied presentations.
Primary
Primary syphilis is typically acquired by direct sexual contact with the infectious lesions of another person. Approximately 2–6 weeks after contact (with a range of 10–90 days) a skin lesion, called a chancre, appears at the site and this contains infectious bacteria. This is classically (40% of the time) a single, firm, painless, non-itchy skin ulceration with a clean base and sharp borders approximately 0.3–3.0 cm in size. The lesion may take on almost any form. In the classic form, it evolves from a macule to a papule and finally to an erosion or ulcer. Occasionally, multiple lesions may be present (~40%), with multiple lesions being more common when coinfected with HIV. Lesions may be painful or tender (30%), and they may occur in places other than the genitals (2–7%). The most common locations are the cervix in women (44%), the penis in heterosexual men (99%), and the anus and rectum in men who have sex with men (34%). Lymph node enlargement frequently (80%) occurs around the area of infection, occurring seven to 10 days after chancre formation. The lesion may persist for three to six weeks if left untreated.
Secondary
Secondary syphilis occurs approximately four to ten weeks after the primary infection. While secondary disease is known for the many different ways it can manifest, symptoms most commonly involve the skin, mucous membranes, and lymph nodes. There may be a symmetrical, reddish-pink, non-itchy rash on the trunk and extremities, including the palms and soles. The rash may become maculopapular or pustular. It may form flat, broad, whitish, wart-like lesions on mucous membranes, known as condyloma latum. All of these lesions harbor bacteria and are infectious. Other symptoms may include fever, sore throat, malaise, weight loss, hair loss, and headache. Rare manifestations include liver inflammation, kidney disease, joint inflammation, periostitis, inflammation of the optic nerve, uveitis, and interstitial keratitis. The acute symptoms usually resolve after three to six weeks; about 25% of people may present with a recurrence of secondary symptoms. Many people who present with secondary syphilis (40–85% of women, 20–65% of men) do not report previously having had the classical chancre of primary syphilis.
Latent
Latent syphilis is defined as having serologic proof of infection without symptoms of disease. It develops after secondary syphilis and is divided into early latent and late latent stages. Early latent syphilis is defined by the World Health Organization as less than 2 years after original infection. Early latent syphilis is infectious as up to 25% of people can develop a recurrent secondary infection (during which bacteria are actively replicating and are infectious). Two years after the original infection the person will enter late latent syphilis and is not as infectious as the early phase. The latent phase of syphilis can last many years after which, without treatment, approximately 15–40% of people can develop tertiary syphilis.
Tertiary
Tertiary syphilis may occur approximately 3 to 15 years after the initial infection and may be divided into three different forms: gummatous syphilis (15%), late neurosyphilis (6.5%), and cardiovascular syphilis (10%). Without treatment, a third of infected people develop tertiary disease. People with tertiary syphilis are not infectious.
Gummatous syphilis or late benign syphilis usually occurs 1 to 46 years after the initial infection, with an average of 15 years. This stage is characterized by the formation of chronic gummas, which are soft, tumor-like balls of inflammation which may vary considerably in size. They typically affect the skin, bone, and liver, but can occur anywhere.
Cardiovascular syphilis usually occurs 10–30 years after the initial infection. The most common complication is syphilitic aortitis, which may result in aortic aneurysm formation.
Neurosyphilis refers to an infection involving the central nervous system. Involvement of the central nervous system in syphilis (either asymptomatic or symptomatic) can occur at any stage of the infection. It may occur early, being either asymptomatic or in the form of syphilitic meningitis; or late as meningovascular syphilis, manifesting as general paresis or tabes dorsalis.
Meningovascular syphilis involves inflammation of the small and medium arteries of the central nervous system. It can present 1–10 years after the initial infection. Meningovascular syphilis is characterized by stroke, cranial nerve palsies and spinal cord inflammation. Late symptomatic neurosyphilis can develop decades after the original infection and includes two types: general paresis and tabes dorsalis. General paresis presents with dementia, personality changes, delusions, seizures, psychosis and depression. Tabes dorsalis is characterized by gait instability, sharp pains in the trunk and limbs, impaired positional sensation of the limbs, and a positive Romberg's sign. Both tabes dorsalis and general paresis may present with Argyll Robertson pupils, which are pupils that constrict when the person focuses on near objects (accommodation reflex) but do not constrict when exposed to bright light (pupillary reflex).
Congenital
Congenital syphilis is that which is transmitted during pregnancy or during birth. Two-thirds of syphilitic infants are born without symptoms. Common symptoms that develop over the first couple of years of life include enlargement of the liver and spleen (70%), rash (70%), fever (40%), neurosyphilis (20%), and lung inflammation (20%). If untreated, late congenital syphilis may occur in 40%, including saddle nose deformation, Higouménakis' sign, saber shin, or Clutton's joints among others. Infection during pregnancy is also associated with miscarriage. The main dental defects seen in congenital syphilis are the peg-shaped, notched incisors known as Hutchinson's teeth and so-called mulberry molars (also known as Moon or Fournier molars), defective permanent molars with rounded, deformed crowns resembling a mulberry.
Cause
Bacteriology
Treponema pallidum subspecies pallidum is a spiral-shaped, Gram-negative, highly mobile bacterium. Two other human diseases are caused by related Treponema pallidum subspecies, yaws (subspecies pertenue) and bejel (subspecies endemicum), and a further disease, pinta, is caused by the very closely related Treponema carateum. Unlike subspecies pallidum, they do not cause neurological disease. Humans are the only known natural reservoir for subspecies pallidum. It is unable to survive more than a few days without a host. This is due to its small genome (1.14 Mbp) failing to encode the metabolic pathways necessary to make most of its macronutrients. It has a slow doubling time of greater than 30 hours. The bacterium is known for its ability to evade the immune system and for its invasiveness.
Transmission
Syphilis is transmitted primarily by sexual contact or during pregnancy from a mother to her baby; the bacterium is able to pass through intact mucous membranes or compromised skin. It is thus transmissible by kissing near a lesion, as well as manual, oral, vaginal, and anal sex. Approximately 30% to 60% of those exposed to primary or secondary syphilis will get the disease. Its infectivity is exemplified by the fact that an individual inoculated with only 57 organisms has a 50% chance of being infected. Most new cases in the United States (60%) occur in men who have sex with men; and in this population 20% of syphilis cases were due to oral sex alone. Syphilis can be transmitted by blood products, but the risk is low due to screening of donated blood in many countries. The risk of transmission from sharing needles appears to be limited.
It is not generally possible to contract syphilis through toilet seats, daily activities, hot tubs, or sharing eating utensils or clothing. This is mainly because the bacteria die very quickly outside of the body, making transmission by objects extremely difficult.
Diagnosis
Syphilis is difficult to diagnose clinically during early infection. Confirmation is either via blood tests or direct visual inspection using dark field microscopy. Blood tests are more commonly used, as they are easier to perform. Diagnostic tests are unable to distinguish between the stages of the disease.
Blood tests
Blood tests are divided into nontreponemal and treponemal tests.
Nontreponemal tests are used initially and include venereal disease research laboratory (VDRL) and rapid plasma reagin (RPR) tests. False positives on the nontreponemal tests can occur with some viral infections, such as varicella (chickenpox) and measles. False positives can also occur with lymphoma, tuberculosis, malaria, endocarditis, connective tissue disease, and pregnancy.
Because of the possibility of false positives with nontreponemal tests, confirmation is required with a treponemal test, such as the Treponema pallidum particle agglutination assay (TPPA) or the fluorescent treponemal antibody absorption test (FTA-Abs). Treponemal antibody tests usually become positive two to five weeks after the initial infection and remain positive for many years. Neurosyphilis is diagnosed by finding high numbers of leukocytes (predominantly lymphocytes) and high protein levels in the cerebrospinal fluid in the setting of a known syphilis infection.
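The two-step logic described above, a nontreponemal screen followed by treponemal confirmation, can be sketched as a short decision procedure. The following Python snippet is only an illustration of that reasoning, not clinical guidance; the function name and result strings are hypothetical and do not come from any medical library.
    from typing import Optional

    def interpret_screening(nontreponemal_reactive: bool,
                            treponemal_reactive: Optional[bool]) -> str:
        """Rough interpretation of the traditional two-step serologic algorithm."""
        if not nontreponemal_reactive:
            # A non-reactive VDRL/RPR usually ends the traditional algorithm,
            # although very early infection can still be missed.
            return "no serologic evidence of syphilis"
        if treponemal_reactive is None:
            return "reactive screen; treponemal confirmation (TPPA or FTA-Abs) needed"
        if treponemal_reactive:
            return "serologic evidence of syphilis (past or present infection)"
        # A reactive nontreponemal test not confirmed by a treponemal test is
        # consistent with a biological false positive (e.g. pregnancy, viral illness).
        return "probable false-positive nontreponemal result"

    print(interpret_screening(True, True))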
Direct testing
Dark field microscopy of serous fluid from a chancre may be used to make an immediate diagnosis. Hospitals do not always have equipment or experienced staff members, and testing must be done within 10 minutes of acquiring the sample. Two other tests can be carried out on a sample from the chancre: direct fluorescent antibody (DFA) and polymerase chain reaction (PCR) tests. DFA uses antibodies tagged with fluorescein, which attach to specific syphilis proteins, while PCR uses techniques to detect the presence of specific syphilis genes. These tests are not as time-sensitive, as they do not require living bacteria to make the diagnosis.
Prevention
Vaccine
There is no vaccine effective for prevention. Several vaccines based on treponemal proteins reduce lesion development in an animal model, but research continues.
Sex
Condom use reduces the likelihood of transmission during sex, but does not eliminate the risk. The Centers for Disease Control and Prevention (CDC) states, "Correct and consistent use of latex condoms can reduce the risk of syphilis only when the infected area or site of potential exposure is protected. However, a syphilis sore outside of the area covered by a latex condom can still allow transmission, so caution should be exercised even when using a condom."
Abstinence from intimate physical contact with an infected person is effective at reducing the transmission of syphilis. The CDC states, "The surest way to avoid transmission of sexually transmitted diseases, including syphilis, is to abstain from sexual contact or to be in a long-term mutually monogamous relationship with a partner who has been tested and is known to be uninfected."
Congenital disease
Congenital syphilis in the newborn can be prevented by screening mothers during early pregnancy and treating those who are infected. The United States Preventive Services Task Force (USPSTF) strongly recommends universal screening of all pregnant women, while the World Health Organization (WHO) recommends all women be tested at their first antenatal visit and again in the third trimester. If they are positive, it is recommended their partners also be treated. Congenital syphilis is still common in the developing world, as many women do not receive antenatal care at all, and the antenatal care others receive does not include screening. It still occasionally occurs in the developed world, as those most likely to acquire syphilis are least likely to receive care during pregnancy. Several measures to increase access to testing appear effective at reducing rates of congenital syphilis in low- to middle-income countries. Point-of-care testing to detect syphilis appeared to be reliable, although more research is needed to assess its effectiveness and into improving outcomes in mothers and babies.
Screening
The CDC recommends that sexually active men who have sex with men be tested at least yearly. The USPSTF also recommends screening among those at high risk.
Syphilis is a notifiable disease in many countries, including Canada, the European Union, and the United States. This means health care providers are required to notify public health authorities, which will then ideally provide partner notification to the person's partners. Physicians may also encourage patients to send their partners to seek care. Several strategies have been found to improve follow-up for STI testing, including email and text messaging of reminders for appointments.
Treatment
Historic use of mercury
As a form of chemotherapy, elemental mercury had been used to treat skin diseases in Europe as early as 1363. As syphilis spread, preparations of mercury were among the first medicines used to combat it. Mercury is in fact highly anti-microbial: by the 16th century it was sometimes found to be sufficient to halt development of the disease when applied to ulcers as an inunction or when inhaled as a suffumigation. It was also treated by ingestion of mercury compounds. Once the disease had gained a strong foothold, however, the amounts and forms of mercury necessary to control its development exceeded the human body's ability to tolerate it, and the treatment became worse and more lethal than the disease. Nevertheless, medically directed mercury poisoning became widespread through the 17th, 18th, and 19th centuries in Europe, North America, and India. Mercury salts such as mercury (II) chloride were still in prominent medical use as late as 1916, and considered effective and worthwhile treatments.
Early infections
The first-line treatment for uncomplicated syphilis (primary or secondary stages) remains a single dose of intramuscular benzathine benzylpenicillin. The bacterium is highly vulnerable to penicillin when treated early, and a treated individual is typically rendered non-infective in about 24 hours. Doxycycline and tetracycline are alternative choices for those allergic to penicillin; due to the risk of birth defects, these are not recommended for pregnant women. Resistance to macrolides, rifampicin, and clindamycin is often present. Ceftriaxone, a third-generation cephalosporin antibiotic, may be as effective as penicillin-based treatment. It is recommended that a treated person avoid sex until the sores are healed. For treatment of early infection, there is a lack of strong evidence that azithromycin is superior to benzathine penicillin G.
Late infections
For neurosyphilis, due to the poor penetration of benzathine penicillin into the central nervous system, those affected are given large doses of intravenous penicillin G for a minimum of 10 days. If a person is allergic to penicillin, ceftriaxone may be used or penicillin desensitization attempted. Other late presentations may be treated with once-weekly intramuscular benzathine penicillin for three weeks. Treatment at this stage solely limits further progression of the disease and has a limited effect on damage which has already occurred. Serologic cure can be measured when the non-treponemal titers decline by a factor of 4 or more in 6–12 months in early syphilis or 12–24 months in late syphilis.
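Nontreponemal titers are reported as dilutions (for example 1:32), so a decline "by a factor of 4 or more" means a drop of at least two doubling dilutions, such as 1:32 falling to 1:8. A minimal sketch of that arithmetic, representing each titer by the denominator of its dilution and using hypothetical names, might look like this in Python:
    def fourfold_decline(baseline_titer: int, followup_titer: int) -> bool:
        """True if the follow-up titer has fallen by a factor of 4 or more."""
        if followup_titer == 0:            # follow-up is fully non-reactive
            return baseline_titer > 0
        return baseline_titer / followup_titer >= 4

    # 1:32 falling to 1:8 is a fourfold (two-dilution) decline; 1:32 to 1:16 is not.
    print(fourfold_decline(32, 8))    # True
    print(fourfold_decline(32, 16))   # False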
Jarisch–Herxheimer reaction
One of the potential side effects of treatment is the Jarisch–Herxheimer reaction. It frequently starts within one hour and lasts for 24 hours, with symptoms of fever, muscle pains, headache, and a fast heart rate. It results from the release of pro-inflammatory cytokines by the immune system in response to lipoproteins released from rupturing syphilis bacteria.
Pregnancy
Penicillin is an effective treatment for syphilis in pregnancy but there is no agreement on which dose or route of delivery is most effective.
Epidemiology
In 2012, about 0.5% of adults were infected with syphilis, with 6 million new cases. In 1999, it is believed to have infected 12 million additional people, with greater than 90% of cases in the developing world. It affects between 700,000 and 1.6 million pregnancies a year, resulting in miscarriages, stillbirths, and congenital syphilis. During 2015, it caused about 107,000 deaths, down from 202,000 in 1990. In sub-Saharan Africa, syphilis contributes to approximately 20% of perinatal deaths. Rates are proportionally higher among intravenous drug users, those who are infected with HIV, and men who have sex with men. In the United States, about 55,400 people are newly infected each year. African Americans accounted for almost half of all cases in 2010. As of 2014, syphilis infections continued to increase in the United States. In the United States as of 2020, rates of syphilis had increased by more than threefold; in 2018 approximately 86% of all cases of syphilis in the United States were in men. In 2021, preliminary CDC data indicated that 2,677 cases of congenital syphilis had been found in the population of 332 million in the United States.
Syphilis was very common in Europe during the 18th and 19th centuries. Flaubert found it universal among 19th-century Egyptian prostitutes. In the developed world during the early 20th century, infections declined rapidly with the widespread use of antibiotics, until the 1980s and 1990s. Since 2000, rates of syphilis have been increasing in the US, Canada, the UK, Australia and Europe, primarily among men who have sex with men. Rates of syphilis among US women have remained stable during this time, while rates among UK women have increased, but at a rate less than that of men. Increased rates among heterosexuals have occurred in China and Russia since the 1990s. This has been attributed to unsafe sexual practices, such as sexual promiscuity, prostitution, and decreasing use of barrier protection.
Left untreated, it has a mortality rate of 8% to 58%, with a greater death rate among males. The symptoms of syphilis have become less severe over the 19th and 20th centuries, in part due to the widespread availability of effective treatment and partly due to decreasing virulence of the bacterium. With early treatment, few complications result. Syphilis increases the risk of HIV transmission by two to five times, and coinfection is common (30–60% in some urban centers). In 2015, Cuba became the first country to eliminate mother-to-child transmission of syphilis.
History
Origin, spread and discovery
Paleopathologists have known for decades that syphilis was present in the Americas before European contact. The situation in Europe and Afro-Eurasia has been murkier and caused considerable debate. According to the Columbian theory, syphilis was brought to Spain by the men who sailed with Christopher Columbus in 1492 and spread from there, with a serious epidemic in Naples beginning as early as 1495. Contemporaries believed the disease sprang from American roots, and in the 16th century physicians wrote extensively about the new disease inflicted on them by the returning explorers.
Most evidence supports the Columbian origin hypothesis. However, beginning in the 1960s, examples of probable treponematosis—the parent disease of syphilis, bejel, and yaws—in skeletal remains shifted the opinion of some towards a "pre-Columbian" origin. A 2024 study published in Nature supported an emergence postdating human occupation in the Americas.
When living conditions changed with urbanization, elite social groups began to practice basic hygiene and started to separate themselves from other social tiers. Consequently, treponematosis was driven out of the age group in which it had become endemic. It then began to appear in adults as syphilis. Because they had never been exposed as children, they were not able to fend off serious illness. Spreading the disease via sexual contact also led to victims being infected with a massive bacterial load from open sores on the genitalia. Adults in higher socioeconomic groups then became very sick with painful and debilitating symptoms lasting for decades. Often, they died of the disease, as did their children who were infected with congenital syphilis. The difference between rural and urban populations was first noted by Ellis Herndon Hudson, a clinician who published extensively about the prevalence of treponematosis, including syphilis, in times past. The importance of bacterial load was first noted by the physician Ernest Grin in 1952 in his study of syphilis in Bosnia.
The most compelling evidence for the validity of the pre-Columbian hypothesis is the presence of syphilitic-like damage to bones and teeth in medieval skeletal remains. While the absolute number of cases is not large, new ones are continually discovered, most recently in 2015. At least fifteen cases of acquired treponematosis based on evidence from bones, and six examples of congenital treponematosis based on evidence from teeth, are now widely accepted. In several of the twenty-one cases the evidence may also indicate syphilis.
In 2020, a group of leading paleopathologists concluded that enough evidence had been collected to prove that treponemal disease, almost certainly including syphilis, had existed in Europe prior to the voyages of Columbus. There is an outstanding issue, however. Damaged teeth and bones may seem to hold proof of pre-Columbian syphilis, but there is a possibility that they point to an endemic form of treponemal disease instead. As syphilis, bejel, and yaws vary considerably in mortality rates and the level of human disease they elicit, it is important to know which one is under discussion in any given case, but it remains difficult for paleopathologists to distinguish among them. (The fourth of the treponemal diseases is pinta, a skin disease and therefore unrecoverable through paleopathology.) Ancient DNA (aDNA) holds the answer, because just as only aDNA suffices to distinguish between syphilis and other diseases that produce similar symptoms in the body, it alone can differentiate spirochetes that are 99.8 percent identical with absolute accuracy. Progress on uncovering the historical extent of syndromes through aDNA remains slow, however, because the bacterium responsible for treponematosis is rare in skeletal remains and fragile, making it notoriously difficult to recover and analyse. Precise dating to the medieval period is not yet possible but work by Kettu Majander et al. uncovering the presence of several different kinds of treponematosis at the beginning of the early modern period argues against its recent introduction from elsewhere. Therefore, they argue, treponematosis—possibly including syphilis—almost certainly existed in medieval Europe.
Despite significant progress in tracing the presence of syphilis in past historic periods, definitive findings from paleopathology and aDNA studies are still lacking for the medieval period. Evidence from art is therefore helpful in settling the issue. Research by Marylynn Salmon has demonstrated that deformities in medieval subjects can be identified by comparing them to those of modern victims of syphilis in medical drawings and photographs. One of the most typical deformities, for example, is a collapsed nasal bridge called saddle nose. Salmon discovered that it appeared often in medieval illuminations, especially among the men tormenting Christ in scenes of the crucifixion. The association of saddle nose with evil is an indication that the artists were thinking of syphilis, which is typically transmitted through sexual intercourse with promiscuous partners, a mortal sin in medieval times.
It remains mysterious why the authors of medieval medical treatises so uniformly refrained from describing syphilis or commenting on its existence in the population. Many may have confused it with other diseases such as leprosy (Hansen's disease) or elephantiasis. The great variety of symptoms of treponematosis, the different ages at which the various diseases appear, and its widely divergent outcomes depending on climate and culture, would have added greatly to the confusion of medical practitioners, as indeed they did right down to the middle of the 20th century. In addition, evidence indicates that some writers on disease feared the political implications of discussing a condition more fatal to elites than to commoners. Historian Jon Arrizabalaga has investigated this question for Castile with startling results revealing an effort to hide its association with elites.
The first written records of an outbreak of syphilis in Europe date from 1495, when the disease appeared in Naples, Italy, during a French invasion (Italian War of 1494–98). Since it was claimed to have been spread by French troops, it was initially called the "French disease" by the people of Naples. The disease reached London in 1497 and was recorded at St Bartholomew's Hospital as infecting 10 out of the 20 patients. In 1530, the pastoral name "syphilis" (the name of a character) was first used by the Italian physician and poet Girolamo Fracastoro as the title of his Latin poem in dactylic hexameter Syphilis sive morbus gallicus (Syphilis or The French Disease) describing the ravages of the disease in Italy. In Great Britain it was also called the "Great Pox".
In the 16th through 19th centuries, syphilis was one of the largest public health burdens in prevalence, symptoms, and disability, although records of its true prevalence were generally not kept because of the fearsome and sordid status of sexually transmitted infections in those centuries. According to a 2020 study, more than 20% of individuals in the age range 15–34 years in late 18th-century London were treated for syphilis. At the time the causative agent was unknown but it was well known that it was spread sexually and also often from mother to child. Its association with sex, especially sexual promiscuity and prostitution, made it an object of fear and revulsion and a taboo. The magnitude of its morbidity and mortality in those centuries reflected that, unlike today, there was no adequate understanding of its pathogenesis and no truly effective treatments. Its damage was caused not so much by great sickness or death early in the course of the disease but rather by its gruesome effects decades after infection as it progressed to neurosyphilis with tabes dorsalis. Mercury compounds and isolation were commonly used, with treatments often worse than the disease.
The causative organism, Treponema pallidum, was first identified by Fritz Schaudinn and Erich Hoffmann, in 1905. The first effective treatment for syphilis was arsphenamine, discovered by Sahachiro Hata in 1909, during a survey of hundreds of newly synthesized organic arsenical compounds led by Paul Ehrlich. It was manufactured and marketed from 1910 under the trade name Salvarsan by Hoechst AG. This organoarsenic compound was the first modern chemotherapeutic agent.
During the 20th century, as both microbiology and pharmacology advanced greatly, syphilis, like many other infectious diseases, became more of a manageable burden than a scary and disfiguring mystery, at least in developed countries among those people who could afford to pay for timely diagnosis and treatment. Penicillin was discovered in 1928, and effectiveness of treatment with penicillin was confirmed in trials in 1943, at which time it became the main treatment.
Many famous historical figures, including Franz Schubert, Arthur Schopenhauer, Édouard Manet, Charles Baudelaire, and Guy de Maupassant are believed to have had the disease. Friedrich Nietzsche was long believed to have gone mad as a result of tertiary syphilis, but that diagnosis has recently come into question.
Arts and literature
The earliest known depiction of an individual with syphilis is Albrecht Dürer's Syphilitic Man (1496), a woodcut believed to represent a Landsknecht, a Northern European mercenary. The myth of the femme fatale or "poison women" of the 19th century is believed to be partly derived from the devastation of syphilis, with classic examples in literature including John Keats' "La Belle Dame sans Merci".
The Flemish artist Stradanus designed a print called Preparation and Use of Guayaco for Treating Syphilis, a scene of a wealthy man receiving treatment for syphilis with the tropical wood guaiacum sometime around 1590.
Tuskegee and Guatemala studies
The "Tuskegee Study of Untreated Syphilis in the Negro Male" was an infamous, unethical and racist clinical study conducted between 1932 and 1972 by the U.S. Public Health Service. Whereas the purpose of this study was to observe the natural history of untreated syphilis; the African-American men in the study were told they were receiving free treatment for "bad blood" from the United States government.
The Public Health Service started working on this study in 1932 in collaboration with Tuskegee University, a historically black college in Alabama. Researchers enrolled 600 poor, African American sharecroppers from Macon County, Alabama in the study. Of these men, 399 had contracted syphilis before the study began, and 201 did not have the disease. Medical care, hot meals and free burial insurance were given to those who participated. The men were told that the study would last six months, but in the end, it continued for 40 years. After funding for treatment was lost, the study was continued without informing the men that they were only being studied and would not be treated. Facing insufficient participation, the Macon County Health Department nevertheless wrote to subjects to offer them a "last chance" to get a special "treatment", which was not a treatment at all, but a spinal tap administered exclusively for diagnostic purposes. None of the men infected were ever told that they had the disease, and none were treated with penicillin even after the antibiotic had been proven to successfully treat syphilis. According to the Centers for Disease Control, the men were told they were being treated for "bad blood"—a colloquialism describing various conditions such as fatigue, anemia and syphilis—which was a leading cause of death among southern African American men.
The 40-year study became a textbook example of criminally negligent medical ethics because researchers had knowingly withheld treatment with penicillin and because the subjects had been misled concerning the purposes of the study. The revelation in 1972 of these study failures by a whistleblower, Peter Buxtun, led to major changes in U.S. law and regulation on the protection of participants in clinical studies. Now studies require informed consent, communication of diagnosis, and accurate reporting of test results.
Similar experiments were carried out in Guatemala from 1946 to 1948, during the administrations of American President Harry S. Truman and Guatemalan President Juan José Arévalo, with the cooperation of some Guatemalan health ministries and officials. Doctors infected soldiers, prostitutes, prisoners and mental patients with syphilis and other sexually transmitted infections without the informed consent of the subjects, and treated most subjects with antibiotics. The experiments resulted in at least 83 deaths. In October 2010, the U.S. formally apologized to Guatemala for the ethical violations that took place. Secretary of State Hillary Clinton and Health and Human Services Secretary Kathleen Sebelius stated "Although these events occurred more than 64 years ago, we are outraged that such reprehensible research could have occurred under the guise of public health. We deeply regret that it happened, and we apologize to all the individuals who were affected by such abhorrent research practices." The experiments were led by physician John Charles Cutler, who also participated in the late stages of the Tuskegee syphilis experiment.
Names
Syphilis was first called grande vérole, or the "great pox", by the French. Other historical names have included "button scurvy", sibbens, frenga and dichuchwa, among others. Because it was considered a disgraceful disease, it was known in several countries by the name of a neighbouring, often hostile country. The English, the Germans, and the Italians called it "the French disease", while the French referred to it as the "Neapolitan disease". The Dutch called it the "Spanish/Castilian disease". To the Turks it was known as the "Christian disease", whilst in India, the Hindus and Muslims named the disease after each other.
Signal transduction
Signal transduction is the process by which a chemical or physical signal is transmitted through a cell as a series of molecular events. Proteins responsible for detecting stimuli are generally termed receptors, although in some cases the term sensor is used. The changes elicited by ligand binding (or signal sensing) in a receptor give rise to a biochemical cascade, which is a chain of biochemical events known as a signaling pathway.
When signaling pathways interact with one another they form networks, which allow cellular responses to be coordinated, often by combinatorial signaling events. At the molecular level, such responses include changes in the transcription or translation of genes, and post-translational and conformational changes in proteins, as well as changes in their location. These molecular events are the basic mechanisms controlling cell growth, proliferation, metabolism and many other processes. In multicellular organisms, signal transduction pathways regulate cell communication in a wide variety of ways.
Each component (or node) of a signaling pathway is classified according to the role it plays with respect to the initial stimulus. Ligands are termed first messengers, while receptors are the signal transducers, which then activate primary effectors. Such effectors are typically proteins and are often linked to second messengers, which can activate secondary effectors, and so on. Depending on the efficiency of the nodes, a signal can be amplified (a concept known as signal gain), so that one signaling molecule can generate a response involving hundreds to millions of molecules. As with other signals, the transduction of biological signals is characterised by delay, noise, signal feedback and feedforward and interference, which can range from negligible to pathological. With the advent of computational biology, the analysis of signaling pathways and networks has become an essential tool to understand cellular functions and disease, including signaling rewiring mechanisms underlying responses to acquired drug resistance.
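As a rough, illustrative sketch of signal gain, the short Python example below multiplies per-node amplification factors along a hypothetical three-step cascade; the node names and the numbers used are assumptions chosen only to show how modest per-step gains compound, not measured values for any real pathway.

# Illustrative sketch: signal gain as the product of per-node amplification.
# The node names and per-node gain factors are hypothetical assumptions.
from functools import reduce

cascade = [
    ("receptor -> primary effector", 10),             # assumed gain per active receptor
    ("primary effector -> second messenger", 1000),   # assumed gain per active effector
    ("second messenger -> secondary effector", 100),  # assumed gain per messenger
]

def total_gain(nodes):
    """Overall gain is simply the product of the per-node gains."""
    return reduce(lambda acc, step: acc * step[1], nodes, 1)

molecules = 1  # start from a single ligand-bound receptor
for name, gain in cascade:
    molecules *= gain
    print(f"{name}: ~{molecules:,} molecules affected")

print(f"total gain: ~{total_gain(cascade):,}x")

With these placeholder numbers, a single binding event ends up influencing on the order of a million molecules, consistent with the hundreds-to-millions range described above.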
Stimuli
The basis for signal transduction is the transformation of a certain stimulus into a biochemical signal. The nature of such stimuli can vary widely, ranging from extracellular cues, such as the presence of EGF, to intracellular events, such as the DNA damage resulting from replicative telomere attrition. Traditionally, signals that reach the central nervous system are classified as senses. These are transmitted from neuron to neuron in a process called synaptic transmission. Many other intercellular signal relay mechanisms exist in multicellular organisms, such as those that govern embryonic development.
Ligands
The majority of signal transduction pathways involve the binding of signaling molecules, known as ligands, to receptors that trigger events inside the cell. The binding of a signaling molecule with a receptor causes a change in the conformation of the receptor, known as receptor activation. Most ligands are soluble molecules from the extracellular medium which bind to cell surface receptors. These include growth factors, cytokines and neurotransmitters. Components of the extracellular matrix such as fibronectin and hyaluronan can also bind to such receptors (integrins and CD44, respectively). In addition, some molecules such as steroid hormones are lipid-soluble and thus cross the plasma membrane to reach cytoplasmic or nuclear receptors. In the case of steroid hormone receptors, their stimulation leads to binding to the promoter region of steroid-responsive genes.
Not all classifications of signaling molecules take into account the molecular nature of each class member. For example, odorants belong to a wide range of molecular classes, as do neurotransmitters, which range in size from small molecules such as dopamine to neuropeptides such as endorphins. Moreover, some molecules may fit into more than one class, e.g. epinephrine is a neurotransmitter when secreted by the central nervous system and a hormone when secreted by the adrenal medulla.
Some receptors such as HER2 are capable of ligand-independent activation when overexpressed or mutated. This leads to constitutive activation of the pathway, which may or may not be overturned by compensation mechanisms. In the case of HER2, which acts as a dimerization partner of other EGFRs, constitutive activation leads to hyperproliferation and cancer.
Mechanical forces
The prevalence of basement membranes in the tissues of Eumetazoans means that most cell types require attachment to survive. This requirement has led to the development of complex mechanotransduction pathways, allowing cells to sense the stiffness of the substratum. Such signaling is mainly orchestrated in focal adhesions, regions where the integrin-bound actin cytoskeleton detects changes and transmits them downstream through YAP1. Calcium-dependent cell adhesion molecules such as cadherins and selectins can also mediate mechanotransduction. Specialised forms of mechanotransduction within the nervous system are responsible for mechanosensation: hearing, touch, proprioception and balance.
Osmolarity
Cellular and systemic control of osmotic pressure (the difference in osmolarity between the cytosol and the extracellular medium) is critical for homeostasis. There are three ways in which cells can detect osmotic stimuli: as changes in macromolecular crowding, ionic strength, and changes in the properties of the plasma membrane or cytoskeleton (the latter being a form of mechanotransduction). These changes are detected by proteins known as osmosensors or osmoreceptors. In humans, the best characterised osmosensors are transient receptor potential channels present in the primary cilium of human cells. In yeast, the HOG pathway has been extensively characterised.
Temperature
The sensing of temperature in cells is known as thermoception and is primarily mediated by transient receptor potential channels. Additionally, animal cells contain a conserved mechanism to prevent high temperatures from causing cellular damage, the heat-shock response. Such response is triggered when high temperatures cause the dissociation of inactive HSF1 from complexes with heat shock proteins Hsp40/Hsp70 and Hsp90. With help from the ncRNA hsr1, HSF1 then trimerizes, becoming active and upregulating the expression of its target genes. Many other thermosensory mechanisms exist in both prokaryotes and eukaryotes.
Light
In mammals, light controls the sense of sight and the circadian clock by activating light-sensitive proteins in photoreceptor cells in the eye's retina. In the case of vision, light is detected by rhodopsin in rod and cone cells. In the case of the circadian clock, a different photopigment, melanopsin, is responsible for detecting light in intrinsically photosensitive retinal ganglion cells.
Receptors
Receptors can be roughly divided into two major classes: intracellular and extracellular receptors.
Extracellular receptors
Extracellular receptors are integral transmembrane proteins and make up most receptors. They span the plasma membrane of the cell, with one part of the receptor on the outside of the cell and the other on the inside. Signal transduction occurs as a result of a ligand binding to the outside region of the receptor (the ligand does not pass through the membrane). Ligand-receptor binding induces a change in the conformation of the inside part of the receptor, a process sometimes called "receptor activation". This results in either the activation of an enzyme domain of the receptor or the exposure of a binding site for other intracellular signaling proteins within the cell, eventually propagating the signal through the cytoplasm.
In eukaryotic cells, most intracellular proteins activated by a ligand/receptor interaction possess an enzymatic activity; examples include tyrosine kinase and phosphatases. Often such enzymes are covalently linked to the receptor. Some of them create second messengers such as cyclic AMP and IP3, the latter controlling the release of intracellular calcium stores into the cytoplasm. Other activated proteins interact with adaptor proteins that facilitate signaling protein interactions and coordination of signaling complexes necessary to respond to a particular stimulus. Enzymes and adaptor proteins are both responsive to various second messenger molecules.
Many adaptor proteins and enzymes activated as part of signal transduction possess specialized protein domains that bind to specific secondary messenger molecules. For example, calcium ions bind to the EF hand domains of calmodulin, allowing it to bind and activate calmodulin-dependent kinase. PIP3 and other phosphoinositides do the same thing to the Pleckstrin homology domains of proteins such as the kinase protein AKT.
G protein–coupled receptors
G protein–coupled receptors (GPCRs) are a family of integral transmembrane proteins that possess seven transmembrane domains and are linked to a heterotrimeric G protein. With nearly 800 members, this is the largest family of membrane proteins and receptors in mammals. Counting all animal species, they add up to over 5000. Mammalian GPCRs are classified into 5 major families: rhodopsin-like, secretin-like, metabotropic glutamate, adhesion and frizzled/smoothened, with a few GPCR groups being difficult to classify due to low sequence similarity, e.g. vomeronasal receptors. Other classes exist in eukaryotes, such as the Dictyostelium cyclic AMP receptors and fungal mating pheromone receptors.
Signal transduction by a GPCR begins with an inactive G protein coupled to the receptor; the G protein exists as a heterotrimer consisting of Gα, Gβ, and Gγ subunits. Once the GPCR recognizes a ligand, the conformation of the receptor changes to activate the G protein, causing Gα to bind a molecule of GTP and dissociate from the other two G-protein subunits. The dissociation exposes sites on the subunits that can interact with other molecules. The activated G protein subunits detach from the receptor and initiate signaling from many downstream effector proteins such as phospholipases and ion channels, the latter permitting the release of second messenger molecules. The total strength of signal amplification by a GPCR is determined by the lifetimes of the ligand-receptor complex and receptor-effector protein complex and the deactivation time of the activated receptor and effectors through intrinsic enzymatic activity; e.g. via protein kinase phosphorylation or b-arrestin-dependent internalization.
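To illustrate how complex lifetimes and deactivation times set the overall gain, the toy calculation below treats each stage's output as an activation rate multiplied by the time the upstream species stays active; every rate and lifetime in it is a hypothetical placeholder, not a measurement for any particular GPCR.

# Toy estimate of GPCR amplification as (rate of downstream activation) x
# (lifetime of the active upstream species). All values are assumptions.

receptor_activation_rate = 20.0   # G proteins activated per second per active receptor (assumed)
receptor_active_lifetime = 5.0    # seconds before the receptor is deactivated, e.g. by
                                  # phosphorylation or arrestin binding (assumed)

effector_catalytic_rate = 100.0   # second messengers produced per second per active effector (assumed)
effector_active_lifetime = 2.0    # seconds the effector stays active before the activated
                                  # G protein subunit shuts off (assumed)

g_proteins_per_receptor = receptor_activation_rate * receptor_active_lifetime
messengers_per_g_protein = effector_catalytic_rate * effector_active_lifetime
overall_gain = g_proteins_per_receptor * messengers_per_g_protein

print(f"G proteins activated per receptor: {g_proteins_per_receptor:.0f}")
print(f"Second messengers per G protein:   {messengers_per_g_protein:.0f}")
print(f"Overall gain per binding event:    {overall_gain:.0f}")

Shortening either lifetime, for example by faster b-arrestin-dependent internalization or faster GTP hydrolysis, reduces the product directly, which is the point made in the paragraph above.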
A study was conducted where a point mutation was inserted into the gene encoding the chemokine receptor CXCR2; mutated cells underwent a malignant transformation due to the expression of CXCR2 in an active conformation despite the absence of chemokine-binding. This meant that chemokine receptors can contribute to cancer development.
Tyrosine, Ser/Thr and Histidine-specific protein kinases
Receptor tyrosine kinases (RTKs) are transmembrane proteins with an intracellular kinase domain and an extracellular domain that binds ligands; examples include growth factor receptors such as the insulin receptor. To perform signal transduction, RTKs need to form dimers in the plasma membrane; the dimer is stabilized by ligands binding to the receptor. The interaction between the cytoplasmic domains stimulates the autophosphorylation of tyrosine residues within the intracellular kinase domains of the RTKs, causing conformational changes. Subsequent to this, the receptors' kinase domains are activated, initiating phosphorylation signaling cascades of downstream cytoplasmic molecules that facilitate various cellular processes such as cell differentiation and metabolism. Many Ser/Thr and dual-specificity protein kinases are important for signal transduction, either acting downstream of receptor tyrosine kinases, or as membrane-embedded or cell-soluble versions in their own right. The process of signal transduction involves around 560 known protein kinases and pseudokinases, encoded by the human kinome.
As is the case with GPCRs, proteins that bind GTP play a major role in signal transduction from the activated RTK into the cell. In this case, the G proteins are members of the Ras, Rho, and Raf families, referred to collectively as small G proteins. They act as molecular switches usually tethered to membranes by isoprenyl groups linked to their carboxyl ends. Upon activation, they assign proteins to specific membrane subdomains where they participate in signaling. Activated RTKs in turn activate small G proteins that activate guanine nucleotide exchange factors such as SOS1. Once activated, these exchange factors can activate more small G proteins, thus amplifying the receptor's initial signal. The mutation of certain RTK genes, as with that of GPCRs, can result in the expression of receptors that exist in a constitutively activated state; such mutated genes may act as oncogenes.
Histidine-specific protein kinases are structurally distinct from other protein kinases and are found in prokaryotes, fungi, and plants as part of a two-component signal transduction mechanism: a phosphate group from ATP is first added to a histidine residue within the kinase, then transferred to an aspartate residue on a receiver domain on a different protein or the kinase itself, thus activating the aspartate residue.
Integrins
Integrins are produced by a wide variety of cells; they play a role in cell attachment to other cells and the extracellular matrix and in the transduction of signals from extracellular matrix components such as fibronectin and collagen. Ligand binding to the extracellular domain of integrins changes the protein's conformation, clustering it at the cell membrane to initiate signal transduction. Integrins lack kinase activity; hence, integrin-mediated signal transduction is achieved through a variety of intracellular protein kinases and adaptor molecules, the main coordinator being integrin-linked kinase. As shown in the adjacent picture, cooperative integrin-RTK signaling determines the timing of cellular survival, apoptosis, proliferation, and differentiation.
Important differences exist between integrin-signaling in circulating blood cells and non-circulating cells such as epithelial cells; integrins of circulating cells are normally inactive. For example, cell membrane integrins on circulating leukocytes are maintained in an inactive state to avoid epithelial cell attachment; they are activated only in response to stimuli such as those received at the site of an inflammatory response. In a similar manner, integrins at the cell membrane of circulating platelets are normally kept inactive to avoid thrombosis. Epithelial cells (which are non-circulating) normally have active integrins at their cell membrane, helping maintain their stable adhesion to underlying stromal cells that provide signals to maintain normal functioning.
In plants, there are no bona fide integrin receptors identified to date; nevertheless, several integrin-like proteins were proposed based on structural homology with the metazoan receptors. Plants contain integrin-linked kinases that are very similar in their primary structure with the animal ILKs. In the experimental model plant Arabidopsis thaliana, one of the integrin-linked kinase genes, ILK1, has been shown to be a critical element in the plant immune response to signal molecules from bacterial pathogens and plant sensitivity to salt and osmotic stress. ILK1 protein interacts with the high-affinity potassium transporter HAK5 and with the calcium sensor CML9.
Toll-like receptors
When activated, toll-like receptors (TLRs) take adapter molecules within the cytoplasm of cells in order to propagate a signal. Four adaptor molecules are known to be involved in signaling, which are Myd88, TIRAP, TRIF, and TRAM. These adapters activate other intracellular molecules such as IRAK1, IRAK4, TBK1, and IKKi that amplify the signal, eventually leading to the induction or suppression of genes that cause certain responses. Thousands of genes are activated by TLR signaling, implying that this method constitutes an important gateway for gene modulation.
Ligand-gated ion channels
A ligand-gated ion channel, upon binding with a ligand, changes conformation to open a channel in the cell membrane through which ions relaying signals can pass. An example of this mechanism is found in the receiving cell of a neural synapse. The influx of ions that occurs in response to the opening of these channels induces action potentials, such as those that travel along nerves, by depolarizing the membrane of post-synaptic cells, resulting in the opening of voltage-gated ion channels.
An example of an ion allowed into the cell during a ligand-gated ion channel opening is Ca2+; it acts as a second messenger initiating signal transduction cascades and altering the physiology of the responding cell. This results in amplification of the synapse response between synaptic cells by remodelling the dendritic spines involved in the synapse.
Intracellular receptors
Intracellular receptors, such as nuclear receptors and cytoplasmic receptors, are soluble proteins localized within their respective areas. The typical ligands for nuclear receptors are non-polar hormones like the steroid hormones testosterone and progesterone and derivatives of vitamins A and D. To initiate signal transduction, the ligand must pass through the plasma membrane by passive diffusion. On binding with the receptor, the ligands pass through the nuclear membrane into the nucleus, altering gene expression.
Activated nuclear receptors attach to the DNA at receptor-specific hormone-responsive element (HRE) sequences, located in the promoter region of the genes activated by the hormone-receptor complex. Due to their enabling gene transcription, they are alternatively called inductors of gene expression. All hormones that act by regulation of gene expression have two consequences in their mechanism of action; their effects are produced after a characteristically long period of time and their effects persist for another long period of time, even after their concentration has been reduced to zero, due to a relatively slow turnover of most enzymes and proteins that would either deactivate or terminate ligand binding onto the receptor.
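A small worked example of the persistence effect: if a hormone-induced protein is removed by first-order turnover, its level after the hormone disappears decays as exp(-kt) with k = ln 2 / half-life, so slowly turned-over proteins keep the response elevated long after the ligand is gone. The half-life values below are illustrative assumptions, not data for any specific protein.

# Illustrative first-order decay of a hormone-induced protein after the
# hormone has dropped to zero. Half-lives are assumptions for illustration.
import math

def fraction_remaining(half_life_hours: float, hours_elapsed: float) -> float:
    """Fraction of the induced protein still present after `hours_elapsed` hours."""
    k = math.log(2) / half_life_hours   # first-order decay constant
    return math.exp(-k * hours_elapsed)

for half_life in (2.0, 24.0, 72.0):     # fast-, medium- and slow-turnover proteins (assumed)
    remaining = fraction_remaining(half_life, 48.0)
    print(f"half-life {half_life:>4.0f} h -> {remaining:.1%} of the protein left after 48 h")

With these placeholder half-lives, a fast-turnover protein is essentially gone two days after the stimulus ends, while a slow-turnover one is still largely present, which is why responses mediated by gene expression outlast the hormone itself.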
Nucleic receptors have DNA-binding domains containing zinc fingers and a ligand-binding domain; the zinc fingers stabilize DNA binding by holding its phosphate backbone. DNA sequences that match the receptor are usually hexameric repeats of any kind; the sequences are similar but their orientation and distance differentiate them. The ligand-binding domain is additionally responsible for dimerization of nucleic receptors prior to binding and providing structures for transactivation used for communication with the translational apparatus.
Steroid receptors are a subclass of nuclear receptors located primarily within the cytosol. In the absence of steroids, they associate in an aporeceptor complex containing chaperone or heatshock proteins (HSPs). The HSPs are necessary to activate the receptor by assisting the protein to fold in a way such that the signal sequence enabling its passage into the nucleus is accessible. Steroid receptors, on the other hand, may be repressive on gene expression when their transactivation domain is hidden. Receptor activity can be enhanced by phosphorylation of serine residues at their N-terminal as a result of another signal transduction pathway, a process called crosstalk.
Retinoic acid receptors are another subset of nuclear receptors. They can be activated by an endocrine-synthesized ligand that entered the cell by diffusion, a ligand synthesised from a precursor like retinol brought to the cell through the bloodstream or a completely intracellularly synthesised ligand like prostaglandin. These receptors are located in the nucleus and are not accompanied by HSPs. They repress their gene by binding to their specific DNA sequence when no ligand binds to them, and vice versa.
Certain intracellular receptors of the immune system are cytoplasmic receptors; recently identified NOD-like receptors (NLRs) reside in the cytoplasm of some eukaryotic cells and interact with ligands using a leucine-rich repeat (LRR) motif similar to TLRs. Some of these molecules like NOD2 interact with RIP2 kinase that activates NF-κB signaling, whereas others like NALP3 interact with inflammatory caspases and initiate processing of particular cytokines like interleukin-1β.
Second messengers
First messengers are the signaling molecules (hormones, neurotransmitters, and paracrine/autocrine agents) that reach the cell from the extracellular fluid and bind to their specific receptors. Second messengers are the substances that enter the cytoplasm and act within the cell to trigger a response. In essence, second messengers serve as chemical relays from the plasma membrane to the cytoplasm, thus carrying out intracellular signal transduction.
Calcium
The release of calcium ions from the endoplasmic reticulum into the cytosol results in its binding to signaling proteins that are then activated; it is then sequestered in the smooth endoplasmic reticulum and the mitochondria. Two combined receptor/ion channel proteins control the transport of calcium: the InsP3-receptor that transports calcium upon interaction with inositol triphosphate on its cytosolic side; and the ryanodine receptor named after the alkaloid ryanodine, similar to the InsP3 receptor but having a feedback mechanism that releases more calcium upon binding with it. The nature of calcium in the cytosol means that it is active for only a very short time, meaning its free state concentration is very low and is mostly bound to organelle molecules like calreticulin when inactive.
Calcium is used in many processes including muscle contraction, neurotransmitter release from nerve endings, and cell migration. The three main pathways that lead to its activation are GPCR pathways, RTK pathways, and gated ion channels; it regulates proteins either directly or by binding to an enzyme.
Lipid messengers
Lipophilic second messenger molecules are derived from lipids residing in cellular membranes; enzymes stimulated by activated receptors activate the lipids by modifying them. Examples include diacylglycerol and ceramide, the former required for the activation of protein kinase C.
Nitric oxide
Nitric oxide (NO) acts as a second messenger because it is a free radical that can diffuse through the plasma membrane and affect nearby cells. It is synthesised from arginine and oxygen by the NO synthase and works through activation of soluble guanylyl cyclase, which when activated produces another second messenger, cGMP. NO can also act through covalent modification of proteins or their metal co-factors; some have a redox mechanism and are reversible. It is toxic in high concentrations and causes damage during stroke, but is the cause of many other functions like the relaxation of blood vessels, apoptosis, and penile erections.
Redox signaling
In addition to nitric oxide, other electronically activated species are also signal-transducing agents in a process called redox signaling. Examples include superoxide, hydrogen peroxide, carbon monoxide, and hydrogen sulfide. Redox signaling also includes active modulation of electronic flows in semiconductive biological macromolecules.
Cellular responses
Gene activations and metabolism alterations are examples of cellular responses to extracellular stimulation that require signal transduction. Gene activation leads to further cellular effects, since the products of responding genes include instigators of activation; transcription factors produced as a result of a signal transduction cascade can activate even more genes. Hence, an initial stimulus can trigger the expression of a large number of genes, leading to physiological events like the increased uptake of glucose from the blood stream and the migration of neutrophils to sites of infection. The set of genes and their activation order to certain stimuli is referred to as a genetic program.
Mammalian cells require stimulation for cell division and survival; in the absence of growth factor, apoptosis ensues. Such requirements for extracellular stimulation are necessary for controlling cell behavior in unicellular and multicellular organisms; signal transduction pathways are perceived to be so central to biological processes that a large number of diseases are attributed to their dysregulation.
Three basic signals determine cellular growth:
Stimulatory (growth factors)
Transcription-dependent response: for example, steroids act directly as transcription factors. This gives a slow response, because the transcription factor must bind DNA and the target gene must be transcribed; the resulting mRNA must then be translated, and the produced protein or peptide can undergo post-translational modification (PTM).
Transcription-independent response: for example, epidermal growth factor (EGF) binds the epidermal growth factor receptor (EGFR), which causes dimerization and autophosphorylation of the EGFR, which in turn activates the intracellular signaling pathway.
Inhibitory (cell-cell contact)
Permissive (cell-matrix interactions)
The combination of these signals is integrated into altered cytoplasmic machinery which leads to altered cell behaviour.
Major pathways
Following are some major signaling pathways, demonstrating how ligands binding to their receptors can affect second messengers and eventually result in altered cellular responses.
MAPK/ERK pathway: A pathway that couples intracellular responses to the binding of growth factors to cell surface receptors. This pathway is very complex and includes many protein components. In many cell types, activation of this pathway promotes cell division, and many forms of cancer are associated with aberrations in it.
cAMP-dependent pathway: In humans, cAMP works by activating protein kinase A (PKA, cAMP-dependent protein kinase) (see picture), and, thus, further effects depend mainly on cAMP-dependent protein kinase, which vary based on the type of cell.
IP3/DAG pathway: PLC cleaves the phospholipid phosphatidylinositol 4,5-bisphosphate (PIP2), yielding diacylglycerol (DAG) and inositol 1,4,5-trisphosphate (IP3). DAG remains bound to the membrane, while IP3 is released as a soluble molecule into the cytosol. IP3 then diffuses through the cytosol to bind to IP3 receptors, which are calcium channels in the endoplasmic reticulum (ER) that allow only calcium to pass. This raises the cytosolic concentration of calcium, triggering a cascade of intracellular changes and activity. In addition, calcium and DAG together work to activate PKC, which goes on to phosphorylate other molecules, leading to altered cellular activity. End-effects include taste, manic depression, tumor promotion, and others.
History
The earliest notion of signal transduction can be traced back to 1855, when Claude Bernard proposed that ductless glands such as the spleen, the thyroid and adrenal glands, were responsible for the release of "internal secretions" with physiological effects. Bernard's "secretions" were later named "hormones" by Ernest Starling in 1905. Together with William Bayliss, Starling had discovered secretin in 1902. Although many other hormones, most notably insulin, were discovered in the following years, the mechanisms remained largely unknown.
The discovery of nerve growth factor by Rita Levi-Montalcini in 1954, and epidermal growth factor by Stanley Cohen in 1962, led to more detailed insights into the molecular basis of cell signaling, in particular growth factors. Their work, together with Earl Wilbur Sutherland's discovery of cyclic AMP in 1956, prompted the redefinition of endocrine signaling to include only signaling from glands, while the terms autocrine and paracrine began to be used. Sutherland was awarded the 1971 Nobel Prize in Physiology or Medicine, while Levi-Montalcini and Cohen shared it in 1986.
In 1970, Martin Rodbell examined the effects of glucagon on a rat's liver cell membrane receptor. He noted that guanosine triphosphate dissociated glucagon from this receptor and stimulated the G-protein, which strongly influenced the cell's metabolism. He thus deduced that the G-protein is a transducer that accepts glucagon molecules and affects the cell. For this, he shared the 1994 Nobel Prize in Physiology or Medicine with Alfred G. Gilman. The characterization of RTKs and GPCRs led to the formulation of the concept of "signal transduction", a term first used in 1972. Some early articles used the terms signal transmission and sensory transduction. The term first appeared in a paper's title in 1979, and its widespread use has been traced to a 1980 review article by Rodbell. In 2007, a total of 48,377 scientific papers—including 11,211 review papers—were published on the subject. Research papers focusing on signal transduction first appeared in large numbers in the late 1980s and early 1990s.
Signal transduction in Immunology
The purpose of this section is to briefly describe some developments in immunology in the 1960s and 1970s, relevant to the initial stages of transmembrane signal transduction, and how they impacted our understanding of immunology, and ultimately of other areas of cell biology.
The relevant events begin with the sequencing of myeloma protein light chains, which are found in abundance in the urine of individuals with multiple myeloma. Biochemical experiments revealed that these so-called Bence Jones proteins consisted of two discrete domains: one that varied from one molecule to the next (the V domain) and one that did not (the Fc domain or Fragment crystallizable region). An analysis of multiple V region sequences by Wu and Kabat identified locations within the V region that were hypervariable and which, they hypothesized, combined in the folded protein to form the antigen recognition site. Thus, within a relatively short time a plausible model was developed for the molecular basis of immunological specificity, and for mediation of biological function through the Fc domain. Crystallization of an IgG molecule soon followed, confirming the inferences based on sequencing and providing an understanding of immunological specificity at the highest level of resolution.
The biological significance of these developments was encapsulated in the theory of clonal selection which holds that a B cell has on its surface immunoglobulin receptors whose antigen-binding site is identical to that of antibodies that are secreted by the cell when it encounters an antigen, and more specifically a particular B cell clone secretes antibodies with identical sequences. The final piece of the story, the Fluid mosaic model of the plasma membrane provided all the ingredients for a new model for the initiation of signal transduction; viz, receptor dimerization.
The first hints of this were obtained by Becker et al., who demonstrated that the extent to which human basophils (for which bivalent immunoglobulin E (IgE) functions as a surface receptor) degranulate depends on the concentration of anti-IgE antibodies to which they are exposed, and is accompanied by a redistribution of surface molecules that is absent when a monovalent ligand is used. The latter observation was consistent with earlier findings by Fanger et al. These observations tied a biological response to events and structural details of molecules on the cell surface. A preponderance of evidence soon developed that receptor dimerization initiates responses in a variety of cell types, including B cells.
Such observations led to a number of theoretical (mathematical) developments. The first of these was a simple model proposed by Bell which resolved an apparent paradox: clustering forms stable networks; i.e. binding is essentially irreversible, whereas the affinities of antibodies secreted by B cells increase as the immune response progresses. A theory of the dynamics of cell surface clustering on lymphocyte membranes was developed by DeLisi and Perelson, who found the size distribution of clusters as a function of time and its dependence on the affinity and valence of the ligand. Subsequent theories for basophils and mast cells were developed by Goldstein and Sobotka and their collaborators, all aimed at the analysis of dose-response patterns of immune cells and their biological correlates.
Ligand binding to cell surface receptors is also critical to motility, a phenomenon that is best understood in single-celled organisms. An example is the detection of, and response to, concentration gradients by bacteria, for which a classic mathematical theory was developed and later extended in more recent accounts.
Scorpion
Scorpions are predatory arachnids of the order Scorpiones. They have eight legs and are easily recognized by a pair of grasping pincers and a narrow, segmented tail, often carried in a characteristic forward curve over the back and always ending with a stinger. The evolutionary history of scorpions goes back 435 million years. They mainly live in deserts but have adapted to a wide range of environmental conditions, and can be found on all continents except Antarctica. There are over 2,500 described species, with 22 extant (living) families recognized to date. Their taxonomy is being revised to account for 21st-century genomic studies.
Scorpions primarily prey on insects and other invertebrates, but some species hunt vertebrates. They use their pincers to restrain and kill prey, or to prevent their own predation. The venomous sting is used for offense and defense. During courtship, the male and female grasp each other's pincers and dance while he tries to move her onto his sperm packet. All known species give live birth and the female cares for the young as their exoskeletons harden, transporting them on her back. The exoskeleton contains fluorescent chemicals and glows under ultraviolet light.
The vast majority of species do not seriously threaten humans, and healthy adults usually do not need medical treatment after a sting. About 25 species (fewer than one percent) have venom capable of killing a human; fatal stings occur frequently in the parts of the world where these species live, primarily where access to medical treatment is unlikely.
Scorpions appear in art, folklore, mythology, and commercial brands. Scorpion motifs are woven into kilim carpets for protection from their sting. Scorpius is the name of a constellation; the corresponding astrological sign is Scorpio. A classical myth about Scorpius tells how the giant scorpion and its enemy Orion became constellations on opposite sides of the sky.
Etymology
The word scorpion entered Middle English between 1175 and 1225 AD via Old French and Italian forms derived from the Latin scorpius, a romanization of the Greek σκορπίος (skorpíos), a word with no native Indo-European etymology (cf. Arabic ʕaqrab 'scorpion', Proto-Germanic *krabbô 'crab').
Evolution
Fossil record
Scorpion fossils have been found in many strata, including marine Silurian and estuarine Devonian deposits, coal deposits from the Carboniferous Period and in amber. Whether the early scorpions were marine or terrestrial has been debated; while they had book lungs like modern terrestrial species, the most basal forms, such as Eramoscorpius, were originally considered to be still aquatic, although a later study found that Eramoscorpius also had book lungs. Over 100 fossil species of scorpion have been described. The oldest found as of 2021 is Dolichophonus loudonensis, which lived during the Silurian, in present-day Scotland. Gondwanascorpio from the Devonian is among the earliest-known terrestrial animals on the Gondwana supercontinent. Some Palaeozoic scorpions possessed compound eyes similar to those of eurypterids. The Triassic fossils Protochactas and Protobuthus belong to the modern clades Chactoidea and Buthoidea respectively, indicating that the crown group of modern scorpions had emerged by this time.
Phylogeny
The Scorpiones are a clade within the pulmonate Arachnida (those with book lungs). Arachnida is placed within the Chelicerata, a subphylum of Arthropoda that contains sea spiders and horseshoe crabs, alongside terrestrial animals without book lungs such as ticks and harvestmen. The extinct Eurypterida, sometimes called sea scorpions, though they were not all marine, are not scorpions; their grasping pincers were chelicerae, not homologous with the pincers (second appendages) of scorpions. Scorpiones is sister to the Tetrapulmonata, a terrestrial group of pulmonates containing the spiders and whip scorpions, a relationship summarized in a 2019 cladogram.
Recent studies place pseudoscorpions as the sister group of scorpions in the clade Panscorpiones, which together with Tetrapulmonata makes up the clade Arachnopulmonata.
The internal phylogeny of the scorpions has been debated, but genomic analysis consistently places the Bothriuridae as sister to a clade consisting of Scorpionoidea and Chactoidea. The scorpions diversified between the Devonian and the early Carboniferous. The main division is into the clades Buthida and Iurida. The Bothriuridae began to diverge before temperate Gondwana broke up into separate land masses, a process completed by the Jurassic. The Iuroidea and Chactoidea are both seen not to be single clades, and were recovered as paraphyletic in a 2018 cladogram.
Taxonomy
Carl Linnaeus described six species of scorpion in his genus Scorpio in 1758 and 1767; three of these are now considered valid and are called Scorpio maurus, Androctonus australis, and Euscorpius carpathicus; the other three are dubious names. He placed the scorpions among his "Insecta aptera" (wingless insects), a group that included Crustacea, Arachnida and Myriapoda. In 1801, Jean-Baptiste Lamarck divided up the "Insecta aptera", creating the taxon Arachnides for spiders, scorpions, and acari (mites and ticks), though it also contained the Thysanura, Myriapoda and parasites such as lice. German arachnologist Carl Ludwig Koch created the order Scorpiones in 1837. He divided it into four families, the six-eyed scorpions "Scorpionides", the eight-eyed scorpions "Buthides", the ten-eyed scorpions "Centrurides", and the twelve-eyed scorpions "Androctonides".
More recently, some twenty-two families containing over 2,500 species of scorpions have been described, with many additions and much reorganization of taxa in the 21st century. There are over 100 described taxa of fossil scorpions. This classification is based on Soleglad and Fet (2003), which replaced Stockwell's older, unpublished classification. Further taxonomic changes are from papers by Soleglad et al. (2005).
The extant taxa to the rank of family (numbers of species in parentheses) are:
Order Scorpiones
Parvorder Pseudochactida Soleglad & Fet, 2003
Superfamily Pseudochactoidea Gromov, 1998
Family Pseudochactidae Gromov, 1998 (1 sp.) (Central Asian scorpions of semi-savanna habitats)
Parvorder Buthida Soleglad & Fet, 2003
Superfamily Buthoidea C. L. Koch, 1837
Family Buthidae C. L. Koch, 1837 (1209 spp.) (thick-tailed scorpions, including the most dangerous species)
Family Microcharmidae Lourenço, 1996, 2019 (17 spp.) (African scorpions of humid forest leaf litter)
Parvorder Chaerilida Soleglad & Fet, 2003
Superfamily Chaeriloidea Pocock, 1893
Family Chaerilidae Pocock, 1893 (51 spp.) (South and Southeast Asian scorpions of non-arid places)
Parvorder Iurida Soleglad & Fet, 2003
Superfamily Chactoidea Pocock, 1893
Family Akravidae Levy, 2007 (1 sp.) (cave-dwelling scorpions of Israel)
Family Belisariidae Lourenço, 1998 (3 spp.) (cave-related scorpions of Southern Europe)
Family Chactidae Pocock, 1893 (209 spp.) (New World scorpions, membership under revision)
Family Euscorpiidae Laurie, 1896 (170 spp.) (harmless scorpions of the Americas, Eurasia, and North Africa)
Family Superstitioniidae Stahnke, 1940 (1 sp.) (cave scorpions of Mexico and Southwestern United States)
Family Troglotayosicidae Lourenço, 1998 (4 spp.) (cave-related scorpions of South America)
Family Typhlochactidae Mitchell, 1971 (11 spp.) (cave-related scorpions of Eastern Mexico)
Family Vaejovidae Thorell, 1876 (222 spp.) (New World scorpions)
Superfamily Iuroidea Thorell, 1876
Family Caraboctonidae Kraepelin, 1905 (23 spp.) (hairy scorpions)
Family Hadruridae Stahnke, 1974 (9 spp.) (large North American scorpions)
Family Iuridae Thorell, 1876 (21 spp.) (scorpions with a large tooth on inner side of moveable claw)
Superfamily Scorpionoidea Latreille, 1802
Family Bothriuridae Simon, 1880 (158 spp.) (Southern hemisphere tropical and temperate scorpions)
Family Hemiscorpiidae Pocock, 1893 (16 spp.) (rock, creeping, or tree scorpions of the Middle East)
Family Hormuridae Laurie, 1896 (92 spp.) (flattened, crevice-living scorpions of Southeast Asia and Australia)
Family Rugodentidae Bastawade et al., 2005 (1 sp.) (burrowing scorpions of India)
Family Scorpionidae Latreille, 1802 (183 spp.) (burrowing or pale-legged scorpions)
Family Diplocentridae Karsch, 1880 (134 spp.) (closely related to and sometimes placed in Scorpionidae, but have spine on telson)
Family Heteroscorpionidae Kraepelin, 1905 (6 spp.) (scorpions of Madagascar)
Geographical distribution
Scorpions are found on all continents except Antarctica. The diversity of scorpions is greatest in subtropical areas; it decreases toward the poles and equator, though scorpions are found in the tropics. Scorpions did not occur naturally in Great Britain but were accidentally introduced by humans, and have now established a population. New Zealand, and some of the islands in Oceania, have in the past had small populations of introduced scorpions, but they were exterminated. Five colonies of Euscorpius flavicaudis have established themselves since the late 19th century in Sheerness in England at 51°N, while Paruroctonus boreus lives as far north as Red Deer, Alberta, at 52°N. A few species are on the IUCN Red List; Afrolychas braueri is classed as critically endangered (2012), Isometrus deharvengi as endangered (2016) and Chiromachus ochropus as vulnerable (2014).
Scorpions are xerocoles, meaning they primarily live in deserts, but they can be found in virtually every terrestrial habitat including high-elevation mountains, caves, and intertidal zones. They are largely absent from boreal ecosystems such as the tundra, high-altitude taiga, and mountain tops. The highest altitude reached by a scorpion is in the Andes, for Orobothriurus crassimanus.
As regards microhabitats, scorpions may be ground-dwelling, tree-loving, rock-loving or sand-loving. Some species, such as Vaejovis janssi, are versatile and are found in all habitats on Socorro Island, Baja California, while others such as Euscorpius carpathicus, endemic to the littoral zone of rivers in Romania, occupy specialized niches.
Morphology
Scorpions range in size from the tiny Typhlochactas mitchelli of Typhlochactidae to the large Heterometrus swammerdami of Scorpionidae. The body of a scorpion is divided into two parts or tagmata: the cephalothorax or prosoma, and the abdomen or opisthosoma. The opisthosoma is subdivided into a broad anterior portion, the mesosoma or pre-abdomen, and a narrow tail-like posterior, the metasoma or post-abdomen. External differences between the sexes are not obvious in most species. In some, the metasoma is more elongated in males than females.
Cephalothorax
The cephalothorax comprises the carapace, eyes, chelicerae (mouth parts), pedipalps (which have chelae, commonly called claws or pincers) and four pairs of walking legs. Scorpions have two eyes on the top of the cephalothorax, and usually two to five pairs of eyes along the front corners of the cephalothorax. While unable to form sharp images, their central eyes are amongst the most light sensitive in the animal kingdom, especially in dim light, which makes it possible for nocturnal species to use starlight to navigate at night. The chelicerae are at the front and underneath the carapace. They are pincer-like and have three segments and sharp "teeth". The brain of a scorpion is in the back of the cephalothorax, just above the esophagus. As in other arachnids, the nervous system is highly concentrated in the cephalothorax, but has a long ventral nerve cord with segmented ganglia which may be a primitive trait.
The pedipalp is a segmented, clawed appendage used for prey immobilization, defense and sensory purposes. The segments of the pedipalp (from closest to the body outward) are coxa, trochanter, femur, patella, tibia (including the fixed claw and the manus) and tarsus (moveable claw). A scorpion has darkened or granular raised linear ridges, called "keels" or "carinae" on the pedipalp segments and on other parts of the body; these are useful as taxonomic characters. Unlike those of some other arachnids, the legs have not been modified for other purposes, though they may occasionally be used for digging, and females may use them to catch emerging young. The legs are covered in proprioceptors, bristles and sensory setae. Depending on the species, the legs may have spines and spurs.
Mesosoma
The mesosoma or preabdomen is the broad part of the opisthosoma. In the early stages of embryonic development the mesosoma consists of eight segments, but the first segment disappears before birth, so the mesosoma in scorpions actually consists of segments 2–8. These anterior seven somites (segments) of the opisthosoma are each covered dorsally by a sclerotized plate called the tergite. Ventrally, somites 3 to 7 are armored with matching plates called sternites. The ventral side of somite 1 has a pair of genital opercula covering the gonopore. Sternite 2 forms the basal plate bearing the pectines, which function as sensory organs.
The next four somites, 3 to 6, all bear pairs of spiracles. They serve as openings for the scorpion's respiratory organs, known as book lungs. The spiracle openings may be slits, circular, elliptical or oval according to the species. There are thus four pairs of book lungs; each consists of some 140 to 150 thin lamellae filled with air inside a pulmonary chamber, connected on the ventral side to an atrial chamber which opens into a spiracle. Bristles hold the lamellae apart. A muscle opens the spiracle and widens the atrial chamber; dorsoventral muscles contract to compress the pulmonary chamber, forcing air out, and relax to allow the chamber to refill. The 7th and last somite does not bear appendages or any other significant external structures.
The mesosoma contains the heart or "dorsal vessel" which is the center of the scorpion's open circulatory system. The heart is continuous with a deep arterial system which spreads throughout the body. Sinuses return deoxygenated blood (hemolymph) to the heart; the blood is re-oxygenated by cardiac pores. The mesosoma also contains the reproductive system. The female gonads are made of three or four tubes that run parallel to each other and are connected by two to four transverse anastomoses. These tubes are the sites for both oocyte formation and embryonic development. They connect to two oviducts which connect to a single atrium leading to the genital orifice. Males have two gonads made of two cylindrical tubes with a ladder-like configuration; they contain cysts which produce spermatozoa. Both tubes end in a spermiduct, one on each side of the mesosoma. They connect to glandular symmetrical structures called paraxial organs, which end at the genital orifice. These secrete chitin-based structures which come together to form the spermatophore.
Metasoma
The "tail" or metasoma consists of five segments and the telson, which is not strictly a segment. The five segments are merely body rings; they lack apparent sterna or terga, and become larger distally. These segments have keels, setae and bristles which may be used for taxonomic classification. The anus is at the distal and ventral end of the last segment, and is encircled by four anal papillae and the anal arch. The tails of some species contain light receptors.
The telson includes the vesicle, which contains a symmetrical pair of venom glands. Externally it bears the curved stinger, the hypodermic aculeus, equipped with sensory hairs. Each venom gland has its own duct that conveys its secretion along the aculeus from the bulb of the gland to a point just short of the tip, where each of the paired ducts has its own venom pore. An extrinsic muscle system in the tail moves the telson forward and drives the aculeus into the target, while an intrinsic muscle system attached to the glands pumps venom through the stinger into the intended victim. The stinger contains metalloproteins with zinc, which harden the tip. The optimal stinging angle is around 30 degrees relative to the tip.
Biology
Most scorpion species are nocturnal or crepuscular, finding shelter during the day in burrows, cracks in rocks and tree bark. Many species dig a shelter underneath stones a few centimeters long. Some may use burrows made by other animals, including spiders, reptiles and small mammals. Other species dig their own burrows, which vary in complexity and depth; Hadrurus species dig especially deep burrows. Digging is done using the mouth parts, claws and legs. In several species, particularly of the family Buthidae, individuals may gather in the same shelter; bark scorpions may aggregate in groups of up to 30 individuals. In some species, families of females and young sometimes aggregate.
Scorpions prefer areas where temperatures remain moderate, but they may survive temperatures from well below freezing to desert heat. Scorpions can withstand intense heat: Leiurus quinquestriatus, Scorpio maurus and Hadrurus arizonensis can tolerate very high temperatures if they are sufficiently hydrated. Desert species must deal with the extreme changes in temperature from day to night or between seasons; Pectinibuthus birulai lives across a particularly wide temperature range. Scorpions that live outside deserts prefer lower temperatures. The ability to resist cold may be related to an increase in the sugar trehalose when the temperature drops. Some species hibernate. Scorpions appear to be resistant to ionizing radiation; this was discovered in the early 1960s, when scorpions were found to be among the few animals to survive nuclear tests at Reggane, Algeria.
Desert scorpions have several adaptations for water conservation. They excrete insoluble compounds such as xanthine, guanine, and uric acid, not requiring water for their removal from the body. Guanine is the main component and maximizes the amount of nitrogen excreted. A scorpion's cuticle holds in moisture via lipids and waxes from epidermal glands, and protects against ultraviolet radiation. Even when dehydrated, a scorpion can tolerate high osmotic pressure in its blood. Desert scorpions get most of their moisture from the food they eat but some can absorb water from the humid soil. Species that live in denser vegetation and in more moderate temperatures will drink water on plants and in puddles.
A scorpion uses its stinger both to kill prey and in defense. Some species make direct, quick strikes with their tails, while others make slower, more circular strikes that can more easily return the stinger to a position from which it can strike again. Leiurus quinquestriatus can whip its tail at high speed in a defensive strike.
Mortality and defense
Scorpions may be attacked by other arthropods like ants, spiders, solifugids and centipedes. Major predators include frogs, lizards, snakes, birds, and mammals. Meerkats are somewhat specialized in preying on scorpions, biting off their stingers and being immune to their venom. Other predators adapted for hunting scorpions include the grasshopper mouse and desert long-eared bat, which are also immune to their venom. In one study, 70% of the latter's droppings contained scorpion fragments. Scorpions host parasites including mites, scuttle flies, nematodes and some bacteria. The immune system of scorpions gives them resistance to infection by many types of bacteria.
When threatened, a scorpion raises its claws and tail in a defensive posture. Some species stridulate to warn off predators by rubbing certain hairs, the stinger or the claws. Certain species prefer to use either the claws or the stinger in defense, depending on the size of those appendages. A few scorpions, such as Parabuthus, Centruroides margaritatus, and Hadrurus arizonensis, squirt venom in a narrow jet to warn off potential predators, possibly injuring them in the eyes. Some Ananteris species can shed parts of their tail to escape predators. The parts do not grow back, leaving them unable to sting and defecate, but they can still catch small prey and reproduce for at least eight months afterward.
Diet and feeding
Scorpions generally prey on insects, particularly grasshoppers, crickets, termites, beetles and wasps. Other prey include spiders, solifugids, woodlice and even small vertebrates such as lizards, snakes and mammals. Species with large claws may prey on earthworms and mollusks. The majority of species are opportunistic and consume a variety of prey, though some are highly specialized; Isometroides vescus specializes on burrowing spiders. Prey size depends on the size of the species. Several scorpion species are sit-and-wait predators, waiting for prey at or near the entrance to their burrow; others actively seek it out. Scorpions detect prey with mechanoreceptive and chemoreceptive hairs on their bodies and capture it with their claws. Small prey are simply killed with the claws, particularly by large-clawed species, while larger and more aggressive prey is given a sting.
Scorpions, like other arachnids, digest their food externally. The chelicerae, which are very sharp, are used to pull small amounts of food off the prey item into a pre-oral cavity below the chelicerae and carapace. The digestive juices from the gut are egested onto the food, and the digested food is then sucked into the gut in liquid form. Any solid indigestible matter (such as exoskeleton fragments) is trapped by setae in the pre-oral cavity and ejected. The sucked-in food is pumped into the midgut by the pharynx, where it is further digested. The waste passes through the hindgut and out of the anus. Scorpions can consume large amounts of food during one meal. They have an efficient food storage organ, a very low metabolic rate and a relatively inactive lifestyle, which enables some to survive six to twelve months of starvation.
Mating
Most scorpions reproduce sexually, with male and female individuals; species in some genera, such as Hottentotta and Tityus, and the species Centruroides gracilis, Liocheles australasiae, and Ananteris coineaui have been reported, not necessarily reliably, to reproduce through parthenogenesis, in which unfertilized eggs develop into living embryos. Receptive females produce pheromones which are picked up by wandering males using their pectines to comb the substrate. Males begin courtship by moving their bodies back and forth, without moving the legs, a behavior known as juddering. This appears to produce ground vibrations that are picked up by the female.
The pair then make contact using their pedipalps, and perform a dance called the promenade à deux (French for "a walk for two"). In this dance, the male and female move back and forth while facing each other, as the male searches for a suitable place to deposit his spermatophore. The courtship ritual can involve several other behaviors such as a cheliceral kiss, in which the male and female grasp each other's mouth-parts, arbre droit ("upright tree") where the partners elevate their posteriors and rub their tails together, and sexual stinging, in which the male stings the female in the chelae or mesosoma to subdue her. The dance can last from a few minutes to several hours.
When the male has located a suitably stable substrate, such as hard ground, agglomerated sand, rock, or tree bark, he deposits the spermatophore and guides the female over it. This allows the spermatophore to enter her genital opercula, which triggers release of the sperm, thus fertilizing the female. A mating plug then forms in the female to prevent her from mating again before the young are born. The male and female then abruptly separate. Sexual cannibalism after mating has only been reported anecdotally in scorpions.
Birth and development
Gestation in scorpions can last for over a year in some species. They have two types of embryonic development: apoikogenic and katoikogenic. In the apoikogenic system, which is mainly found in the Buthidae, embryos develop in yolk-rich eggs inside follicles. The katoikogenic system is documented in Hemiscorpiidae, Scorpionidae and Diplocentridae, and involves the embryos developing in a diverticulum which has a teat-like structure for them to feed through. Unlike the majority of arachnids, which are oviparous, hatching from eggs, scorpions seem to be universally viviparous, with live births. They are unusual among terrestrial arthropods in the amount of care a female gives to her offspring. The size of a brood varies by species, from 3 to over 100. The body size of scorpions is not correlated either with brood size or with life cycle length.
Before giving birth, the female elevates the front of her body and positions her pedipalps and front legs under her to catch the young ("birth basket"). The young emerge one by one from the genital opercula, expel the embryonic membrane, if any, and are placed on the mother's back where they remain until they have gone through at least one molt. The period before the first molt is called the pro-juvenile stage; the young are unable to feed or sting, but have suckers on their tarsi, used to hold on to their mother. This period lasts 5 to 25 days, depending on the species. The brood molt for the first time simultaneously in a process that lasts 6 to 8 hours, marking the beginning of the juvenile stage.
Juvenile stages or instars generally resemble smaller versions of adults, with fully developed pincers, hairs and stingers. They are still soft and lack pigments, and thus continue to ride on their mother's back for protection. They become harder and more pigmented over the next couple of days. They may leave their mother temporarily, returning when they sense potential danger. Once the exoskeleton is fully hardened, the young can hunt prey on their own and may soon leave their mother. A scorpion may molt six times on average before reaching maturity, which may not occur until it is 6 to 83 months old, depending on the species. Some species may live up to 25 years.
Fluorescence
Scorpions glow a vibrant blue-green when exposed to certain wavelengths of ultraviolet light, such as that produced by a black light, due to fluorescent chemicals such as beta-carboline in the cuticle. Accordingly, a hand-held ultraviolet lamp has long been a standard tool for nocturnal field surveys of these animals. Fluorescence occurs as a result of sclerotization and increases in intensity with each successive instar. This fluorescence may have an active role in the scorpion's ability to detect light.
Relationship with humans
Stings
Scorpion venom serves to kill or paralyze prey rapidly. The stings of many species are uncomfortable, but only 25 species have venom that is deadly to humans. Those species belong to the family Buthidae, including Leiurus quinquestriatus, Hottentotta spp., Centruroides spp., and Androctonus spp. People with allergies are especially at risk; otherwise, first aid is symptomatic, with analgesia. Cases of very high blood pressure are treated with medications that relieve anxiety and relax the blood vessels. Scorpion envenomation with high morbidity and mortality is usually due to either excessive autonomic activity and cardiovascular toxic effects, or neuromuscular toxic effects. Antivenom is the specific treatment for scorpion envenomation combined with supportive measures including vasodilators in patients with cardiovascular toxic effects, and benzodiazepines when there is neuromuscular involvement. Although rare, severe hypersensitivity reactions including anaphylaxis to scorpion antivenin are possible.
Scorpion stings are a public health problem, particularly in the tropical and subtropical regions of the Americas, North Africa, the Middle East and India. Around 1.5 million scorpion envenomations occur each year with around 2,600 deaths. Mexico is one of the most affected countries, with the highest biodiversity of scorpions in the world, some 200,000 envenomations per year and at least 300 deaths.
Efforts are made to prevent envenomation and to control scorpion populations. Prevention encompasses personal activities such as checking shoes and clothes before putting them on, not walking in bare feet or sandals, and filling in holes and cracks where scorpions might nest. Street lighting reduces scorpion activity. Control may involve the use of insecticides such as pyrethroids, or gathering scorpions manually with the help of ultraviolet lights. Domestic predators of scorpions, such as chickens and turkeys, can help to reduce the risk to a household.
Potential medical use
Scorpion venom is a mixture of neurotoxins; most of these are peptides, chains of amino acids. Many of them interfere with membrane channels that transport sodium, potassium, calcium, or chloride ions. These channels are essential for nerve conduction, muscle contraction and many other biological processes. Some of these molecules may be useful in medical research and might lead to the development of new disease treatments. Among their potential therapeutic uses are as analgesic, anti-cancer, antibacterial, antifungal, antiviral, antiparasitic, bradykinin-potentiating, and immunosuppressive drugs. As of 2020, no scorpion toxin-based drug is for sale, though chlorotoxin is being trialled for use against glioma, a brain cancer.
Consumption
Scorpions are eaten by people in West Africa, Myanmar and East Asia. Fried scorpion is traditionally eaten in Shandong, China. There, scorpions may be roasted, fried or grilled, or eaten raw or even alive. The stingers are typically not removed, since direct and sustained heat negates the harmful effects of the venom. In Thailand, scorpions are not eaten as often as other arthropods, such as grasshoppers, but they are sometimes fried as street food. They are used in Vietnam to make snake wine (scorpion wine).
Pets
Scorpions are often kept as pets. They are relatively simple to keep, the main requirements being a secure enclosure such as a glass terrarium with a lockable lid and the appropriate temperature and humidity for the chosen species, which typically means installing a heating mat and spraying regularly with a little water. The substrate needs to resemble that of the species' natural environment, such as peat for forest species, or lateritic sand for burrowing desert species. Scorpions in the genera Pandinus and Heterometrus are docile enough to handle. A large Pandinus may consume up to three crickets each week. Cannibalism is more common in captivity than in the wild and can be minimized by providing many small shelters within the enclosure and ensuring there is plenty of prey. The pet trade has threatened wild populations of some scorpion species, particularly Androctonus australis and Pandinus imperator.
Culture
The scorpion is a culturally significant animal, appearing as a motif in art, especially in Islamic art in the Middle East.
A scorpion motif is often woven into Turkish kilim flatweave carpets, for protection from their sting. The scorpion is perceived both as an embodiment of evil and as a protective force such as a dervish's powers to combat evil. In Muslim folklore, the scorpion portrays human sexuality. Scorpions are used in folk medicine in South Asia, especially in antidotes for scorpion stings.
One of the earliest occurrences of the scorpion in culture is its inclusion, as Scorpio, in the 12 signs of the Zodiac by Babylonian astronomers during the Chaldean period. This was then taken up by western astrology; in astronomy the corresponding constellation is named Scorpius.
In ancient Egypt, the goddess Serket, who protected the Pharaoh, was often depicted as a scorpion.
In ancient Greece, a warrior's shield sometimes carried a scorpion device, as seen in red-figure pottery from the 5th century BC. In Greek mythology, Artemis or Gaia sent a giant scorpion to kill the hunter Orion, who had said he would kill all the world's animals. Orion and the scorpion both became constellations; as enemies they were placed on opposite sides of the world, so when one rises in the sky, the other sets. Scorpions are mentioned in the Bible and the Talmud as symbols of danger and maliciousness.
The fable of The Scorpion and the Frog has been interpreted as showing that vicious people cannot resist hurting others, even when it is not in their interests. More recently, the action in John Steinbeck's 1947 novella The Pearl centers on a poor pearl fisherman's attempts to save his infant son from a scorpion sting, only to lose him to human violence. Scorpions have equally appeared in western artforms including film and poetry: the surrealist filmmaker Luis Buñuel made symbolic use of scorpions in his 1930 classic L'Age d'or (The Golden Age), while Stevie Smith's last collection of poems was entitled Scorpion and other Poems. A variety of martial arts films and video games have been entitled Scorpion King.
Since classical times, the scorpion with its powerful stinger has been used to provide a name for weapons. In the Roman army, the scorpio was a torsion siege engine used to shoot a projectile. The British Army's FV101 Scorpion was an armored reconnaissance vehicle or light tank in service from 1972 to 1994. A version of the Matilda II tank, fitted with a flail to clear mines, was named the Matilda Scorpion.
Several ships of the Royal Navy and of the US Navy have been named Scorpion including an 18-gun sloop in 1803, a turret ship in 1863, a patrol yacht in 1898, a destroyer in 1910, and a nuclear submarine in 1960.
The scorpion has served as the name or symbol of products and brands including Italy's Abarth racing cars and a Montesa scrambler motorcycle.
A hand- or forearm-balancing asana in modern yoga as exercise with the back arched and one or both legs pointing forward over the head in the manner of the scorpion's tail is called Scorpion pose.
| Biology and health sciences | Arachnids | null |
28927 | https://en.wikipedia.org/wiki/Stellar%20classification | Stellar classification | In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines. Each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary mainly due to the temperature of the photosphere, although in some cases there are true abundance differences. The spectral class of a star is a short code primarily summarizing the ionization state, giving an objective measure of the photosphere's temperature.
Most stars are currently classified under the Morgan–Keenan (MK) system using the letters O, B, A, F, G, K, and M, a sequence from the hottest (O type) to the coolest (M type). Each letter class is then subdivided using a numeric digit with 0 being hottest and 9 being coolest (e.g., A8, A9, F0, and F1 form a sequence from hotter to cooler). The sequence has been expanded with three classes for other stars that do not fit in the classical system: W, S and C. Some stellar remnants or objects of deviating mass have also been assigned letters: D for white dwarfs and L, T and Y for brown dwarfs (and exoplanets).
In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for subgiants, class V for main-sequence stars, class sd (or VI) for subdwarfs, and class D (or VII) for white dwarfs. The full spectral class for the Sun is then G2V, indicating a main-sequence star with a surface temperature around 5,800 K.
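As an illustration of how a full MK designation combines a temperature letter, a numeric subclass and a luminosity class, the short Python sketch below splits a simple type string such as G2V or O9.7 into those parts. It is a simplified, hypothetical parser written for this article's examples only; real catalogue designations carry prefixes, intermediate classes and peculiarity suffixes that are not handled here.

```python
import re

# Order of the classical temperature classes, hottest to coolest.
TEMPERATURE_CLASSES = "OBAFGKM"

def parse_mk_type(spectral_type: str):
    """Split a simple MK designation (e.g. 'G2V', 'O9.7', 'B2Ia') into a
    temperature class, a numeric subclass and a luminosity class.
    Simplified sketch: prefixes, intermediate classes such as A3-4III/IV
    and peculiarity suffixes are not handled."""
    match = re.fullmatch(
        r"([OBAFGKM])(\d(?:\.\d)?)?(0|I[ab]?\+?|I{2,3}|IV|V|VI|VII)?",
        spectral_type.strip())
    if match is None:
        raise ValueError(f"unrecognised spectral type: {spectral_type!r}")
    letter, subclass, luminosity = match.groups()
    return {
        "temperature_class": letter,
        "subclass": float(subclass) if subclass else None,
        "luminosity_class": luminosity,  # e.g. 'V' = main sequence
        # Lower rank = hotter; useful for sorting stars hottest-first.
        "temperature_rank": TEMPERATURE_CLASSES.index(letter)
                            + (float(subclass) if subclass else 0) / 10,
    }

# The Sun (G2V) and Mu Normae (O9.7, luminosity class omitted here):
print(parse_mk_type("G2V"))
print(parse_mk_type("O9.7"))
```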
Conventional colour description
The conventional colour description takes into account only the peak of the stellar spectrum. In actuality, however, stars radiate in all parts of the spectrum. Because all spectral colours combined appear white, the actual apparent colours the human eye would observe are far lighter than the conventional colour descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colours within the spectrum can be misleading. Excluding colour-contrast effects in dim light, in typical viewing conditions there are no green, cyan, indigo, or violet stars. "Yellow" dwarfs such as the Sun are white, "red" dwarfs are a deep shade of yellow/orange, and "brown" dwarfs do not literally appear brown, but hypothetically would appear dim red or grey/black to a nearby observer.
Modern classification
The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class (from the older Harvard spectral classification, which did not include luminosity) and a luminosity class using Roman numerals as explained below, forming the star's spectral type.
Other modern stellar classification systems, such as the UBV system, are based on color indices—the measured differences in three or more color magnitudes. Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters (e.g. Ultraviolet, Blue and Visual).
Harvard spectral classification
The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified the prior alphabetical system by Draper (see History). Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions. Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars – in particular, newly-formed white dwarfs – can have surface temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are normally listed from hottest to coldest.
A common mnemonic for remembering the order of the spectral type letters, from hottest to coolest, is "Oh, Be A Fine Guy/Girl: Kiss Me!", or another one is "Our Bright Astronomers Frequently Generate Killer Mnemonics!".
The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals (0–9), where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; for example, the star Mu Normae is classified as O9.7. The Sun is classified as G2.
The fact that the Harvard classification of a star indicated its surface or photospheric temperature (or more precisely, its effective temperature) was not fully understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated (by 1914), this was generally suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere, then to stellar spectra.
Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is actually a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon (largely subjective) estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals.
Yerkes spectral classification
The Yerkes spectral classification, also called the MK or Morgan–Keenan system (alternatively the MKK, or Morgan–Keenan–Kellman, system) from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan, Philip C. Keenan, and Edith Kellman from Yerkes Observatory. This two-dimensional (temperature and luminosity) classification scheme is based on spectral lines sensitive to stellar temperature and surface gravity, which is related to luminosity (whilst the Harvard classification is based on just surface temperature). Later, in 1953, after some revisions to the list of standard stars and classification criteria, the scheme was named the Morgan–Keenan classification, or MK, which remains in use today.
Denser stars with higher surface gravity exhibit greater pressure broadening of spectral lines. The gravity, and hence the pressure, on the surface of a giant star is much lower than for a dwarf star because the radius of the giant is much greater than a dwarf of similar mass. Therefore, differences in the spectrum can be interpreted as luminosity effects and a luminosity class can be assigned purely from examination of the spectrum.
A number of different luminosity classes are distinguished, as listed in the table below.
Marginal cases are allowed; for example, a star may be either a supergiant or a bright giant, or may be in between the subgiant and main-sequence classifications.
In these cases, two special symbols are used:
A slash (/) means that a star is either one class or the other.
A dash (-) means that the star is in between the two classes.
For example, a star classified as A3-4III/IV would be in between spectral types A3 and A4, while being either a giant star or a subgiant.
Sub-dwarf classes have also been used: VI for sub-dwarfs (stars slightly less luminous than the main sequence).
Nominal luminosity class VII (and sometimes higher numerals) is now rarely used for white dwarf or "hot sub-dwarf" classes, since the temperature-letters of the main sequence and giant stars no longer apply to white dwarfs.
Occasionally, letters a and b are applied to luminosity classes other than supergiants; for example, a giant star slightly less luminous than typical may be given a luminosity class of IIIb, while a luminosity class IIIa indicates a star slightly brighter than a typical giant.
A sample of extreme V stars with strong absorption in He II λ4686 spectral lines have been given the Vz designation. An example star is HD 93129 B.
Spectral peculiarities
Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum.
For example, 59 Cygni is listed as spectral type B1.5Vnne, indicating a spectrum with the general classification B1.5V, as well as very broad absorption lines and certain emission lines.
History
The reason for the odd arrangement of letters in the Harvard classification is historical, having evolved from the earlier Secchi classes and been progressively modified as understanding improved.
Secchi classes
During the 1860s and 1870s, pioneering stellar spectroscopist Angelo Secchi created the Secchi classes in order to classify observed spectra. By 1866, he had developed three classes of stellar spectra, shown in the table below.
In the late 1890s, this classification began to be superseded by the Harvard classification, which is discussed in the remainder of this article.
The Roman numerals used for Secchi classes should not be confused with the completely unrelated Roman numerals used for Yerkes luminosity classes and the proposed neutron star classes.
Draper system
In the 1880s, the astronomer Edward C. Pickering began to make a survey of stellar spectra at the Harvard College Observatory, using the objective-prism method. A first result of this work was the Draper Catalogue of Stellar Spectra, published in 1890. Williamina Fleming classified most of the spectra in this catalogue and was credited with classifying over 10,000 featured stars and discovering 10 novae and more than 200 variable stars. With the help of the Harvard computers, especially Williamina Fleming, the first iteration of the Henry Draper catalogue was devised to replace the Roman-numeral scheme established by Angelo Secchi.
The catalogue used a scheme in which the previously used Secchi classes (I to V) were subdivided into more specific classes, given letters from A to P. Also, the letter Q was used for stars not fitting into any other class. Fleming worked with Pickering to differentiate 17 different classes based on the intensity of hydrogen spectral lines, which causes variation in the wavelengths emanated from stars and results in variation in color appearance. The spectra in class A tended to produce the strongest hydrogen absorption lines while spectra in class O produced virtually no visible lines. The lettering system displayed the gradual decrease in hydrogen absorption in the spectral classes when moving down the alphabet. This classification system was later modified by Annie Jump Cannon and Antonia Maury to produce the Harvard spectral classification scheme.
The old Harvard system (1897)
In 1897, another astronomer at Harvard, Antonia Maury, placed the Orion subtype of Secchi class I ahead of the remainder of Secchi class I, thus placing the modern type B ahead of the modern type A. She was the first to do so, although she did not use lettered spectral types, but rather a series of twenty-two types numbered from I–XXII.
Because the 22 Roman numeral groupings did not account for additional variations in spectra, three additional divisions were made to further specify differences: Lowercase letters were added to differentiate relative line appearance in spectra; the lines were defined as:
(a): average width
(b): hazy
(c): sharp
Antonia Maury published her own stellar classification catalogue in 1897 called "Spectra of Bright Stars Photographed with the 11 inch Draper Telescope as Part of the Henry Draper Memorial", which included 4,800 photographs and Maury's analyses of 681 bright northern stars. This was the first instance in which a woman was credited for an observatory publication.
The current Harvard system (1912)
In 1901, Annie Jump Cannon returned to the lettered types, but dropped all letters except O, B, A, F, G, K, M, and N used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one fifth of the way from F to G, and so on.
Finally, by 1912, Cannon had changed the types B, A, B5A, F2G, etc. to B0, A0, B5, F2, etc. This is essentially the modern form of the Harvard classification system. This system was developed through the analysis of spectra on photographic plates, which could convert light emanated from stars into a readable spectrum.
Mount Wilson classes
A luminosity classification known as the Mount Wilson system was used to distinguish between stars of different luminosities. This notation system is still sometimes seen on modern spectra.
sd: subdwarf
d: dwarf
sg: subgiant
g: giant
c: supergiant
Spectral types
The stellar classification system is taxonomic, based on type specimens, similar to classification of species in biology: The categories are defined by one or more standard stars for each category and sub-category, with an associated description of the distinguishing features.
"Early" and "late" nomenclature
Stars are often referred to as early or late types. "Early" is a synonym for hotter, while "late" is a synonym for cooler.
Depending on the context, "early" and "late" may be absolute or relative terms. "Early" as an absolute term would therefore refer to O or B, and possibly A stars. As a relative reference it relates to stars hotter than others, such as "early K" being perhaps K0, K1, K2 and K3.
"Late" is used in the same way, with an unqualified use of the term indicating stars with spectral types such as K and M, but it can also be used for stars that are cool relative to other stars, as in using "late G" to refer to G7, G8, and G9.
In the relative sense, "early" means a lower Arabic numeral following the class letter, and "late" means a higher number.
This obscure terminology is a hold-over from a late nineteenth century model of stellar evolution, which supposed that stars were powered by gravitational contraction via the Kelvin–Helmholtz mechanism, which is now known to not apply to main-sequence stars. If that were true, then stars would start their lives as very hot "early-type" stars and then gradually cool down into "late-type" stars. This mechanism provided ages of the Sun that were much smaller than what is observed in the geologic record, and was rendered obsolete by the discovery that stars are powered by nuclear fusion. The terms "early" and "late" were carried over, beyond the demise of the model they were based on.
Class O
O-type stars are very hot and extremely luminous, with most of their radiated output in the ultraviolet range. These are the rarest of all main-sequence stars. About 1 in 3,000,000 (0.00003%) of the main-sequence stars in the solar neighborhood are O-type stars. Some of the most massive stars lie within this spectral class. O-type stars frequently have complicated surroundings that make measurement of their spectra difficult.
O-type spectra formerly were defined by the ratio of the strength of He II λ4541 to that of He I λ4471, where λ is the radiation wavelength. Spectral type O7 was defined to be the point at which the two intensities are equal, with the He I line weakening towards earlier types. Type O3 was, by definition, the point at which said line disappears altogether, although it can be seen very faintly with modern technology. Due to this, the modern definition uses the ratio of the nitrogen line N IV λ4058 to N III λλ4634-40-42.
O-type stars have dominant lines of absorption and sometimes emission for He II lines, prominent ionized (Si IV, O III, N III, and C III) and neutral helium lines, strengthening from O5 to O9, and prominent hydrogen Balmer lines, although not as strong as in later types. Higher-mass O-type stars do not retain extensive atmospheres due to the extreme velocity of their stellar wind, which may reach 2,000 km/s. Because they are so massive, O-type stars have very hot cores and burn through their hydrogen fuel very quickly, so they are the first stars to leave the main sequence.
When the MKK classification scheme was first described in 1943, the only subtypes of class O used were O5 to O9.5. The MKK scheme was extended to O9.7 in 1971 and O4 in 1978, and new classification schemes that add types O2, O3, and O3.5 have subsequently been introduced.
Spectral standards:
O7V – S Monocerotis
O9V – 10 Lacertae
Class B
B-type stars are very luminous and blue. Their spectra have neutral helium lines, which are most prominent at the B2 subclass, and moderate hydrogen lines. As O- and B-type stars are so energetic, they only live for a relatively short time. Thus, due to the low probability of kinematic interaction during their lifetime, they are unable to stray far from the area in which they formed, apart from runaway stars.
The transition from class O to class B was originally defined to be the point at which the He II λ4541 disappears. However, with modern equipment, the line is still apparent in the early B-type stars. Today for main-sequence stars, the B class is instead defined by the intensity of the He I violet spectrum, with the maximum intensity corresponding to class B2. For supergiants, lines of silicon are used instead; the Si IV λ4089 and Si III λ4552 lines are indicative of early B. At mid-B, the intensity of the latter relative to that of Si II λλ4128-30 is the defining characteristic, while for late B, it is the intensity of Mg II λ4481 relative to that of He I λ4471.
These stars tend to be found in their originating OB associations, which are associated with giant molecular clouds. The Orion OB1 association occupies a large portion of a spiral arm of the Milky Way and contains many of the brighter stars of the constellation Orion. About 1 in 800 (0.125%) of the main-sequence stars in the solar neighborhood are B-type main-sequence stars. B-type stars are relatively uncommon and the closest is Regulus, at around 80 light years.
Massive yet non-supergiant stars known as Be stars have been observed to show one or more Balmer lines in emission, with the hydrogen-related electromagnetic radiation series projected out by the stars being of particular interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant attrition of stellar mass as the objects rotate at a curiously rapid rate.
Objects known as B[e] stars – or B(e) stars for typographic reasons – possess distinctive neutral or low-ionisation emission lines that are considered "forbidden", arising from transitions that are highly improbable under the usual selection rules of quantum mechanics.
Spectral standards:
B0V – Upsilon Orionis
B0Ia – Alnilam
B2Ia – Chi2 Orionis
B2Ib – 9 Cephei
B3V – Eta Ursae Majoris
B3V – Eta Aurigae
B3Ia – Omicron2 Canis Majoris
B5Ia – Eta Canis Majoris
B8Ia – Rigel
Class A
A-type stars are among the more common naked-eye stars, and are white or bluish-white. They have strong hydrogen lines, at a maximum by A0, and also lines of ionized metals (Fe II, Mg II, Si II) at a maximum at A5. Ca II lines are notably strengthening by this point. About 1 in 160 (0.625%) of the main-sequence stars in the solar neighborhood are A-type stars, which includes 9 stars within 15 parsecs.
Spectral standards:
A0Van – Gamma Ursae Majoris
A0Va – Vega
A0Ib – Eta Leonis
A0Ia – HD 21389
A1V – Sirius A
A2Ia – Deneb
A3Va – Fomalhaut
Class F
F-type stars have strengthening H and K spectral lines of Ca II. Neutral metals (Fe I, Cr I) begin to gain on ionized metal lines by late F. Their spectra are characterized by weaker hydrogen lines and ionized metals. Their color is white. About 1 in 33 (3.03%) of the main-sequence stars in the solar neighborhood are F-type stars, including one star, Procyon A, within 20 light-years.
Spectral standards:
F0IIIa – Zeta Leonis
F0Ib – Alpha Leporis
F1V - 37 Ursae Majoris
F2V – 78 Ursae Majoris
F7V - Iota Piscium
F9V - Beta Virginis
F9V - HD 10647
Class G
G-type stars, including the Sun, have prominent spectral lines H and K of Ca II, which are most pronounced at G2. They have even weaker hydrogen lines than F, but along with the ionized metals, they have neutral metals. There is a prominent spike in the G band of CN molecules. Class G main-sequence stars make up about 7.5%, nearly one in thirteen, of the main-sequence stars in the solar neighborhood. There are 21 G-type stars within 10 pc.
Class G contains the "Yellow Evolutionary Void". Supergiant stars often swing between O or B (blue) and K or M (red). While they do this, they do not stay for long in the unstable yellow supergiant class.
Spectral standards:
G0V – Beta Canum Venaticorum
G0IV – Eta Boötis
G0Ib – Beta Aquarii
G2V – Sun
G5V – Kappa1 Ceti
G5IV – Mu Herculis
G5Ib – 9 Pegasi
G8V – 61 Ursae Majoris
G8IV – Beta Aquilae
G8IIIa – Kappa Geminorum
G8IIIab – Epsilon Virginis
G8Ib – Epsilon Geminorum
Class K
K-type stars are orangish stars that are slightly cooler than the Sun. They make up about 12% of the main-sequence stars in the solar neighborhood. There are also giant K-type stars, ranging from hypergiants such as RW Cephei to giants and supergiants such as Arcturus, whereas orange dwarfs such as Alpha Centauri B are main-sequence stars.
They have extremely weak hydrogen lines, if those are present at all, and mostly neutral metals (Mn I, Fe I, Si I). By late K, molecular bands of titanium oxide become present. Because K-type stars are long-lived and emit lower levels of harmful radiation, mainstream theories suggest such stars offer the optimal chances of heavily evolved life developing on orbiting planets (if such life is directly analogous to Earth's), since they combine a broad habitable zone with much lower harmful emission than the stars with the broadest such zones.
Spectral standards:
K0V – Sigma Draconis
K0III – Pollux
K0III – Epsilon Cygni
K2V – Epsilon Eridani
K2III – Kappa Ophiuchi
K3III – Rho Boötis
K5V – 61 Cygni A
K5III – Gamma Draconis
Class M
Class M stars are by far the most common. About 76% of the main-sequence stars in the solar neighborhood are class M stars. However, class M main-sequence stars (red dwarfs) have such low luminosities that none are bright enough to be seen with the unaided eye, unless under exceptional conditions. The brightest-known M class main-sequence star is Lacaille 8760, class M0V, with magnitude 6.7 (the limiting magnitude for typical naked-eye visibility under good conditions being typically quoted as 6.5), and it is extremely unlikely that any brighter examples will be found.
Although most class M stars are red dwarfs, most of the largest-known supergiant stars in the Milky Way are class M stars, such as VY Canis Majoris, VV Cephei, Antares, and Betelgeuse. Furthermore, some larger, hotter brown dwarfs are late class M, usually in the range of M6.5 to M9.5.
The spectrum of a class M star contains lines from oxide molecules (in the visible spectrum, especially TiO) and all neutral metals, but absorption lines of hydrogen are usually absent. TiO bands can be strong in class M stars, usually dominating their visible spectrum by about M5. Vanadium(II) oxide bands become present by late M.
Spectral standards:
M0IIIa – Beta Andromedae
M2III – Chi Pegasi
M1-M2Ia-Iab – Betelgeuse
M2Ia – Mu Cephei ("Herschel's garnet")
Extended spectral types
A number of new spectral types have been taken into use from newly discovered types of stars.
Hot blue emission star classes
Spectra of some very hot and bluish stars exhibit marked emission lines from carbon or nitrogen, or sometimes oxygen.
Class WR (or W): Wolf–Rayet
Once included as type O stars, the Wolf–Rayet stars of class W or WR are notable for spectra lacking hydrogen lines. Instead their spectra are dominated by broad emission lines of highly ionized helium, nitrogen, carbon, and sometimes oxygen. They are thought to mostly be dying supergiants with their hydrogen layers blown away by stellar winds, thereby directly exposing their hot helium shells. Class WR is further divided into subclasses according to the relative strength of nitrogen and carbon emission lines in their spectra (and outer layers).
WR spectra range is listed below:
WN – spectrum dominated by N III-V and He I-II lines
WNE (WN2 to WN5 with some WN6) – hotter or "early"
WNL (WN7 to WN9 with some WN6) – cooler or "late"
Extended WN classes WN10 and WN11 sometimes used for the Ofpe/WN9 stars
h tag used (e.g. WN9h) for WR with hydrogen emission and ha (e.g. WN6ha) for both hydrogen emission and absorption
WN/C – WN stars plus strong C IV lines, intermediate between WN and WC stars
WC – spectrum with strong C II-IV lines
WCE (WC4 to WC6) – hotter or "early"
WCL (WC7 to WC9) – cooler or "late"
WO (WO1 to WO4) – strong O VI lines, extremely rare, extension of the WCE class into extremely hot temperatures (up to 200 kK or more)
Although the central stars of most planetary nebulae (CSPNe) show O-type spectra, around 10% are hydrogen-deficient and show WR spectra. These are low-mass stars and to distinguish them from the massive Wolf–Rayet stars, their spectra are enclosed in square brackets: e.g. [WC]. Most of these show [WC] spectra, some [WO], and very rarely [WN].
Slash stars
The slash stars are O-type stars with WN-like lines in their spectra. The name "slash" comes from their printed spectral type containing a slash (e.g. "Of/WNL").
There is a secondary group found with these spectra, a cooler, "intermediate" group designated "Ofpe/WN9". These stars have also been referred to as WN10 or WN11, but that has become less popular with the realisation of the evolutionary difference from other Wolf–Rayet stars. Recent discoveries of even rarer stars have extended the range of slash stars as far as O2-3.5If*/WN5-7, which are even hotter than the original "slash" stars.
Magnetic O stars
These are O stars with strong magnetic fields; their designation is Of?p.
Cool red and brown dwarf classes
The new spectral types L, T, and Y were created to classify infrared spectra of cool stars. This includes both red dwarfs and brown dwarfs that are very faint in the visible spectrum.
Brown dwarfs, stars that do not undergo hydrogen fusion, cool as they age and so progress to later spectral types. Brown dwarfs start their lives with M-type spectra and will cool through the L, T, and Y spectral classes, faster the less massive they are; the highest-mass brown dwarfs cannot have cooled to Y or even T dwarfs within the age of the universe. Because this leads to an unresolvable overlap in effective temperature and luminosity between different L, T, and Y spectral types for some masses and ages, no distinct temperature or luminosity values can be assigned to these classes.
Class L
Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra.
Due to low surface gravity in giant stars, TiO- and VO-bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for these L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis while in the height of its luminous red nova eruption.
Class T
Class T dwarfs are cool brown dwarfs with low surface temperatures whose emission peaks in the infrared. Methane is prominent in their spectra.
Study of the number of proplyds (protoplanetary disks, clumps of gas in nebulae from which stars and planetary systems are formed) indicates that the number of stars in the galaxy should be several orders of magnitude higher than previously conjectured. It is theorized that these proplyds are in a race with one another. The first to form will become a protostar, a very violent object that will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main-sequence stars or brown dwarfs of the L and T classes, which are effectively invisible to us.
Class Y
Brown dwarfs of spectral class Y are cooler than those of spectral class T and have qualitatively different spectra from them. A total of 17 objects have been placed in class Y as of August 2013. Although such dwarfs have been modelled and detected within forty light-years by the Wide-field Infrared Survey Explorer (WISE), there is no well-defined spectral sequence yet and no prototypes. Nevertheless, several objects have been proposed as spectral classes Y0, Y1, and Y2.
The spectra of these prospective Y objects display absorption around 1.55 micrometers. Delorme et al. have suggested that this feature is due to absorption from ammonia, and that this should be taken as the indicative feature for the T-Y transition. In fact, this ammonia-absorption feature is the main criterion that has been adopted to define this class. However, this feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature.
The latest brown dwarf proposed for the Y spectral type, WISE 1828+2650, is a > Y2 dwarf with an effective temperature originally estimated around 300 K, the temperature of the human body. Parallax measurements have, however, since shown that its luminosity is inconsistent with it being colder than ~400 K. The coolest Y dwarf currently known is WISE 0855−0714 with an approximate temperature of 250 K, and a mass just seven times that of Jupiter.
The mass range for Y dwarfs is 9–25 Jupiter masses, but young objects might reach below one Jupiter mass (although they cool to become planets), which means that Y class objects straddle the 13 Jupiter mass deuterium-fusion limit that marks the current IAU division between brown dwarfs and planets.
Peculiar brown dwarfs
Young brown dwarfs have low surface gravities because they have larger radii and lower masses compared to the field stars of similar spectral type. These sources are marked by the letter beta (β) for intermediate surface gravity and gamma (γ) for low surface gravity. Indications of low surface gravity are weak CaH, K and Na lines, as well as strong VO lines. Alpha (α) stands for normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta (δ). The suffix "pec" stands for peculiar; it is still used for other unusual features and summarizes different properties indicative of low surface gravity, subdwarfs and unresolved binaries.
The prefix sd stands for subdwarf and only includes cool subdwarfs. This prefix indicates a low metallicity and kinematic properties that are more similar to halo stars than to disk stars. Subdwarfs appear bluer than disk objects.
The red suffix describes objects with red color, but an older age. This is not interpreted as low surface gravity, but as a high dust content. The blue suffix describes objects with blue near-infrared colors that cannot be explained with low metallicity. Some are explained as L+T binaries, others are not binaries, such as 2MASS J11263991−5003550 and are explained with thin and/or large-grained clouds.
Late giant carbon-star classes
Carbon-stars are stars whose spectra indicate production of carbon – a byproduct of triple-alpha helium fusion. With increased carbon abundance, and some parallel s-process heavy element production, the spectra of these stars become increasingly deviant from the usual late spectral classes G, K, and M. Equivalent classes for carbon-rich stars are S and C.
The giants among those stars are presumed to produce this carbon themselves, but some stars in this class are double stars, whose odd atmosphere is suspected of having been transferred from a companion that is now a white dwarf, when the companion was a carbon-star.
Class C
Originally classified as R and N stars, these are also known as carbon stars. These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. The old R and N classes ran parallel to the normal classification system from roughly mid-G to late M. These have more recently been remapped into a unified carbon classifier C with N0 starting at roughly C6. Another subset of cool carbon stars are the C–J-type stars, which are characterized by the strong presence of molecules of 13CN in addition to those of 12CN. A few main-sequence carbon stars are known, but the overwhelming majority of known carbon stars are giants or supergiants. There are several subclasses:
C-R – Formerly its own class (R) representing the carbon star equivalent of late G- to early K-type stars.
C-N – Formerly its own class representing the carbon star equivalent of late K- to M-type stars.
C-J – A subtype of cool C stars with a high content of 13C.
C-H – Population II analogues of the C-R stars.
C-Hd – Hydrogen-deficient carbon stars, similar to late G supergiants with CH and C2 bands added.
Class S
Class S stars form a continuum between class M stars and carbon stars. Those most similar to class M stars have strong ZrO absorption bands analogous to the TiO bands of class M stars, whereas those most similar to carbon stars have strong sodium D lines and weak C2 bands. Class S stars have excess amounts of zirconium and other elements produced by the s-process, and have more similar carbon and oxygen abundances to class M or carbon stars. Like carbon stars, nearly all known class S stars are asymptotic-giant-branch stars.
The spectral type is formed by the letter S and a number between zero and ten. This number corresponds to the temperature of the star and approximately follows the temperature scale used for class M giants. The most common types are S3 to S5. The non-standard designation S10 has only been used for the star Chi Cygni when at an extreme minimum.
The basic classification is usually followed by an abundance indication, following one of several schemes: S2,5; S2/5; S2 Zr4 Ti2; or S2*5. A number following a comma is a scale between 1 and 9 based on the ratio of ZrO and TiO. A number following a slash is a more-recent but less-common scheme designed to represent the ratio of carbon to oxygen on a scale of 1 to 10, where a 0 would be an MS star. Intensities of zirconium and titanium may be indicated explicitly. Also occasionally seen is a number following an asterisk, which represents the strength of the ZrO bands on a scale from 1 to 5.
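To make the abundance-notation schemes above concrete, the following Python sketch (a hypothetical helper, not part of any standard tool) interprets the number after the comma, slash or asterisk in designations such as S2,5, S2/5 and S2*5 according to the meanings just described; the explicit-intensity form S2 Zr4 Ti2 is not handled.

```python
def interpret_s_type(designation: str) -> str:
    """Interpret the abundance indicator in a class-S designation such as
    'S2,5', 'S2/5' or 'S2*5', following the schemes described above."""
    schemes = (
        (",", "ZrO/TiO ratio index (scale 1-9)"),
        ("/", "carbon-to-oxygen ratio index (scale 1-10, 0 = MS star)"),
        ("*", "ZrO band strength (scale 1-5)"),
    )
    for separator, meaning in schemes:
        if separator in designation:
            temp_part, abundance = designation.split(separator, 1)
            return f"{temp_part}: temperature subtype; {abundance}: {meaning}"
    return f"{designation}: temperature subtype only"

print(interpret_s_type("S2,5"))
print(interpret_s_type("S2/5"))
print(interpret_s_type("S2*5"))
```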
Classes MS and SC: Intermediate carbon-related classes
In between the M and S classes, border cases are named MS stars. In a similar way, border cases between the S and C-N classes are named SC or CS. The sequence M → MS → S → SC → C-N is hypothesized to be a sequence of increased carbon abundance with age for carbon stars in the asymptotic giant branch.
White dwarf classifications
The class D (for Degenerate) is the modern classification used for white dwarfs—low-mass stars that are no longer undergoing nuclear fusion and have shrunk to planetary size, slowly cooling down. Class D is further divided into spectral types DA, DB, DC, DO, DQ, DX, and DZ. The letters are not related to the letters used in the classification of other stars, but instead indicate the composition of the white dwarf's visible outer layer or atmosphere.
The white dwarf types are as follows:
DA – a hydrogen-rich atmosphere or outer layer, indicated by strong Balmer hydrogen spectral lines.
DB – a helium-rich atmosphere, indicated by neutral helium, He I, spectral lines.
DO – a helium-rich atmosphere, indicated by ionized helium, He II, spectral lines.
DQ – a carbon-rich atmosphere, indicated by atomic or molecular carbon lines.
DZ – a metal-rich atmosphere, indicated by metal spectral lines (a merger of the obsolete white dwarf spectral types, DG, DK, and DM).
DC – no strong spectral lines indicating one of the above categories.
DX – spectral lines are insufficiently clear to classify into one of the above categories.
The type is followed by a number giving the white dwarf's surface temperature. This number is a rounded form of 50400/Teff, where Teff is the effective surface temperature, measured in kelvins. Originally this number was rounded to one of the digits 1 through 9, but more recently fractional values have started to be used, as well as values below 1 and above 9 (for example, DA1.5 for IK Pegasi B).
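Since the index is simply a rounded form of 50400/Teff, it can be computed, or inverted to recover an approximate temperature, directly from that formula, as in the minimal Python sketch below. The numbers in the comments are worked examples of the stated formula, not catalogue values.

```python
def white_dwarf_index(t_eff_kelvin: float) -> float:
    """Temperature index appended to a white dwarf class: 50400 / Teff."""
    return 50400.0 / t_eff_kelvin

def t_eff_from_index(index: float) -> float:
    """Approximate effective temperature implied by a given index."""
    return 50400.0 / index

# A DA white dwarf with Teff = 25,200 K would be labelled DA2:
print(round(white_dwarf_index(25200), 1))   # -> 2.0
# Conversely, an index of 1.5 (as in DA1.5) implies Teff of about 33,600 K:
print(round(t_eff_from_index(1.5)))         # -> 33600
```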
Two or more of the type letters may be used to indicate a white dwarf that displays more than one of the spectral features above.
Extended white dwarf spectral types
DAB – a hydrogen- and helium-rich white dwarf displaying neutral helium lines
DAO – a hydrogen- and helium-rich white dwarf displaying ionized helium lines
DAZ – a hydrogen-rich metallic white dwarf
DBZ – a helium-rich metallic white dwarf
A different set of spectral peculiarity symbols is used for white dwarfs than for other types of stars.
Luminous Blue Variables
Luminous blue variables (LBVs) are rare, massive and evolved stars that show unpredictable and sometimes dramatic variations in their spectra and brightness. During their "quiescent" states, they are usually similar to B-type stars, although with unusual spectral lines. During outbursts, they are more similar to F-type stars, with significantly lower temperatures. Many papers treat LBV as its own spectral type.
Spectral types of non-single objects: Classes P and Q
Finally, the classes P and Q are left over from the system developed by Cannon for the Henry Draper Catalogue. They are occasionally used for certain objects, not associated with a single star: Type P objects are stars within planetary nebulae (typically young white dwarfs or hydrogen-poor M giants); type Q objects are novae.
Stellar remnants
Stellar remnants are objects associated with the death of stars. Included in the category are white dwarfs, and as can be seen from the radically different classification scheme for class D, stellar remnants are difficult to fit into the MK system.
The Hertzsprung–Russell diagram, which the MK system is based on, is observational in nature so these remnants cannot easily be plotted on the diagram, or cannot be placed at all. Old neutron stars are relatively small and cold, and would fall on the far right side of the diagram. Planetary nebulae are dynamic and tend to quickly fade in brightness as the progenitor star transitions to the white dwarf branch. If shown, a planetary nebula would be plotted to the right of the diagram's upper right quadrant. A black hole emits no visible light of its own, and therefore would not appear on the diagram.
A classification system for neutron stars using Roman numerals has been proposed: type I for less massive neutron stars with low cooling rates, type II for more massive neutron stars with higher cooling rates, and a proposed type III for more massive neutron stars (possible exotic star candidates) with higher cooling rates. The more massive a neutron star is, the higher the neutrino flux it carries. These neutrinos carry away so much heat energy that after only a few years the temperature of an isolated neutron star falls from the order of billions to only around a million kelvins. This proposed neutron star classification system is not to be confused with the earlier Secchi spectral classes and the Yerkes luminosity classes.
Replaced spectral classes
Several spectral types, all previously used for non-standard stars in the mid-20th century, have been replaced during revisions of the stellar classification system. They may still be found in old editions of star catalogs: R and N have been subsumed into the new C class as C-R and C-N.
Stellar classification, habitability, and the search for life
While humans may eventually be able to colonize any kind of stellar habitat, this section will address the probability of life arising around other stars.
Stability, luminosity, and lifespan are all factors in stellar habitability. Humans know of only one star that hosts life, the G-class Sun, a star with an abundance of heavy elements and low variability in brightness. The Solar System is also unlike many stellar systems in that it only contains one star (see Habitability of binary star systems).
Working from these constraints and the problems of having an empirical sample set of only one, the range of stars that are predicted to be able to support life is limited by a few factors. Of the main-sequence star types, stars with more than 1.5 times the mass of the Sun (spectral types O, B, and A) age too quickly for advanced life to develop (using Earth as a guideline). On the other extreme, dwarfs of less than half the mass of the Sun (spectral type M) are likely to tidally lock planets within their habitable zone, along with other problems (see Habitability of red dwarf systems). While there are many problems facing life on red dwarfs, many astronomers continue to model these systems due to their sheer numbers and longevity.
For these reasons NASA's Kepler Mission is searching for habitable planets at nearby main-sequence stars that are less massive than spectral type A but more massive than type M—making the most probable stars to host life dwarf stars of types F, G, and K.
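The claim above that stars much more massive than the Sun age too quickly can be illustrated with the common rule-of-thumb scaling of main-sequence lifetime with mass, t ≈ 10 Gyr × (M/M☉)^−2.5. The sketch below applies only this textbook approximation; the numbers are illustrative and are not part of the classification system itself.
def main_sequence_lifetime_gyr(mass_solar):
    """Rough main-sequence lifetime in gigayears, using the common
    approximation t ~ 10 Gyr * (M / Msun) ** -2.5."""
    return 10.0 * mass_solar ** -2.5

for mass in (0.5, 1.0, 1.5, 2.0):
    print(f"{mass} Msun: about {main_sequence_lifetime_gyr(mass):.1f} Gyr")
# 1.5 Msun gives roughly 3.6 Gyr, comparable to the time complex life took to appear on Earth,
# while 0.5 Msun gives tens of gigayears, longer than the current age of the universe.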
| Physical sciences | Stellar astronomy | null |
28930 | https://en.wikipedia.org/wiki/SN%201987A | SN 1987A | SN 1987A was a type II supernova in the Large Magellanic Cloud, a dwarf satellite galaxy of the Milky Way. It occurred approximately 168,000 light-years from Earth and was the closest observed supernova since Kepler's Supernova in 1604. Light and neutrinos from the explosion reached Earth on February 23, 1987, and the event was designated "SN 1987A" as the first supernova discovered that year. Its brightness peaked in May of that year, with an apparent magnitude of about 3.
It was the first supernova that modern astronomers were able to study in great detail, and its observations have provided much insight into core-collapse supernovae. SN 1987A provided the first opportunity to confirm by direct observation the radioactive source of the energy for visible light emissions, by detecting predicted gamma-ray line radiation from two of its abundant radioactive nuclei. This proved the radioactive nature of the long-duration post-explosion glow of supernovae.
In 2019, indirect evidence for the presence of a collapsed neutron star within the remnants of SN 1987A was discovered using the Atacama Large Millimeter Array telescope. Further evidence was subsequently uncovered in 2021 through observations conducted by the Chandra and NuSTAR X-ray telescopes.
Discovery
SN 1987A was discovered independently by Ian Shelton and Oscar Duhalde at the Las Campanas Observatory in Chile on February 24, 1987, and within the same 24 hours by Albert Jones in New Zealand.
Later investigations found photographs showing the supernova brightening rapidly early on February 23. On March 4–12, 1987, it was observed from space by Astron, the largest ultraviolet space telescope of that time.
Progenitor
Four days after the event was recorded, the progenitor star was tentatively identified as Sanduleak −69 202 (Sk -69 202), a blue supergiant.
After the supernova faded, that identification was definitively confirmed, as Sk −69 202 had disappeared. The possibility of a blue supergiant producing a supernova was considered surprising, and the confirmation led to further research which identified an earlier supernova with a blue supergiant progenitor.
Some models of SN 1987A's progenitor attributed the blue color largely to its chemical composition rather than its evolutionary stage, particularly the low levels of heavy elements. There was some speculation that the star might have merged with a companion star before the supernova. However, it is now widely understood that blue supergiants are natural progenitors of some supernovae, although there is still speculation that the evolution of such stars could require mass loss involving a binary companion.
Neutrino emissions
Approximately two to three hours before the visible light from SN 1987A reached Earth, a burst of neutrinos was observed at three neutrino observatories. This was likely due to neutrino emission which occurs simultaneously with core collapse, but before visible light is emitted as the shock wave reaches the stellar surface. At 7:35 UT, 12 antineutrinos were detected by Kamiokande II, 8 by IMB, and 5 by Baksan in a burst lasting less than 13 seconds. Approximately three hours earlier, the Mont Blanc liquid scintillator detected a five-neutrino burst, but this is generally not believed to be associated with SN 1987A.
The Kamiokande II detection, which at 12 neutrinos had the largest sample population, showed the neutrinos arriving in two distinct pulses. The first pulse at 07:35:35 comprised 9 neutrinos over a period of 1.915 seconds. A second pulse of three neutrinos arrived during a 3.220-second interval from 9.219 to 12.439 seconds after the beginning of the first pulse.
Although only 25 neutrinos were detected during the event, it was a significant increase from the previously observed background level. This was the first time neutrinos known to be emitted from a supernova had been observed directly, which marked the beginning of neutrino astronomy. The observations were consistent with theoretical supernova models in which 99% of the energy of the collapse is radiated away in the form of neutrinos. The observations are also consistent with the models' estimates of a total neutrino count of 10⁵⁸ with a total energy of 10⁴⁶ joules, i.e. a mean value of some dozens of MeV per neutrino. Billions of neutrinos passed through a square centimeter on Earth.
The neutrino measurements allowed upper bounds on neutrino mass and charge, as well as the number of flavors of neutrinos and other properties. For example, the data show that the rest mass of the electron neutrino is < 16 eV/c² at 95% confidence, which is 30,000 times smaller than the mass of an electron. The data suggest that the total number of neutrino flavors is at most 8, but other observations and experiments give tighter estimates. Many of these results have since been confirmed or tightened by other neutrino experiments such as more careful analysis of solar neutrinos and atmospheric neutrinos as well as experiments with artificial neutrino sources.
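The way an arrival-time spread limits the neutrino rest mass can be sketched with simple kinematics: a neutrino of mass m and energy E lags a light signal by roughly Δt ≈ (D/2c)(mc²/E)², so requiring the lag to stay within the observed burst duration bounds mc². The distance of 168,000 light-years, representative energy of 10 MeV and 10-second spread used below are illustrative assumptions rather than the published analysis.
import math

C = 2.998e8            # speed of light, in m/s
LIGHT_YEAR = 9.461e15  # metres per light-year

def neutrino_mass_bound_ev(distance_ly, energy_mev, spread_s):
    """Upper bound on the neutrino rest mass (in eV/c^2) such that the
    time-of-flight delay over distance_ly stays below spread_s."""
    d = distance_ly * LIGHT_YEAR
    # delay = (d / c) * (m c^2)^2 / (2 E^2)  =>  m c^2 = E * sqrt(2 c spread / d)
    mc2_mev = energy_mev * math.sqrt(2.0 * C * spread_s / d)
    return mc2_mev * 1.0e6   # convert MeV to eV

print(neutrino_mass_bound_ev(168000, 10.0, 10.0))   # about 20 eV, the same order as the published < 16 eV limit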
Neutron star
SN 1987A appears to be a core-collapse supernova, which should result in a neutron star given the size of the original star. The neutrino data indicate that a compact object did form at the star's core, and astronomers immediately began searching for the collapsed core. The Hubble Space Telescope took images of the supernova regularly from August 1990 without a clear detection of a neutron star.
A number of possibilities for the "missing" neutron star were considered. First, that the neutron star may be obscured by surrounding dense dust clouds. Second, that a pulsar was formed, but with either an unusually large or small magnetic field. Third, that large amounts of material fell back on the neutron star, collapsing it further into a black hole. Neutron stars and black holes often give off light as material falls onto them. If there is a compact object in the supernova remnant, but no material to fall onto it, it would be too dim for detection. A fourth hypothesis is that the collapsed core became a quark star.
In 2019, evidence was presented for a neutron star inside one of the brightest dust clumps, close to the expected position of the supernova remnant. In 2021, further evidence was presented of hard X-ray emissions from SN 1987A originating in the pulsar wind nebula. The latter result is supported by a three-dimensional magnetohydrodynamic model, which describes the evolution of SN 1987A from the SN event to the present, and reconstructs the ambient environment, predicting the absorbing power of the dense stellar material around the pulsar.
In 2024, researchers using the James Webb Space Telescope (JWST) identified distinctive emission lines of ionized argon within the central region of the Supernova 1987A remnants. These emission lines, discernible only near the remnant's core, were analyzed using photoionization models. The models indicate that the observed line ratios and velocities can be attributed to ionizing radiation originating from a neutron star illuminating gas from the inner regions of the exploded star.
Light curve
Much of the light curve, or graph of luminosity as a function of time, after the explosion of a type II supernova such as SN 1987A is produced by the energy from radioactive decay. Although the luminous emission consists of optical photons, it is the radioactive power absorbed that keeps the remnant hot enough to radiate light. Without the radioactive heat, it would dim quickly. The radioactive decay of 56Ni through its daughters 56Co to 56Fe produces gamma-ray photons that are absorbed and dominate the heating and thus the luminosity of the ejecta at intermediate times (several weeks) to late times (several months). Energy for the peak of the light curve of SN1987A was provided by the decay of 56Ni to 56Co (half life of 6 days) while energy for the later light curve in particular fit very closely with the 77.3-day half-life of 56Co decaying to 56Fe. Later measurements by space gamma-ray telescopes of the small fraction of the 56Co and 57Co gamma rays that escaped the SN1987A remnant without absorption confirmed earlier predictions that those two radioactive nuclei were the power source.
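The exponential tail that the 56Co half-life imposes on the light curve can be reproduced with a few lines. The sketch below assumes the late-time luminosity simply tracks the remaining 56Co, L(t) ∝ 2^(−t/77.3 d), and converts the decline to magnitudes; it neglects the growing escape of gamma rays, so it is only an approximation of the observed curve.
import math

def relative_luminosity(days, half_life_days=77.3):
    """Fraction of the initial 56Co decay power remaining after `days`."""
    return 2.0 ** (-days / half_life_days)

def magnitude_decline(days):
    """Decline in magnitudes relative to day 0, assuming the luminosity tracks 56Co."""
    return -2.5 * math.log10(relative_luminosity(days))

for day in (100, 200, 400):
    print(f"day {day}: {magnitude_decline(day):.2f} mag fainter")
# roughly 0.97 magnitudes of decline per 100 days, the slope that matches the 77.3-day half-life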
Because the 56Co in SN 1987A has now completely decayed, it no longer supports the luminosity of the SN 1987A ejecta. The ejecta are currently powered by the radioactive decay of 44Ti, with a half-life of about 60 years. With this change, X-rays produced by the ring interactions of the ejecta began to contribute significantly to the total light curve. This was noticed by the Hubble Space Telescope as a steady increase in luminosity 10,000 days after the event in the blue and red spectral bands. X-ray lines of 44Ti observed by the INTEGRAL space X-ray telescope allowed the total mass of radioactive 44Ti synthesized during the explosion to be measured.
Observations of the radioactive power from their decays in the 1987A light curve have measured accurate total masses of the 56Ni, 57Ni, and 44Ti created in the explosion, which agree with the masses measured by gamma-ray line space telescopes and provide nucleosynthesis constraints on the computed supernova model.
Interaction with circumstellar material
The three bright rings around SN 1987A that were visible after a few months in images by the Hubble Space Telescope are material from the stellar wind of the progenitor. These rings were ionized by the ultraviolet flash from the supernova explosion, and consequently began emitting in various emission lines. These rings did not "turn on" until several months after the supernova, and the process can be very accurately studied through spectroscopy. The rings are large enough that their angular size can be measured accurately: the inner ring is 0.808 arcseconds in radius. The time it took light to travel to and illuminate the inner ring gives its radius of 0.66 light-years (ly). Using this as the base of a right triangle and the angular size as seen from the Earth for the local angle, one can use basic trigonometry to calculate the distance to SN 1987A, which is about 168,000 light-years. The material from the explosion is catching up with the material expelled during both its red and blue supergiant phases and heating it, so we observe ring structures around the star.
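The distance estimate described here amounts to dividing the ring's physical radius by its angular radius expressed in radians. A minimal sketch of that arithmetic, using the 0.66 light-year radius and 0.808-arcsecond angular radius quoted above, is shown below.
import math

ARCSEC_PER_RADIAN = 180.0 * 3600.0 / math.pi   # about 206,265

def distance_light_years(ring_radius_ly, angular_radius_arcsec):
    """Distance from the ring's physical and angular radii (small-angle geometry)."""
    angle_rad = angular_radius_arcsec / ARCSEC_PER_RADIAN
    return ring_radius_ly / math.tan(angle_rad)   # tan(x) is essentially x for so small an angle

print(distance_light_years(0.66, 0.808))   # about 168,000 light-years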
Around 2001, the expanding (>7,000 km/s) supernova ejecta collided with the inner ring. This caused its heating and the generation of X-rays—the X-ray flux from the ring increased by a factor of three between 2001 and 2009. A part of the X-ray radiation, which is absorbed by the dense ejecta close to the center, is responsible for a comparable increase in the optical flux from the supernova remnant in 2001–2009. This increase of the brightness of the remnant reversed the trend observed before 2001, when the optical flux was decreasing due to the decay of the 44Ti isotope.
A study reported in June 2015, using images from the Hubble Space Telescope and the Very Large Telescope taken between 1994 and 2014, shows that the emissions from the clumps of matter making up the rings are fading as the clumps are destroyed by the shock wave. It was predicted that the ring would fade away between 2020 and 2030. These findings are also supported by the results of a three-dimensional hydrodynamic model, which describes the interaction of the blast wave with the circumstellar nebula.
The model also shows that X-ray emission from ejecta heated up by the shock will be dominant very soon, after which the ring would fade away. As the shock wave passes the circumstellar ring it will trace the history of mass loss of the supernova's progenitor and provide useful information for discriminating among various models for the progenitor of SN 1987A.
In 2018, radio observations of the interaction between the circumstellar ring of dust and the shockwave confirmed that the shockwave has now left the circumstellar material. They also show that the speed of the shockwave, which slowed down to 2,300 km/s while interacting with the dust in the ring, has now re-accelerated to 3,600 km/s.
Condensation of warm dust in the ejecta
Soon after the SN 1987A outburst, three major groups embarked on photometric monitoring of the supernova: the South African Astronomical Observatory (SAAO), the Cerro Tololo Inter-American Observatory (CTIO), and the European Southern Observatory (ESO). In particular, the ESO team reported an infrared excess which became apparent beginning less than one month after the explosion (March 11, 1987). Three possible interpretations for it were discussed in this work: the infrared echo hypothesis was discarded, and thermal emission from dust that could have condensed in the ejecta was favoured (in which case the estimated temperature at that epoch was ~1250 K, and an estimate of the dust mass was also derived). The possibility that the IR excess could be produced by optically thick free-free emission seemed unlikely because the luminosity in UV photons needed to keep the envelope ionized was much larger than what was available, but it was not ruled out in view of the possibility of electron scattering, which had not been considered.
However, none of these three groups had sufficiently convincing proof to claim a dusty ejecta on the basis of an IR excess alone.
An independent Australian team advanced several arguments in favour of an echo interpretation. This seemingly straightforward interpretation of the nature of the IR emission was challenged by the ESO group and definitively ruled out after they presented optical evidence for the presence of dust in the SN ejecta.
To discriminate between the two interpretations, they considered the implications of the presence of an echoing dust cloud for the optical light curve, and for the existence of diffuse optical emission around the SN. They concluded that the expected optical echo from the cloud should be resolvable, and could be very bright with an integrated visual brightness of magnitude 10.3 around day 650. However, further optical observations, as expressed in the SN light curve, showed no inflection in the light curve at the predicted level. Finally, the ESO team presented a convincing clumpy model for dust condensation in the ejecta.
Although it had been thought more than 50 years ago that dust could form in the ejecta of a core-collapse supernova, which in particular could explain the origin of the dust seen in young galaxies, that was the first time that such a condensation was observed. If SN 1987A is a typical representative of its class then the derived mass of the warm dust formed in the debris of core collapse supernovae is not sufficient to account for all the dust observed in the early universe. However, a much larger reservoir of ~0.25 solar mass of colder dust (at ~26 K) in the ejecta of SN 1987A was found with the infrared Herschel Space Telescope in 2011 and confirmed with the Atacama Large Millimeter Array (ALMA) in 2014.
ALMA observations
Following the confirmation of a large amount of cold dust in the ejecta, ALMA has continued observing SN 1987A. Synchrotron radiation due to shock interaction in the equatorial ring has been measured. Cold (20–100 K) carbon monoxide (CO) and silicon monoxide (SiO) molecules were observed. The data show that CO and SiO distributions are clumpy, and that different nucleosynthesis products (C, O and Si) are located in different places of the ejecta, indicating the footprints of the stellar interior at the time of the explosion.
| Physical sciences | Notable transient events | Astronomy |
28935 | https://en.wikipedia.org/wiki/Seismology | Seismology | Seismology (; from Ancient Greek σεισμός (seismós) meaning "earthquake" and -λογία (-logía) meaning "study of") is the scientific study of earthquakes (or generally, quakes) and the generation and propagation of elastic waves through planetary bodies. It also includes studies of the environmental effects of earthquakes such as tsunamis; other seismic sources such as volcanoes, plate tectonics, glaciers, rivers, oceanic microseisms, and the atmosphere; and artificial processes such as explosions.
Paleoseismology is a related field that uses geology to infer information regarding past earthquakes. A recording of Earth's motion as a function of time, created by a seismograph, is called a seismogram. A seismologist is a scientist who works in basic or applied seismology.
History
Scholarly interest in earthquakes can be traced back to antiquity. Early speculations on the natural causes of earthquakes were included in the writings of Thales of Miletus, Anaximenes of Miletus, Aristotle, and Zhang Heng (132 CE).
In 132 CE, Zhang Heng of China's Han dynasty designed the first known seismoscope.
In the 17th century, Athanasius Kircher argued that earthquakes were caused by the movement of fire within a system of channels inside the Earth. Martin Lister (1638–1712) and Nicolas Lemery (1645–1715) proposed that earthquakes were caused by chemical explosions within the Earth.
The Lisbon earthquake of 1755, coinciding with the general flowering of science in Europe, set in motion intensified scientific attempts to understand the behaviour and causation of earthquakes. The earliest responses include work by John Bevis (1757) and John Michell (1761). Michell determined that earthquakes originate within the Earth and were waves of movement caused by "shifting masses of rock miles below the surface".
In response to a series of earthquakes near Comrie in Scotland in 1839, a committee was formed in the United Kingdom in order to produce better detection methods for earthquakes. The outcome of this was the production of one of the first modern seismometers by James David Forbes, first presented in a report by David Milne-Home in 1842. This seismometer was an inverted pendulum, which recorded the measurements of seismic activity through the use of a pencil placed on paper above the pendulum. The designs provided did not prove effective, according to Milne's reports.
From 1857, Robert Mallet laid the foundation of modern instrumental seismology and carried out seismological experiments using explosives. He is also responsible for coining the word "seismology."
In 1889, Ernst von Rebeur-Paschwitz recorded the first teleseismic earthquake signal (an earthquake in Japan recorded at Potsdam, Germany).
In 1897, Emil Wiechert's theoretical calculations led him to conclude that the Earth's interior consists of a mantle of silicates, surrounding a core of iron.
In 1906 Richard Dixon Oldham identified the separate arrival of P waves, S waves and surface waves on seismograms and found the first clear evidence that the Earth has a central core.
In 1909, Andrija Mohorovičić, one of the founders of modern seismology, discovered and defined the Mohorovičić discontinuity. Usually referred to as the "Moho discontinuity" or the "Moho," it is the boundary between the Earth's crust and the mantle. It is defined by the distinct change in velocity of seismological waves as they pass through changing densities of rock.
In 1910, after studying the April 1906 San Francisco earthquake, Harry Fielding Reid put forward the "elastic rebound theory" which remains the foundation for modern tectonic studies. The development of this theory depended on the considerable progress of earlier independent streams of work on the behavior of elastic materials and in mathematics.
An early scientific study of aftershocks from a destructive earthquake came after the January 1920 Xalapa earthquake. A Wiechert seismograph was brought to the Mexican city of Xalapa by rail after the earthquake. The instrument was deployed to record its aftershocks. Data from the seismograph would eventually determine that the mainshock was produced along a shallow crustal fault.
In 1926, Harold Jeffreys was the first to claim, based on his study of earthquake waves, that below the mantle, the core of the Earth is liquid.
In 1937, Inge Lehmann determined that within Earth's liquid outer core there is a solid inner core.
In 1950, Michael S. Longuet-Higgins elucidated the ocean processes responsible for the global background seismic microseism.
By the 1960s, Earth science had developed to the point where a comprehensive theory of the causation of seismic events and geodetic motions had come together in the now well-established theory of plate tectonics.
Types of seismic wave
Seismic waves are elastic waves that propagate in solid or fluid materials. They can be divided into body waves that travel through the interior of the materials; surface waves that travel along surfaces or interfaces between materials; and normal modes, a form of standing wave.
Body waves
There are two types of body waves, pressure waves or primary waves (P waves) and shear or secondary waves (S waves). P waves are longitudinal waves that involve compression and expansion in the direction that the wave is moving and are always the first waves to appear on a seismogram as they are the fastest moving waves through solids. S waves are transverse waves that move perpendicular to the direction of propagation. S waves are slower than P waves. Therefore, they appear later than P waves on a seismogram. Fluids cannot support transverse elastic waves because of their low shear strength, so S waves only travel in solids.
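Because P waves always arrive first, the delay between the P and S arrivals at a single station gives a rough estimate of the distance to the source. The sketch below assumes representative crustal velocities of about 6 km/s for P waves and 3.5 km/s for S waves; these values vary with geology and are used here only for illustration.
def epicentral_distance_km(s_minus_p_seconds, vp_km_s=6.0, vs_km_s=3.5):
    """Distance estimate from the S-P arrival-time difference, assuming
    straight-line propagation at constant (illustrative) crustal velocities."""
    return s_minus_p_seconds / (1.0 / vs_km_s - 1.0 / vp_km_s)

# With these velocities, a 10-second S-P lag corresponds to roughly 84 km.
print(epicentral_distance_km(10.0))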
Surface waves
Surface waves are the result of P and S waves interacting with the surface of the Earth. These waves are dispersive, meaning that different frequencies have different velocities. The two main surface wave types are Rayleigh waves, which have both compressional and shear motions, and Love waves, which are purely shear. Rayleigh waves result from the interaction of P waves and vertically polarized S waves with the surface and can exist in any solid medium. Love waves are formed by horizontally polarized S waves interacting with the surface, and can only exist if there is a change in the elastic properties with depth in a solid medium, which is always the case in seismological applications. Surface waves travel more slowly than P waves and S waves because they are the result of these waves traveling along indirect paths to interact with Earth's surface. Because they travel along the surface of the Earth, their energy decays less rapidly than body waves (1/distance² vs. 1/distance³), and thus the shaking caused by surface waves is generally stronger than that of body waves, and the primary surface waves are thus often the largest signals on earthquake seismograms. Surface waves are strongly excited when their source is close to the surface, as in a shallow earthquake or a near-surface explosion, and are much weaker for deep earthquake sources.
Normal modes
Both body and surface waves are traveling waves; however, large earthquakes can also make the entire Earth "ring" like a resonant bell. This ringing is a mixture of normal modes with discrete frequencies and periods of approximately an hour or shorter. Normal mode motion caused by a very large earthquake can be observed for up to a month after the event. The first observations of normal modes were made in the 1960s as the advent of higher-fidelity instruments coincided with two of the largest earthquakes of the 20th century: the 1960 Valdivia earthquake and the 1964 Alaska earthquake. Since then, the normal modes of the Earth have given us some of the strongest constraints on the deep structure of the Earth.
Earthquakes
One of the first attempts at the scientific study of earthquakes followed the 1755 Lisbon earthquake. Other notable earthquakes that spurred major advancements in the science of seismology include the 1857 Basilicata earthquake, the 1906 San Francisco earthquake, the 1964 Alaska earthquake, the 2004 Sumatra-Andaman earthquake, and the 2011 Great East Japan earthquake.
Controlled seismic sources
Seismic waves produced by explosions or vibrating controlled sources are one of the primary methods of underground exploration in geophysics (in addition to many different electromagnetic methods such as induced polarization and magnetotellurics). Controlled-source seismology has been used to map salt domes, anticlines and other geologic traps in petroleum-bearing rocks, faults, rock types, and long-buried giant meteor craters. For example, the Chicxulub Crater, which was caused by an impact that has been implicated in the extinction of the dinosaurs, was localized to Central America by analyzing ejecta in the Cretaceous–Paleogene boundary, and then physically proven to exist using seismic maps from oil exploration.
Detection of seismic waves
Seismometers are sensors that detect and record the motion of the Earth arising from elastic waves. Seismometers may be deployed at the Earth's surface, in shallow vaults, in boreholes, or underwater. A complete instrument package that records seismic signals is called a seismograph. Networks of seismographs continuously record ground motions around the world to facilitate the monitoring and analysis of global earthquakes and other sources of seismic activity. Rapid location of earthquakes makes tsunami warnings possible because seismic waves travel considerably faster than tsunami waves. Seismometers also record signals from non-earthquake sources ranging from explosions (nuclear and chemical), to local noise from wind or anthropogenic activities, to incessant signals generated at the ocean floor and coasts induced by ocean waves (the global microseism), to cryospheric events associated with large icebergs and glaciers. Above-ocean meteor strikes with energies as high as 4.2 × 10¹³ J (equivalent to that released by an explosion of ten kilotons of TNT) have been recorded by seismographs, as have a number of industrial accidents and terrorist bombs and events (a field of study referred to as forensic seismology). A major long-term motivation for the global seismographic monitoring has been for the detection and study of nuclear testing.
Mapping Earth's interior
Because seismic waves commonly propagate efficiently as they interact with the internal structure of the Earth, they provide high-resolution noninvasive methods for studying the planet's interior. One of the earliest important discoveries (suggested by Richard Dixon Oldham in 1906 and definitively shown by Harold Jeffreys in 1926) was that the outer core of the earth is liquid. Since S waves do not pass through liquids, the liquid core causes a "shadow" on the side of the planet opposite the earthquake where no direct S waves are observed. In addition, P waves travel much slower through the outer core than the mantle.
Processing readings from many seismometers using seismic tomography, seismologists have mapped the mantle of the earth to a resolution of several hundred kilometers. This has enabled scientists to identify convection cells and other large-scale features such as the large low-shear-velocity provinces near the core–mantle boundary.
Seismology and society
Earthquake prediction
Forecasting a probable timing, location, magnitude and other important features of a forthcoming seismic event is called earthquake prediction. Various attempts have been made by seismologists and others to create effective systems for precise earthquake predictions, including the VAN method. Most seismologists do not believe that a system to provide timely warnings for individual earthquakes has yet been developed, and many believe that such a system would be unlikely to give useful warning of impending seismic events. However, more general forecasts routinely predict seismic hazard. Such forecasts estimate the probability of an earthquake of a particular size affecting a particular location within a particular time-span, and they are routinely used in earthquake engineering.
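A simple way such probabilities are often expressed is with a Poisson model: if events of a given size occur at a long-term average rate λ per year, the chance of at least one in T years is 1 − exp(−λT). The sketch below is a generic illustration of that formula, not a hazard model for any real site.
import math

def exceedance_probability(annual_rate, years):
    """Poisson probability of at least one event in `years`,
    given a long-term average of `annual_rate` events per year."""
    return 1.0 - math.exp(-annual_rate * years)

# An event with a 475-year average recurrence interval has roughly a
# 10% chance of occurring in any 50-year window (a common design criterion).
print(exceedance_probability(1.0 / 475.0, 50))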
Public controversy over earthquake prediction erupted after Italian authorities indicted six seismologists and one government official for manslaughter in connection with a magnitude 6.3 earthquake in L'Aquila, Italy on April 6, 2009. A report in Nature stated that the indictment was widely seen in Italy and abroad as being for failing to predict the earthquake and drew condemnation from the American Association for the Advancement of Science and the American Geophysical Union. However, the magazine also indicated that the population of L'Aquila did not consider the failure to predict the earthquake to be the reason for the indictment, but rather the alleged failure of the scientists to evaluate and communicate risk. The indictment claims that, at a special meeting in L'Aquila the week before the earthquake occurred, scientists and officials were more interested in pacifying the population than providing adequate information about earthquake risk and preparedness.
In locations where a historical record exists, it may be used to estimate the timing, location and magnitude of future seismic events. There are several interpretative factors to consider. The epicentres or foci and magnitudes of historical earthquakes are subject to interpretation, meaning it is possible that 5–6 earthquakes described in the historical record could be larger events occurring elsewhere that were felt moderately in the populated areas that produced written records. Documentation in the historic period may be sparse or incomplete, and not give a full picture of the geographic scope of an earthquake, or the historical record may only have earthquake records spanning a few centuries, a very short time frame in a seismic cycle.
Engineering seismology
Engineering seismology is the study and application of seismology for engineering purposes. The term is generally applied to the branch of seismology that deals with the assessment of the seismic hazard of a site or region for the purposes of earthquake engineering. It is, therefore, a link between earth science and civil engineering. There are two principal components of engineering seismology. Firstly, studying earthquake history (e.g. historical and instrumental catalogs of seismicity) and tectonics to assess the earthquakes that could occur in a region and their characteristics and frequency of occurrence. Secondly, studying strong ground motions generated by earthquakes to assess the expected shaking from future earthquakes with similar characteristics. These strong ground motions could either be observations from accelerometers or seismometers or those simulated by computers using various techniques, which are then often used to develop ground motion prediction equations (or ground-motion models).
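Ground-motion prediction equations typically express the logarithm of a shaking measure as a function of magnitude and distance. The sketch below shows only the generic functional form ln(PGA) = a + b·M − c·ln(R + d); the coefficients used here are arbitrary placeholders for illustration and do not correspond to any published model.
import math

def predicted_pga_g(magnitude, distance_km, a=-4.0, b=1.0, c=1.5, d=10.0):
    """Generic ground-motion prediction form ln(PGA) = a + b*M - c*ln(R + d).
    The default coefficients are illustrative placeholders, not a fitted model."""
    ln_pga = a + b * magnitude - c * math.log(distance_km + d)
    return math.exp(ln_pga)

# Peak ground acceleration (in g) for a magnitude 6.5 event at 20 km, under these placeholder coefficients.
print(predicted_pga_g(6.5, 20.0))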
Tools
Seismological instruments can generate large amounts of data. Systems for processing such data include:
CUSP (Caltech-USGS Seismic Processing)
RadExPro seismic software
SeisComP3
Notable seismologists
Aki, Keiiti
Ambraseys, Nicholas
Anderson, Don L.
Bolt, Bruce
Beroza, Gregory
Claerbout, Jon
Dziewonski, Adam Marian
Ewing, Maurice
Galitzine, Boris Borisovich
Gamburtsev, Grigory A.
Gutenberg, Beno
Hough, Susan
Jeffreys, Harold
Jones, Lucy
Kanamori, Hiroo
Keilis-Borok, Vladimir
Knopoff, Leon
Lehmann, Inge
Macelwane, James
Mallet, Robert
Mercalli, Giuseppe
Milne, John
Mohorovičić, Andrija
Oldham, Richard Dixon
Omori, Fusakichi
Sebastião de Melo, Marquis of Pombal
Press, Frank
Rautian, Tatyana G.
Richards, Paul G.
Richter, Charles Francis
Sekiya, Seikei
Sieh, Kerry
Silver, Paul G.
Stein, Ross
Tucker, Brian
Vidale, John
Wen, Lianxing
Winthrop, John
Zhang Heng
| Physical sciences | Geophysics | null |
28938 | https://en.wikipedia.org/wiki/Shell%20script | Shell script | A shell script is a computer program designed to be run by a Unix shell, a command-line interpreter. The various dialects of shell scripts are considered to be command languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text. A script which sets up the environment, runs the program, and does any necessary cleanup or logging, is called a wrapper.
The term is also used more generally to mean the automated mode of running an operating system shell; each operating system uses a particular name for these functions, including batch files (MS-DOS/Windows 95 stream, OS/2), command procedures (VMS), and shell scripts (Windows NT stream and third-party derivatives like 4NT, covered in the cmd.exe article), and mainframe operating systems are associated with a number of terms.
Shells commonly present in Unix and Unix-like systems include the Korn shell, the Bourne shell, and GNU Bash. While a Unix operating system may have a different default shell, such as Zsh on macOS, these shells are typically present for backwards compatibility.
Capabilities
Comments
Comments are ignored by the shell. They typically begin with the hash symbol (#), and continue until the end of the line.
Configurable choice of scripting language
The shebang, or hash-bang, is a special kind of comment which the system uses to determine what interpreter to use to execute the file. The shebang must be the first line of the file, and start with "#!". In Unix-like operating systems, the characters following the "#!" prefix are interpreted as a path to an executable program that will interpret the script.
Shortcuts
A shell script can provide a convenient variation of a system command where special environment settings, command options, or post-processing apply automatically, but in a way that allows the new script to still act as a fully normal Unix command.
One example would be to create a version of ls, the command to list files, giving it a shorter command name of l, which would normally be saved in a user's bin directory as /home/username/bin/l, with a default set of command options pre-supplied.
#!/bin/sh
LC_COLLATE=C ls -FCas "$@"
Here, the first line uses a shebang to indicate which interpreter should execute the rest of the script, and the second line makes a listing with options for file format indicators, columns, all files (none omitted), and a size in blocks. The LC_COLLATE=C sets the default collation order to not fold upper and lower case together, not intermix dotfiles with normal filenames as a side effect of ignoring punctuation in the names (dotfiles are usually only shown if an option like -a is used), and the "$@" causes any parameters given to l to pass through as parameters to ls, so that all of the normal options and other syntax known to ls can still be used.
The user could then simply use l for the most commonly used short listing.
Another example of a shell script that could be used as a shortcut would be to print a list of all the files and directories within a given directory.
#!/bin/sh
clear
ls -al
In this case, the shell script would start with its normal starting line of #!/bin/sh. Following this, the script executes the command clear which clears the terminal of all text before going to the next line. The following line provides the main function of the script. The ls -al command lists the files and directories that are in the directory from which the script is being run. The ls command attributes could be changed to reflect the needs of the user.
Batch jobs
Shell scripts allow several commands that would be entered manually at a command-line interface to be executed automatically, and without having to wait for a user to trigger each stage of the sequence. For example, in a directory with three C source code files, rather than manually running the four commands required to build the final program from them, one could instead create a script for POSIX-compliant shells, here named build and kept in the directory with them, which would compile them automatically:
#!/bin/sh
printf 'compiling...\n'
cc -c foo.c
cc -c bar.c
cc -c qux.c
cc -o myprog foo.o bar.o qux.o
printf 'done.\n'
The script would allow a user to save the file being edited, pause the editor, and then just run ./build to create the updated program, test it, and then return to the editor. Since the 1980s or so, however, scripts of this type have been replaced with utilities like make which are specialized for building programs.
Generalization
Simple batch jobs are not unusual for isolated tasks, but using shell loops, tests, and variables provides much more flexibility to users. A POSIX sh script to convert JPEG images to PNG images, where the image names are provided on the command-line—possibly via wildcards—instead of each being listed within the script, can be created with the following script, typically saved in a file like /home/username/bin/jpg2png:
#!/bin/sh
for jpg; do # use $jpg in place of each filename given, in turn
png=${jpg%.jpg}.png # construct the PNG version of the filename by replacing .jpg with .png
printf 'converting "%s" ...\n' "$jpg" # output status info to the user running the script
if convert "$jpg" jpg.to.png; then # use convert (provided by ImageMagick) to create the PNG in a temp file
mv jpg.to.png "$png" # if it worked, rename the temporary PNG image to the correct name
else # ...otherwise complain and exit from the script
printf >&2 'jpg2png: error: failed output saved in "jpg.to.png".\n'
exit 1
fi # the end of the "if" test construct
done # the end of the "for" loop
printf 'all conversions successful\n' # tell the user the good news
The jpg2png command can then be run on an entire directory full of JPEG images with just /home/username/bin/jpg2png *.jpg
Programming
Many modern shells also supply various features usually found only in more sophisticated general-purpose programming languages, such as control-flow constructs, variables, comments, arrays, subroutines and so on. With these sorts of features available, it is possible to write reasonably sophisticated applications as shell scripts. However, they are still limited by the fact that most shell languages have little or no support for data typing systems, classes, threading, complex math, and other common full language features, and are also generally much slower than compiled code or interpreted languages written with speed as a performance goal.
The standard Unix tools sed and awk provide extra capabilities for shell programming; Perl can also be embedded in shell scripts as can other scripting languages like Tcl. Perl and Tcl come with graphics toolkits as well.
Typical POSIX scripting languages
Scripting languages commonly found on UNIX, Linux, and POSIX-compliant operating system installations include:
KornShell (ksh) in several possible versions such as ksh88, Korn Shell '93 and others.
The Bourne shell (sh), one of the oldest shells still common in use
The C shell (csh)
GNU Bash (bash)
tclsh, a shell which is a main component of the Tcl/Tk programming language.
wish, a GUI-based Tcl/Tk shell.
The C and Tcl shells have syntax quite similar to that of said programming languages, and the Korn shells and Bash are developments of the Bourne shell, which is based on the ALGOL language with elements of a number of others added as well. On the other hand, the various shells plus tools like awk, sed, grep, and BASIC, Lisp, C and so forth contributed to the Perl programming language.
Other shells that may be available on a machine or for download and/or purchase include:
Almquist shell (ash)
Nushell (nu)
PowerShell (msh)
Z shell (zsh, a particularly common enhanced KornShell)
The Tenex C Shell (tcsh).
Related programs such as shells based on Python, Ruby, C, Java, Perl, Pascal, Rexx etc. in various forms are also widely available. Another somewhat common shell is Old shell (osh), whose manual page states it "is an enhanced, backward-compatible port of the standard command interpreter from Sixth Edition UNIX."
So called remote shells such as
a Remote Shell (rsh)
a Secure Shell (ssh)
are really just tools to run a more complex shell on a remote system and have no 'shell' like characteristics themselves.
Other scripting languages
Many powerful scripting languages have been introduced for tasks that are too large or complex to be comfortably handled with ordinary shell scripts, but for which the advantages of a script are desirable and the development overhead of a full-blown, compiled programming language would be disadvantageous. The specifics of what separates scripting languages from high-level programming languages is a frequent source of debate, but, generally speaking, a scripting language is one which requires an interpreter.
Life cycle
Shell scripts often serve as an initial stage in software development, and are often subject to conversion later to a different underlying implementation, most commonly being converted to Perl, Python, or C. The interpreter directive allows the implementation detail to be fully hidden inside the script, rather than being exposed as a filename extension, and provides for seamless reimplementation in different languages with no impact on end users.
While files with the ".sh" file extension are usually a shell script of some kind, most shell scripts do not have any filename extension.
Advantages and disadvantages
Perhaps the biggest advantage of writing a shell script is that the commands and syntax are exactly the same as those directly entered at the command-line. The programmer does not have to switch to a totally different syntax, as they would if the script were written in a different language, or if a compiled language were used.
Often, writing a shell script is much quicker than writing the equivalent code in other programming languages. The many advantages include easy program or file selection, quick start, and interactive debugging. A shell script can be used to provide a sequencing and decision-making linkage around existing programs, and for moderately sized scripts the absence of a compilation step is an advantage. Interpretive running makes it easy to write debugging code into a script and re-run it to detect and fix bugs. Non-expert users can use scripting to tailor the behavior of programs, and shell scripting provides some limited scope for multiprocessing.
On the other hand, shell scripting is prone to costly errors. Inadvertent typing errors such as rm -rf * / (instead of the intended rm -rf */) are folklore in the Unix community; a single extra space converts the command from one that deletes all subdirectories contained in the current directory, to one which deletes everything from the file system's root directory. Similar problems can transform cp and mv into dangerous weapons, and misuse of the > redirect can delete the contents of a file. This is made more problematic by the fact that many UNIX commands differ in name by only one letter: cp, cd, dd, df, etc.
Another significant disadvantage is the slow execution speed and the need to launch a new process for almost every shell command executed. When a script's job can be accomplished by setting up a pipeline in which efficient filter commands perform most of the work, the slowdown is mitigated, but a complex script is typically several orders of magnitude slower than a conventional compiled program that performs an equivalent task.
There are also compatibility problems between different platforms. Larry Wall, creator of Perl, famously wrote that "It's easier to port a shell than a shell script."
Similarly, more complex scripts can run into the limitations of the shell scripting language itself; the limits make it difficult to write quality code, and extensions by various shells to ameliorate problems with the original shell language can make problems worse.
Many disadvantages of using some script languages are caused by design flaws within the language syntax or implementation, and are not necessarily imposed by the use of a text-based command-line; there are a number of shells which use other shell programming languages or even full-fledged languages like Scsh (which uses Scheme).
Interoperability among scripting languages
Different scripting languages may share many common elements, largely due to being POSIX based, and some shells offer modes to emulate different shells. This allows a shell script written in one scripting language to be adapted into another.
One example of this is Bash, which offers the same grammar and syntax as the Bourne shell, and which also provides a POSIX-compliant mode. As such, most shell scripts written for the Bourne shell can be run in Bash, but the reverse may not be true, since Bash has extensions which are not present in the Bourne shell; these features are known as bashisms.
Shell scripting on other operating systems
Interoperability software such as Cygwin, the MKS Toolkit, Interix (which is available in the Microsoft Windows Services for UNIX), Hamilton C shell, UWIN (AT&T Unix for Windows) and others allow Unix shell programs to be run on machines running Windows NT and its successors, with some loss of functionality on the MS-DOS-Windows 95 branch, as well as earlier MKS Toolkit versions for OS/2. At least three DCL implementations for Windows type operating systems—in addition to XLNT, a multiple-use scripting language package which is used with the command shell, Windows Script Host and CGI programming—are available for these systems as well. Mac OS X and subsequent are Unix-like as well.
In addition to the aforementioned tools, some POSIX and OS/2 functionality can be used with the corresponding environmental subsystems of the Windows NT operating system series up to Windows 2000 as well. A third, 16-bit subsystem often called the MS-DOS subsystem uses the Command.com provided with these operating systems to run the aforementioned MS-DOS batch files.
The console alternatives 4DOS, 4OS2, FreeDOS, Peter Norton's NDOS and 4NT / Take Command which add functionality to the Windows NT-style cmd.exe, MS-DOS/Windows 95 batch files (run by Command.com), OS/2's cmd.exe, and 4NT respectively are similar to the shells that they enhance and are more integrated with the Windows Script Host, which comes with three pre-installed engines, VBScript, JScript, and VBA and to which numerous third-party engines can be added, with Rexx, Perl, Python, Ruby, and Tcl having pre-defined functions in 4NT and related programs. PC DOS is quite similar to MS-DOS, whilst DR DOS is more different. Earlier versions of Windows NT are able to run contemporary versions of 4OS2 by the OS/2 subsystem.
Scripting languages are, by definition, able to be extended; for example, MS-DOS/Windows 95/98 and Windows NT type systems allow shell/batch programs to call tools like KiXtart, QBasic, various BASIC, Rexx, Perl, and Python implementations, the Windows Script Host and its installed engines. On Unix and other POSIX-compliant systems, awk and sed are used to extend the string and numeric processing ability of shell scripts. Tcl, Perl, Rexx, and Python have graphics toolkits and can be used to code functions and procedures for shell scripts which pose a speed bottleneck (C, Fortran, assembly language &c are much faster still) and to add functionality not available in the shell language such as sockets and other connectivity functions, heavy-duty text processing, working with numbers if the calling script does not have those abilities, self-writing and self-modifying code, techniques like recursion, direct memory access, various types of sorting and more, which are difficult or impossible in the main script, and so on. Visual Basic for Applications and VBScript can be used to control and communicate with such things as spreadsheets, databases, scriptable programs of all types, telecommunications software, development tools, graphics tools and other software which can be accessed through the Component Object Model.
| Technology | Scripting languages | null |
28942 | https://en.wikipedia.org/wiki/Solder | Solder | Solder (; NA: ) is a fusible metal alloy used to create a permanent bond between metal workpieces. Solder is melted in order to wet the parts of the joint, where it adheres to and connects the pieces after cooling. Metals or alloys suitable for use as solder should have a lower melting point than the pieces to be joined. The solder should also be resistant to oxidative and corrosive effects that would degrade the joint over time. Solder used in making electrical connections also needs to have favorable electrical characteristics.
Soft solder typically has a melting point range of , and is commonly used in electronics, plumbing, and sheet metal work. Alloys that melt between are the most commonly used. Soldering performed using alloys with a melting point above is called "hard soldering", "silver soldering", or brazing.
In specific proportions, some alloys are eutectic — that is, the alloy's melting point is the lowest possible for a mixture of those components, and coincides with the freezing point. Non-eutectic alloys can have markedly different solidus and liquidus temperatures, as they have distinct liquid and solid transitions. Non-eutectic mixtures often exist as a paste of solid particles in a melted matrix of the lower-melting phase as they approach high enough temperatures. In electrical work, if the joint is disturbed while in this "pasty" state before it fully solidifies, a poor electrical connection may result; use of eutectic solder reduces this problem. The pasty state of a non-eutectic solder can be exploited in plumbing, as it allows molding of the solder during cooling, e.g. for ensuring watertight joint of pipes, resulting in a so-called "wiped joint".
For electrical and electronics work, solder wire is available in a range of thicknesses for hand-soldering (manual soldering is performed using a soldering iron or soldering gun), and with cores containing flux. It is also available as a room temperature paste, as a preformed foil shaped to match the workpiece which may be more suited for mechanized mass-production, or in small "tabs" that can be wrapped around the joint and melted with a flame where an iron isn't usable or available, as for instance in field repairs. Alloys of lead and tin were commonly used in the past and are still available; they are particularly convenient for hand-soldering. Lead-free solders have been increasing in use due to regulatory requirements plus the health and environmental benefits of avoiding lead-based electronic components. They are almost exclusively used today in consumer electronics.
Plumbers often use bars of solder, much thicker than the wire used for electrical applications, and apply flux separately; many plumbing-suitable soldering fluxes are too corrosive (or conductive) to be used in electrical or electronic work. Jewelers often use solder in thin sheets, which they cut into snippets.
Etymology
The word solder comes from Middle English, via Old French, from the Latin solidare, meaning "to make solid".
Composition
Lead-based
Tin-lead (Sn-Pb) solders, also called soft solders, are commercially available with tin concentrations between 5% and 70% by weight. The greater the tin concentration, the greater the solder's tensile and shear strengths. Lead mitigates the formation of tin whiskers, though the precise mechanism for this is unknown. Today, many techniques are used to mitigate the problem, including changes to the annealing process (heating and cooling), addition of elements like copper and nickel, and the application of conformal coatings. Alloys commonly used for electrical soldering are 60/40 Sn-Pb, which melts at , and 63/37 Sn-Pb used principally in electrical/electronic work. The latter mixture is a eutectic alloy of these metals, which:
has the lowest melting point of all the tin-lead alloys; and
its melting point is truly a point — not a range.
In the United States, since 1974, lead is prohibited in solder and flux in plumbing applications for drinking water use, per the Safe Drinking Water Act. Historically, a higher proportion of lead was used, commonly 50/50. This had the advantage of making the alloy solidify more slowly. With the pipes being physically fitted together before soldering, the solder could be wiped over the joint to ensure water tightness. Although lead water pipes were displaced by copper when the significance of lead poisoning began to be fully appreciated, lead solder was still used until the 1980s because it was thought that the amount of lead that could leach into water from the solder was negligible from a properly soldered joint. The electrochemical couple of copper and lead promotes corrosion of the lead and tin. Tin, however, is protected by insoluble oxide. Since even small amounts of lead have been found detrimental to health as a potent neurotoxin, lead in plumbing solder was replaced by silver (food-grade applications) or antimony, with copper often added, and the proportion of tin was increased (see lead-free solder).
The addition of tin—more expensive than lead—improves wetting properties of the alloy; lead itself has poor wetting characteristics. High-tin tin-lead alloys have limited use as the workability range can be provided by a cheaper high-lead alloy.
Lead-tin solders readily dissolve gold plating and form brittle intermetallics.
60/40 Sn-Pb solder oxidizes on the surface, forming a complex 4-layer structure: tin(IV) oxide on the surface, below it a layer of tin(II) oxide with finely dispersed lead, followed by a layer of tin(II) oxide with finely dispersed tin and lead, and the solder alloy itself underneath.
Lead, and to some degree tin, as used in solder contains small but significant amounts of radioisotope impurities. Radioisotopes undergoing alpha decay are a concern due to their tendency to cause soft errors. Polonium-210 is especially troublesome; lead-210 beta decays to bismuth-210 which then beta decays to polonium-210, an intense emitter of alpha particles. Uranium-238 and thorium-232 are other significant contaminants of alloys of lead.
Lead-free
The European Union Waste Electrical and Electronic Equipment Directive and Restriction of Hazardous Substances Directive were adopted in early 2003 and came into effect on July 1, 2006, restricting the inclusion of lead in most consumer electronics sold in the EU, and having a broad effect on consumer electronics sold worldwide. In the US, manufacturers may receive tax benefits by reducing the use of lead-based solder. Lead-free solders in commercial use may contain tin, copper, silver, bismuth, indium, zinc, antimony, and traces of other metals. Most lead-free replacements for conventional 60/40 and 63/37 Sn-Pb solder have somewhat higher melting points, though there are also solders with much lower melting points. Lead-free solder typically requires around 2% flux by mass for adequate wetting ability.
When lead-free solder is used in wave soldering, a slightly modified solder pot may be desirable (e.g. titanium liners or impellers) to reduce maintenance cost due to increased tin-scavenging of high-tin solder.
Lead-free solder is prohibited in critical applications, such as aerospace, military and medical projects, because joints are likely to suffer from metal fatigue failure under stress (such as that from thermal expansion and contraction). Although this is a property that conventional leaded solder possesses as well (like any metal), the point at which stress fatigue will usually occur in leaded solder is substantially above the level of stresses normally encountered.
Tin-silver-copper (Sn-Ag-Cu, or SAC) solders are used by two-thirds of Japanese manufacturers for reflow and wave soldering, and by about 75% of companies for hand soldering. The widespread use of this popular lead-free solder alloy family is based on the reduced melting point of the Sn-Ag-Cu ternary eutectic behavior (217 °C), which is below the 22/78 Sn-Ag (wt.%) eutectic of 221 °C and the 99.3/0.7 Sn-Cu eutectic of 227 °C. The ternary eutectic behavior of Sn-Ag-Cu and its application for electronics assembly was discovered (and patented) by a team of researchers from Ames Laboratory, Iowa State University, and from Sandia National Laboratories-Albuquerque.
Much recent research has focused on the addition of a fourth element to Sn-Ag-Cu solder, in order to provide compatibility for the reduced cooling rate of solder sphere reflow for assembly of ball grid arrays. Examples of these four-element compositions are 18/64/14/4 tin-silver-copper-zinc (Sn-Ag-Cu-Zn) (melting range 217–220 °C) and 18/64/16/2 tin-silver-copper-manganese (Sn-Ag-Cu-Mn; melting range of 211–215 °C).
Tin-based solders readily dissolve gold, forming brittle intermetallic joins; for Sn-Pb alloys the critical concentration of gold to embrittle the joint is about 4%. Indium-rich solders (usually indium-lead) are more suitable for soldering thicker gold layers as the dissolution rate of gold in indium is much slower. Tin-rich solders also readily dissolve silver; for soldering silver metallization or surfaces, alloys with addition of silver are suitable; tin-free alloys are also a choice, though their wetting ability is poorer. If the soldering time is long enough to form the intermetallics, the tin surface of a joint soldered to gold is very dull.
Hard solder
Hard solders are used for brazing, and melt at higher temperatures. Alloys of copper with either zinc or silver are the most common.
In silversmithing or jewelry making, special hard solders are used that will pass assay. They contain a high proportion of the metal being soldered and lead is not used in these alloys. These solders vary in hardness, designated as "enameling", "hard", "medium" and "easy". Enameling solder has a high melting point, close to that of the material itself, to prevent the joint desoldering during firing in the enameling process. The remaining solder types are used in decreasing order of hardness during the process of making an item, to prevent a previously soldered seam or joint desoldering while additional sites are soldered. Easy solder is also often used for repair work for the same reason. Flux is also used to prevent joints from desoldering.
Silver solder is also used in manufacturing to join metal parts that cannot be welded. The alloys used for these purposes contain a high proportion of silver (up to 40%), and may also contain cadmium.
Alloys
Different elements serve different roles in the solder alloy:
Antimony is added to increase strength without affecting wettability. Prevents tin pest. Should be avoided on zinc, cadmium, or galvanized metals as the resulting joint is brittle.
Bismuth significantly lowers the melting point and improves wettability. In the presence of sufficient lead and tin, bismuth forms a low-melting ternary Sn-Pb-Bi phase with a melting point of only 95 °C, which diffuses along the grain boundaries and may cause a joint failure at relatively low temperatures. A high-power part pre-tinned with an alloy of lead can therefore desolder under load when soldered with a bismuth-containing solder. Such joints are also prone to cracking. Alloys with more than 47% Bi expand upon cooling, which may be used to offset thermal expansion mismatch stresses. Retards growth of tin whiskers. Relatively expensive, limited availability.
Copper improves resistance to thermal cycle fatigue, and improves wetting properties of the molten solder. It also slows down the rate of dissolution of copper from the board and part leads in the liquid solder. Copper in solders forms intermetallic compounds. A supersaturated (by about 1%) solution of copper in tin may be employed to inhibit dissolution of the thin-film under-bump metallization of BGA chips.
Nickel can be added to the solder alloy to form a supersaturated solution to inhibit dissolution of thin-film under-bump metallization. In tin-copper alloys, a small addition of Ni (<0.5 wt%) inhibits the formation of voids and the interdiffusion of Cu and Sn. Inhibits copper dissolution, even more so in synergy with bismuth. Nickel presence stabilizes the copper-tin intermetallics, inhibits growth of pro-eutectic β-tin dendrites (and therefore increases fluidity near the melting point of the copper-tin eutectic), promotes a shiny bright surface after solidification, and inhibits surface cracking on cooling; such alloys are called "nickel-modified" or "nickel-stabilized". Small amounts increase melt fluidity, most at 0.06%. Suboptimal amounts may be used to avoid patent issues. The increased fluidity improves hole filling and mitigates bridging and icicles.
Cobalt is used instead of nickel to avoid patent issues in improving fluidity. Does not stabilize intermetallic growths in solid alloy.
Indium lowers the melting point and improves ductility. In the presence of lead it forms a ternary compound that undergoes a phase change at 114 °C. Very high cost (several times that of silver), low availability. Easily oxidizes, which causes problems for repairs and rework, especially when oxide-removing flux cannot be used, e.g. during GaAs die attachment. Indium alloys are used for cryogenic applications, and for soldering gold, as gold dissolves in indium much less than in tin. Indium can also solder many nonmetals (e.g. glass, mica, alumina, magnesia, titania, zirconia, porcelain, brick, concrete, and marble). Prone to diffusing into semiconductors and causing undesired doping. At elevated temperatures it easily diffuses through metals. Low vapor pressure, suitable for use in vacuum systems. Forms brittle intermetallics with gold; indium-rich solders on thick gold are unreliable. Indium-based solders are prone to corrosion, especially in the presence of chloride ions.
Lead is inexpensive and has suitable properties. Worse wetting than tin. Toxic, being phased out. Retards growth of tin whiskers, inhibits tin pest. Lowers solubility of copper and other metals in tin.
Silver provides mechanical strength, but has worse ductility than lead. In the absence of lead, it improves resistance to fatigue from thermal cycles. Using SnAg solders with HASL-SnPb-coated leads forms a ternary Sn-Pb-Ag phase with a melting point of 179 °C, which moves to the board-solder interface, solidifies last, and separates from the board. Addition of silver to tin significantly lowers the solubility of silver coatings in the tin phase. In the eutectic tin-silver (3.5% Ag) alloy and similar alloys (e.g. SAC305) it tends to form platelets of Ag3Sn, which, if formed near a high-stress spot, may serve as initiating sites for cracks and cause poor shock and drop performance; silver content needs to be kept below 3% to inhibit such problems. High ion mobility, tends to migrate and form short circuits at high humidity under DC bias. Promotes corrosion of solder pots, increases dross formation.
Tin is the usual main structural metal of the alloy. It has good strength and wetting. On its own it is prone to tin pest and the growth of tin whiskers. It readily dissolves silver, gold, and to a lesser but still significant extent many other metals, e.g. copper; this is a particular concern for tin-rich alloys with higher melting points and reflow temperatures.
Zinc lowers the melting point and is low-cost. However, it is highly susceptible to corrosion and oxidation in air, therefore zinc-containing alloys are unsuitable for some purposes, e.g. wave soldering, and zinc-containing solder pastes have shorter shelf life than zinc-free. Can form brittle Cu-Zn intermetallic layers in contact with copper. Readily oxidizes which impairs wetting, requires a suitable flux.
Germanium in tin-based lead-free solders influences formation of oxides; at below 0.002% it increases formation of oxides. Optimal concentration for suppressing oxidation is at 0.005%. Used in e.g. Sn100C alloy. Patented.
Rare-earth elements, when added in small amounts, refine the matrix structure in tin-copper alloys by segregating impurities at the grain boundaries. However, excessive addition results in the formation of tin whiskers; it also results in spurious rare earth phases, which easily oxidize and deteriorate the solder properties.
Phosphorus is used as antioxidant to inhibit dross formation. Decreases fluidity of tin-copper alloys.
Impurities
Impurities usually enter the solder reservoir by dissolving the metals present in the assemblies being soldered. Dissolution of process equipment is not common, as the materials are usually chosen to be insoluble in solder.
Aluminium – little solubility, causes sluggishness of solder and dull gritty appearance due to formation of oxides. Addition of antimony to solders forms Al-Sb intermetallics that are segregated into dross. Promotes embrittlement.
Antimony – added intentionally, up to 0.3% improves wetting, larger amounts slowly degrade wetting. Increases melting point.
Arsenic – forms thin intermetallics with adverse effects on mechanical properties, causes dewetting of brass surfaces
Cadmium – causes sluggishness of solder, forms oxides and tarnishes
Copper – most common contaminant, forms needle-shaped intermetallics, causes sluggishness of solders, grittiness of alloys, decreased wetting
Gold – easily dissolves, forms brittle intermetallics, contamination above 0.5% causes sluggishness and decreases wetting. Lowers melting point of tin-based solders. Higher-tin alloys can absorb more gold without embrittlement.
Iron – forms intermetallics, causes grittiness, but rate of dissolution is very low; readily dissolves in lead-tin above 427 °C.
Lead – causes Restriction of Hazardous Substances Directive compliance problems at above 0.1%.
Nickel – causes grittiness, very little solubility in Sn-Pb
Phosphorus – forms tin and lead phosphides, causes grittiness and dewetting, present in electroless nickel plating
Silver – often added intentionally, in high amounts forms intermetallics that cause grittiness and formation of pimples on the solder surface, potential for embrittlement
Sulfur – forms lead and tin sulfides, causes dewetting
Zinc – in melt forms excessive dross, in solidified joints rapidly oxidizes on the surface; zinc oxide is insoluble in fluxes, impairing repairability; copper and nickel barrier layers may be needed when soldering brass to prevent zinc migration to the surface; potential for embrittlement
Board finishes vs wave soldering bath impurities buildup:
HASL, lead-free (hot air solder leveling): usually virtually pure tin. Does not contaminate high-tin baths.
HASL, leaded: some lead dissolves into the bath
ENIG (Electroless Nickel Immersion Gold): typically 100-200 microinches of nickel with 3-5 microinches of gold on top. Some gold dissolves into the bath, but buildup exceeding limits is rare.
Immersion silver: typically 10–15 microinches of silver. Some dissolves into the bath, but buildup exceeding limits is rare.
Immersion tin: does not contaminate high-tin baths.
OSP (Organic solderability preservative): usually imidazole-class compounds forming a thin layer on the copper surface. Copper readily dissolves in high-tin baths.
Flux
Flux is a reducing agent designed to help reduce (return oxidized metals to their metallic state) metal oxides at the points of contact to improve the electrical connection and mechanical strength. The two principal types of flux are acid flux (sometimes called "active flux"), containing strong acids, used for metal mending and plumbing, and rosin flux (sometimes called "passive flux"), used in electronics. Rosin flux comes in a variety of "activities", corresponding roughly to the speed and effectiveness of the organic acid components of the rosin in dissolving metallic surface oxides, and consequently the corrosiveness of the flux residue.
Due to concerns over atmospheric pollution and hazardous waste disposal, the electronics industry has been gradually shifting from rosin flux to water-soluble flux, which can be removed with deionized water and detergent, instead of hydrocarbon solvents. Water-soluble fluxes are generally more conductive than traditionally used electrical / electronic fluxes and so have more potential for electrically interacting with a circuit; in general it is important to remove their traces after soldering. Some rosin type flux traces likewise should be removed, and for the same reason.
In contrast to using traditional bars or coiled wires of all-metal solder and manually applying flux to the parts being joined, much hand soldering since the mid-20th century has used flux-core solder. This is manufactured as a coiled wire of solder, with one or more continuous bodies of inorganic acid or rosin flux embedded lengthwise inside it. As the solder melts onto the joint, it frees the flux, which is released onto the joint as well.
Operation
The solidifying behavior depends on the alloy composition. Pure metals solidify at a certain temperature, forming crystals of one phase. Eutectic alloys also solidify at a single temperature, all components precipitating simultaneously in so-called coupled growth. Non-eutectic compositions on cooling start to first precipitate the non-eutectic phase; dendrites when it is a metal, large crystals when it is an intermetallic compound. Such a mixture of solid particles in a molten eutectic is referred to as a mushy state. Even a relatively small proportion of solids in the liquid can dramatically lower its fluidity.
The temperature of total solidification is the solidus of the alloy, the temperature at which all components are molten is the liquidus.
The mushy state is desired where a degree of plasticity is beneficial for creating the joint, allowing filling of larger gaps or being wiped over the joint (e.g. when soldering pipes). In hand soldering of electronics it may be detrimental, as the joint may appear solidified while it is not yet. Premature handling of such a joint then disrupts its internal structure and leads to compromised mechanical integrity.
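As a rough illustration of the mushy (pasty) range, the fraction of solid between the solidus and liquidus can be estimated with a simple lever-rule-style linear interpolation; this is only a sketch under an assumed linear relation, and the solidus/liquidus temperatures and the printed values below are hypothetical placeholders, not data for any real alloy.

```python
# Minimal sketch: estimate solid fraction in the mushy zone with a simple
# linear (lever-rule-like) interpolation between solidus and liquidus.
# The solidus/liquidus values below are placeholders, not real alloy data.

def solid_fraction(temp_c: float, solidus_c: float, liquidus_c: float) -> float:
    """Return an approximate solid fraction (0 = fully liquid, 1 = fully solid)."""
    if temp_c >= liquidus_c:
        return 0.0
    if temp_c <= solidus_c:
        return 1.0
    # Linear interpolation inside the mushy (pasty) range.
    return (liquidus_c - temp_c) / (liquidus_c - solidus_c)

if __name__ == "__main__":
    solidus, liquidus = 183.0, 215.0   # hypothetical non-eutectic alloy
    for t in (220, 210, 200, 190, 180):
        print(f"{t} degC -> solid fraction ~ {solid_fraction(t, solidus, liquidus):.2f}")
```

The real solid fraction follows the alloy's phase diagram rather than a straight line, but the sketch shows why a joint in this temperature window can look solid while still being mechanically weak.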
Intermetallics
Many different intermetallic compounds are formed during solidifying of solders and during their reactions with the soldered surfaces. The intermetallics form distinct phases, usually as inclusions in a ductile solid solution matrix, but they can also form the matrix itself with metal inclusions, or form crystalline matter with different intermetallics. Intermetallics are often hard and brittle. Finely distributed intermetallics in a ductile matrix yield a hard alloy, while a coarse structure gives a softer alloy. A range of intermetallics often forms between the metal and the solder, with increasing proportion of the metal; for copper and tin, for example, a layered Cu-Cu3Sn-Cu6Sn5-Sn structure forms. Layers of intermetallics can form between the solder and the soldered material. These layers may cause mechanical reliability weakening and brittleness, increased electrical resistance, or electromigration and formation of voids. The gold-tin intermetallics layer is responsible for poor mechanical reliability of tin-soldered gold-plated surfaces where the gold plating did not completely dissolve in the solder.
Two processes play a role in a solder joint formation: interaction between the substrate and molten solder, and solid-state growth of intermetallic compounds. The base metal dissolves in the molten solder in an amount depending on its solubility in the solder. The active constituent of the solder reacts with the base metal with a rate dependent on the solubility of the active constituents in the base metal. The solid-state reactions are more complex – the formation of intermetallics can be inhibited by changing the composition of the base metal or the solder alloy, or by using a suitable barrier layer to inhibit diffusion of the metals.
Some example interactions include:
Gold and palladium readily dissolve in solders. Copper and nickel tend to form intermetallic layers during normal soldering profiles. Indium forms intermetallics as well.
Indium-gold intermetallics are brittle and occupy about 4 times more volume than the original gold. Bonding wires are especially susceptible to indium attack. Such intermetallic growth, together with thermal cycling, can lead to failure of the bonding wires.
Copper plated with nickel and gold is often used. The thin gold layer facilitates good solderability of nickel as it protects the nickel from oxidation; the layer has to be thin enough to rapidly and completely dissolve so bare nickel is exposed to the solder.
Lead-tin solder layers on copper leads can form copper-tin intermetallic layers; the solder alloy is then locally depleted of tin and forms a lead-rich layer. The Sn-Cu intermetallics can then become exposed to oxidation, resulting in impaired solderability.
Cu6Sn5 – common on the solder-copper interface, forms preferentially when an excess of tin is available; in the presence of nickel, the (Cu,Ni)6Sn5 compound can be formed
Cu3Sn – common on the solder-copper interface, forms preferentially when an excess of copper is available, more thermally stable than Cu6Sn5, often present when higher-temperature soldering occurred
Ni3Sn4 – common on the solder-nickel interface
– very slow formation
Ag3Sn – at higher concentrations of silver (over 3%) in tin, forms platelets that can serve as crack initiation sites.
– β-phase – brittle, forms at excess of tin. Detrimental to properties of tin-based solders to gold-plated layers.
AuIn2 – forms on the boundary between gold and indium-lead solder, acts as a barrier against further dissolution of gold
Preform
A preform is a pre-made shape of solder specially designed for the application where it is to be used. Many methods are used to manufacture the solder preform, stamping being the most common. The solder preform may include the solder flux needed for the soldering process. This can be an internal flux, inside the solder preform, or external, with the solder preform coated.
Similar substances
Glass solder is used to join glasses to other glasses, ceramics, metals, semiconductors, mica, and other materials, in a process called glass frit bonding. The glass solder has to flow and wet the soldered surfaces well below the temperature where deformation or degradation of either of the joined materials or nearby structures (e.g., metallization layers on chips or ceramic substrates) occurs. The temperature at which flowing and wetting are achieved depends on the composition of the glass solder.
| Physical sciences | Specific alloys | Chemistry |
28951 | https://en.wikipedia.org/wiki/Spyware | Spyware | Spyware (a portmanteau for spying software) is any malware that aims to gather information about a person or organization and send it to another entity in a way that harms the user by violating their privacy, endangering their device's security, or other means. This behavior may be present in other malware and in legitimate software. Websites may engage in spyware behaviors like web tracking. Hardware devices may also be affected.
Spyware is frequently associated with advertising and involves many of the same issues. Because these behaviors are so common, and can have non-harmful uses, providing a precise definition of spyware is a difficult task.
History
The first recorded use of the term spyware occurred on October 16, 1995, in a Usenet post that poked fun at Microsoft's business model. Spyware at first denoted software meant for espionage purposes. However, in early 2000 the founder of Zone Labs, Gregor Freund, used the term in a press release for the ZoneAlarm Personal Firewall. Later in 2000, a parent using ZoneAlarm was alerted to the fact that Reader Rabbit, educational software marketed to children by the Mattel toy company, was surreptitiously sending data back to Mattel. Since then, "spyware" has taken on its present sense.
According to a 2005 study by AOL and the National Cyber-Security Alliance, 61 percent of surveyed users' computers were infected with some form of spyware. 92 percent of surveyed users with spyware reported that they did not know of its presence, and 91 percent reported that they had not given permission for the installation of the spyware.
Spyware has become one of the preeminent security threats to computer systems running Microsoft Windows operating systems. Computers on which Internet Explorer (IE) was the primary browser are particularly vulnerable to such attacks, not only because IE was the most widely used, but also because its tight integration with Windows allows spyware access to crucial parts of the operating system.
Before Internet Explorer 6 SP2 was released as part of Windows XP Service Pack 2, the browser would automatically display an installation window for any ActiveX component that a website wanted to install. The combination of user ignorance about these changes, and the assumption by Internet Explorer that all ActiveX components are benign, helped to spread spyware significantly. Many spyware components would also make use of exploits in JavaScript, Internet Explorer and Windows to install without user knowledge or permission.
The Windows Registry contains multiple sections where modification of key values allows software to be executed automatically when the operating system boots. Spyware can exploit this design to circumvent attempts at removal. The spyware typically links itself to each location in the registry that allows execution. Once running, the spyware will periodically check if any of these links are removed. If so, they will be automatically restored. This ensures that the spyware will execute when the operating system is booted, even if some (or most) of the registry links are removed.
Overview
Spyware is mostly classified into four types: adware, system monitors, tracking including web tracking, and trojans; examples of other notorious types include digital rights management capabilities that "phone home", keyloggers, rootkits, and web beacons. These four categories are not mutually exclusive and they have similar tactics in attacking networks and devices. The main goal is to install, hack into the network, avoid being detected, and safely remove themselves from the network.
Spyware is mostly used for stealing information and storing Internet users' movements on the Web, and for serving up pop-up ads to Internet users. Whenever spyware is used for malicious purposes, its presence is typically hidden from the user and can be difficult to detect. Some spyware, such as keyloggers, may be installed by the owner of a shared, corporate, or public computer intentionally in order to monitor users.
While the term spyware suggests software that monitors a user's computer, the functions of spyware can extend beyond simple monitoring. Spyware can collect almost any type of data, including personal information like internet surfing habits, user logins, and bank or credit account information. Spyware can also interfere with a user's control of a computer by installing additional software or redirecting web browsers. Some spyware can change computer settings, which can result in slow Internet connection speeds, un-authorized changes in browser settings, or changes to software settings.
Sometimes, spyware is included along with genuine software, and may come from a malicious website or may have been added to the intentional functionality of genuine software (see the paragraph about Facebook, below). In response to the emergence of spyware, a small industry has sprung up dealing in anti-spyware software. Running anti-spyware software has become a widely recognized element of computer security practices, especially for computers running Microsoft Windows. A number of jurisdictions have passed anti-spyware laws, which usually target any software that is surreptitiously installed to control a user's computer.
In German-speaking countries, spyware used or made by the government is called govware by computer experts (in common parlance: Staatstrojaner, literally "government Trojan"). Govware is typically trojan horse software used to intercept communications from the target computer. Some countries, like Switzerland and Germany, have a legal framework governing the use of such software. In the US, the term "policeware" has been used for similar purposes.
Use of the term "spyware" has eventually declined as the practice of tracking users has been pushed ever further into the mainstream by major websites and data mining companies; these generally break no known laws and compel users to be tracked, not by fraudulent practices per se, but by the default settings created for users and the language of terms-of-service agreements.
In one documented example, CBS/CNET News reported on March 7, 2011, that an analysis in The Wall Street Journal had revealed the practice of Facebook and other websites of tracking users' browsing activity, linked to their identity, far beyond users' visits and activity on the Facebook site itself. The report stated: "Here's how it works. You go to Facebook, you log in, you spend some time there, and then ... you move on without logging out. Let's say the next site you go to is The New York Times. Those buttons, without you clicking on them, have just reported back to Facebook and Twitter that you went there and also your identity within those accounts. Let's say you moved on to something like a site about depression. This one also has a tweet button, a Google widget, and those, too, can report back who you are and that you went there." The Wall Street Journal analysis was researched by Brian Kennish, founder of Disconnect, Inc.
Routes of infection
Spyware does not necessarily spread in the same way as a virus or worm because infected systems generally do not attempt to transmit or copy the software to other computers. Instead, spyware installs itself on a system by deceiving the user or by exploiting software vulnerabilities.
Most spyware is installed without knowledge, or by using deceptive tactics. Spyware may try to deceive users by bundling itself with desirable software. Other common tactics include using a Trojan horse or spy gadgets that look like normal devices but turn out to be something else, such as a USB keylogger. These devices actually are connected to the device as memory units but are capable of recording each stroke made on the keyboard. Some spyware authors infect a system through security holes in the Web browser or in other software. When the user navigates to a Web page controlled by the spyware author, the page contains code which attacks the browser and forces the download and installation of spyware.
The installation of spyware frequently involves Internet Explorer. Its popularity and history of security issues have made it a frequent target. Its deep integration with the Windows environment make it susceptible to attack into the Windows operating system. Internet Explorer also serves as a point of attachment for spyware in the form of Browser Helper Objects, which modify the browser's behaviour.
Effects and behaviors
A spyware program rarely operates alone on a computer; an affected machine usually has multiple infections. Users frequently notice unwanted behavior and degradation of system performance. A spyware infestation can create significant unwanted CPU activity, disk usage, and network traffic. Stability issues, such as applications freezing, failure to boot, and system-wide crashes are also common. Usually, this effect is intentional, but may be caused by the malware simply requiring large amounts of computing power, disk space, or network usage. Spyware that interferes with networking software commonly causes difficulty connecting to the Internet.
In some infections, the spyware is not even evident. Users assume in those situations that the performance issues relate to faulty hardware, Windows installation problems, or another malware infection. Some owners of badly infected systems resort to contacting technical support experts, or even buying a new computer because the existing system "has become too slow". Badly infected systems may require a clean reinstallation of all their software in order to return to full functionality.
Moreover, some types of spyware disable software firewalls and antivirus software, and/or reduce browser security settings, which opens the system to further opportunistic infections. Some spyware disables or even removes competing spyware programs, on the grounds that more spyware-related annoyances increase the likelihood that users will take action to remove the programs.
Keyloggers are sometimes part of malware packages downloaded onto computers without the owners' knowledge. Some keylogger software is freely available on the internet, while others are commercial or private applications. Most keyloggers allow not only keyboard keystrokes to be captured, they also are often capable of collecting screen captures from the computer.
A typical Windows user has administrative privileges, mostly for convenience. Because of this, any program the user runs has unrestricted access to the system. As with other operating systems, Windows users are able to follow the principle of least privilege and use non-administrator accounts. Alternatively, they can reduce the privileges of specific vulnerable Internet-facing processes, such as Internet Explorer.
Since Windows Vista, by default, a computer administrator runs everything under limited user privileges. When a program requires administrative privileges, a User Account Control pop-up will prompt the user to allow or deny the action. This improves on the design used by previous versions of Windows.
Spyware is also known as tracking software.
Remedies and prevention
As the spyware threat has evolved, a number of techniques have emerged to counteract it. These include programs designed to remove or block spyware, as well as various user practices which reduce the chance of getting spyware on a system.
Nonetheless, spyware remains a costly problem. When a large number of pieces of spyware have infected a Windows computer, the only remedy may involve backing up user data and fully reinstalling the operating system. For instance, some spyware cannot be completely removed by products from Symantec, Microsoft, or PC Tools.
Anti-spyware programs
Many programmers and some commercial firms have released products designed to remove or block spyware. Programs such as PC Tools' Spyware Doctor, Lavasoft's Ad-Aware SE and Patrick Kolla's Spybot - Search & Destroy rapidly gained popularity as tools to remove, and in some cases intercept, spyware programs. In December 2004, Microsoft acquired the GIANT AntiSpyware software, rebranding it as Microsoft AntiSpyware (Beta 1) and releasing it as a free download for Genuine Windows XP and Windows 2003 users. In November, 2005, it was renamed Windows Defender.
Major anti-virus firms such as Symantec, PC Tools, McAfee and Sophos have also added anti-spyware features to their existing anti-virus products. Early on, anti-virus firms expressed reluctance to add anti-spyware functions, citing lawsuits brought by spyware authors against the authors of web sites and programs which described their products as "spyware". However, recent versions of these major firms' home and business anti-virus products do include anti-spyware functions, albeit treated differently from viruses. Symantec Anti-Virus, for instance, categorizes spyware programs as "extended threats" and now offers real-time protection against these threats.
Commercially marketed monitoring tools, themselves often classified as spyware or stalkerware, include FlexiSPY, Mobilespy, mSpy, TheWiSPY, and UMobix.
How anti-spyware software works
Anti-spyware programs can combat spyware in two ways:
They can provide real-time protection in a manner similar to that of anti-virus protection: all incoming network data is scanned for spyware, and any detected threats are blocked.
Anti-spyware software programs can be used solely for detection and removal of spyware software that has already been installed into the computer. This kind of anti-spyware can often be set to scan on a regular schedule.
Such programs inspect the contents of the Windows registry, operating system files, and installed programs, and remove files and entries which match a list of known spyware. Real-time protection from spyware works identically to real-time anti-virus protection: the software scans disk files at download time, and blocks the activity of components known to represent spyware.
In some cases, it may also intercept attempts to install start-up items or to modify browser settings. Earlier versions of anti-spyware programs focused chiefly on detection and removal. Javacool Software's SpywareBlaster, one of the first to offer real-time protection, blocked the installation of ActiveX-based spyware.
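The list-matching behaviour described above can be illustrated with a minimal sketch; the hash value and the scanning scope here are invented placeholders, and real anti-spyware engines use far richer signatures (byte patterns, heuristics, registry checks) than this illustration.

```python
# Minimal sketch of signature-based scanning: hash files and compare against
# a list of known-bad hashes. The hash below is a placeholder, not a real signature.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder signature; a real list would hold published hashes
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> list[Path]:
    """Return files under `root` whose hash matches a known-bad signature."""
    hits = []
    for p in Path(root).rglob("*"):
        if p.is_file():
            try:
                if sha256_of(p) in KNOWN_BAD_SHA256:
                    hits.append(p)
            except OSError:
                pass  # unreadable files are skipped; a real scanner would log them
    return hits

if __name__ == "__main__":
    for hit in scan("."):
        print("possible match:", hit)
```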
Like most anti-virus software, many anti-spyware/adware tools require a frequently updated database of threats. As new spyware programs are released, anti-spyware developers discover and evaluate them, adding to the list of known spyware, which allows the software to detect and remove new spyware. As a result, anti-spyware software is of limited usefulness without regular updates. Updates may be installed automatically or manually.
A popular generic spyware removal tool, used by those with a certain degree of expertise, is HijackThis, which scans certain areas of the Windows OS where spyware often resides and presents a list of items to delete manually. As most of the items are legitimate Windows files/registry entries, it is advised for those who are less knowledgeable on this subject to post a HijackThis log on one of the numerous antispyware sites and let the experts decide what to delete.
If a spyware program is not blocked and manages to get itself installed, it may resist attempts to terminate or uninstall it. Some programs work in pairs: when an anti-spyware scanner (or the user) terminates one running process, the other one respawns the killed program. Likewise, some spyware will detect attempts to remove registry keys and immediately add them again. Usually, booting the infected computer in safe mode allows an anti-spyware program a better chance of removing persistent spyware. Killing the process tree may also work.
Security practices
To detect spyware, computer users have found several practices useful in addition to installing anti-spyware programs. Many users have installed a web browser other than Internet Explorer, such as Mozilla Firefox or Google Chrome. Though no browser is completely safe, Internet Explorer was once at a greater risk for spyware infection due to its large user base as well as vulnerabilities such as ActiveX but these three major browsers are now close to equivalent when it comes to security.
Some ISPs—particularly colleges and universities—have taken a different approach to blocking spyware: they use their network firewalls and web proxies to block access to Web sites known to install spyware. On March 31, 2005, Cornell University's Information Technology department released a report detailing the behavior of one particular piece of proxy-based spyware, Marketscore, and the steps the university took to intercept it. Many other educational institutions have taken similar steps.
Individual users can also install firewalls from a variety of companies. These monitor the flow of information going to and from a networked computer and provide protection against spyware and malware. Some users install a large hosts file which prevents the user's computer from connecting to known spyware-related web addresses. Spyware may get installed via certain shareware programs offered for download. Downloading programs only from reputable sources can provide some protection from this source of attack.
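The hosts-file approach mentioned above simply maps unwanted host names to a non-routable address. A minimal sketch follows; the domains are hypothetical examples, and on real systems the hosts file lives at /etc/hosts (Unix-like) or C:\Windows\System32\drivers\etc\hosts and requires administrative rights to edit.

```python
# Minimal sketch: build hosts-file entries that black-hole hypothetical
# spyware-related domains by pointing them at 0.0.0.0.
BLOCKED_DOMAINS = [
    "tracker.example.com",   # hypothetical
    "ads.spyware.example",   # hypothetical
]

def blocklist_lines(domains):
    return [f"0.0.0.0 {d}" for d in domains]

if __name__ == "__main__":
    # Print the lines instead of editing the real hosts file in this sketch.
    for line in blocklist_lines(BLOCKED_DOMAINS):
        print(line)
```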
Individual users can also use a cellphone or computer with a physical (electrical) switch, or an isolated electronic switch, that disconnects the microphone and camera without bypass, and keep the switch in the disconnected position when the devices are not in use; this limits the information that spyware can collect. (This policy is recommended by the NIST Guidelines for Managing the Security of Mobile Devices, 2013.)
Applications
"Stealware" and affiliate fraud
A few spyware vendors, notably 180 Solutions, have written what the New York Times has dubbed "stealware", and what spyware researcher Ben Edelman terms affiliate fraud, a form of click fraud. Stealware diverts the payment of affiliate marketing revenues from the legitimate affiliate to the spyware vendor.
Spyware which attacks affiliate networks places the spyware operator's affiliate tag on the user's activity – replacing any other tag, if there is one. The spyware operator is the only party that gains from this. The user has their choices thwarted, a legitimate affiliate loses revenue, networks' reputations are injured, and vendors are harmed by having to pay out affiliate revenues to an "affiliate" who is not party to a contract. Affiliate fraud is a violation of the terms of service of most affiliate marketing networks. Mobile devices can also be vulnerable to chargeware, which manipulates users into illegitimate mobile charges.
Identity theft and fraud
Spyware has been closely associated with identity theft. In August 2005, researchers from security software firm Sunbelt Software suspected the creators of the common CoolWebSearch spyware had used it to transmit "chat sessions, user names, passwords, bank information, etc."; however, it turned out that "it actually (was) its own sophisticated criminal little trojan that's independent of CWS." This case was investigated by the FBI.
The Federal Trade Commission estimates that 27.3 million Americans have been victims of identity theft, and that financial losses from identity theft totaled nearly $48 billion for businesses and financial institutions and at least $5 billion in out-of-pocket expenses for individuals.
Digital rights management
Some copy-protection technologies have borrowed from spyware. In 2005, Sony BMG Music Entertainment was found to be using rootkits in its XCP digital rights management technology. Like spyware, not only was it difficult to detect and uninstall, it was so poorly written that most efforts to remove it could have rendered computers unable to function.
Texas Attorney General Greg Abbott filed suit, and three separate class-action suits were filed. Sony BMG later provided a workaround on its website to help users remove it.
Beginning on April 25, 2006, Microsoft's Windows Genuine Advantage Notifications application was installed on most Windows PCs as a "critical security update". While the main purpose of this deliberately uninstallable application is to ensure the copy of Windows on the machine was lawfully purchased and installed, it also installs software that has been accused of "phoning home" on a daily basis, like spyware. It can be removed with the RemoveWGA tool.
Personal relationships
Stalkerware is spyware that has been used to monitor electronic activities of partners in intimate relationships. At least one software package, Loverspy, was specifically marketed for this purpose. Depending on local laws regarding communal/marital property, observing a partner's online activity without their consent may be illegal; the author of Loverspy and several users of the product were indicted in California in 2005 on charges of wiretapping and various computer crimes.
Browser cookies
Anti-spyware programs often report Web advertisers' HTTP cookies, the small text files that track browsing activity, as spyware. While they are not always inherently malicious, many users object to third parties using space on their personal computers for their business purposes, and many anti-spyware programs offer to remove them.
Shameware
Shameware or "accountability software" is a type of spyware that is not hidden from the user, but operates with their knowledge, if not necessarily their consent. Parents, religious leaders or other authority figures may require their children or congregation members to install such software, which is intended to detect the viewing of pornography or other content deemed inappropriate, and to report it to the authority figure, who may then confront the user about it.
Spyware programs
These common spyware programs illustrate the diversity of behaviors found in these attacks. Note that as with computer viruses, researchers give names to spyware programs which may not be used by their creators. Programs may be grouped into "families" based not on shared program code, but on common behaviors, or by "following the money" of apparent financial or business connections. For instance, a number of the spyware programs distributed by Claria are collectively known as "Gator". Likewise, programs that are frequently installed together may be described as parts of the same spyware package, even if they function separately.
Spyware vendors
Spyware vendors include NSO Group, which in the 2010s sold spyware to governments for spying on human rights activists and journalists. NSO Group was investigated by Citizen Lab.
Rogue anti-spyware programs
Malicious programmers have released a large number of rogue (fake) anti-spyware programs, and widely distributed Web banner ads can warn users that their computers have been infected with spyware, directing them to purchase programs which do not actually remove spyware—or else, may add more spyware of their own.
The proliferation of fake or spoofed antivirus products that bill themselves as antispyware can be troublesome. Users may receive popups prompting them to install them to protect their computer, when it will in fact add spyware. It is recommended that users do not install any freeware claiming to be anti-spyware unless it is verified to be legitimate. Some known offenders include:
AntiVirus 360 & Antivirus 2009
MacSweeper
Pest Trap
PSGuard
Spy Wiper
Spydawn
Spylocked
Spysheriff
SpyShredder
Spyware Quake
SpywareStrike
WinAntiVirus Pro 2006
Windows Police Pro
WinFixer
WorldAntiSpy
Fake antivirus products constitute 15 percent of all malware.
On January 26, 2006, Microsoft and the Washington state attorney general filed suit against Secure Computer for its Spyware Cleaner product.
Legal issues
Criminal law
Unauthorized access to a computer is illegal under computer crime laws, such as the U.S. Computer Fraud and Abuse Act, the U.K.'s Computer Misuse Act, and similar laws in other countries. Since owners of computers infected with spyware generally claim that they never authorized the installation, a prima facie reading would suggest that the promulgation of spyware would count as a criminal act. Law enforcement has often pursued the authors of other malware, particularly viruses. However, few spyware developers have been prosecuted, and many operate openly as strictly legitimate businesses, though some have faced lawsuits.
Spyware producers argue that, contrary to the users' claims, users do in fact give consent to installations. Spyware that comes bundled with shareware applications may be described in the legalese text of an end-user license agreement (EULA). Many users habitually ignore these purported contracts, but spyware companies such as Claria say these demonstrate that users have consented.
Despite the ubiquity of EULAs agreements, under which a single click can be taken as consent to the entire text, relatively little caselaw has resulted from their use. It has been established in most common law jurisdictions that this type of agreement can be a binding contract in certain circumstances. This does not, however, mean that every such agreement is a contract, or that every term in one is enforceable.
Some jurisdictions, including the U.S. states of Iowa and Washington, have passed laws criminalizing some forms of spyware. Such laws make it illegal for anyone other than the owner or operator of a computer to install software that alters Web-browser settings, monitors keystrokes, or disables computer-security software.
In the United States, lawmakers introduced a bill in 2005 entitled the Internet Spyware Prevention Act, which would imprison creators of spyware.
Additionally, several diplomatic efforts have been made to curb the growing use of spyware. Launched by France and the UK in early 2024, the Pall Mall Process aims to address the proliferation and irresponsible use of commercial cyber intrusion capabilities.
Administrative sanctions
US FTC actions
The US Federal Trade Commission has sued Internet marketing organizations under the "unfairness doctrine" to make them stop infecting consumers' PCs with spyware. In one case, that against Seismic Entertainment Productions, the FTC accused the defendants of developing a program that seized control of PCs nationwide, infected them with spyware and other malicious software, bombarded them with a barrage of pop-up advertising for Seismic's clients, exposed the PCs to security risks, and caused them to malfunction. Seismic then offered to sell the victims an "antispyware" program to fix the computers, and stop the popups and other problems that Seismic had caused. On November 21, 2006, a settlement was entered in federal court under which a $1.75 million judgment was imposed in one case and $1.86 million in another, but the defendants were insolvent.
In a second case, brought against CyberSpy Software LLC, the FTC charged that CyberSpy marketed and sold "RemoteSpy" keylogger spyware to clients who would then secretly monitor unsuspecting consumers' computers. According to the FTC, Cyberspy touted RemoteSpy as a "100% undetectable" way to "Spy on Anyone. From Anywhere." The FTC has obtained a temporary order prohibiting the defendants from selling the software and disconnecting from the Internet any of their servers that collect, store, or provide access to information that this software has gathered. The case is still in its preliminary stages. A complaint filed by the Electronic Privacy Information Center (EPIC) brought the RemoteSpy software to the FTC's attention.
Netherlands OPTA
An administrative fine, the first of its kind in Europe, has been issued by the Independent Authority of Posts and Telecommunications (OPTA) from the Netherlands. It applied fines totaling €1,000,000 for infecting 22 million computers. The spyware concerned is called DollarRevenue. The law articles that were violated are art. 4.1 of the Decision on universal service providers and on the interests of end users; the fines were issued based on art. 15.4 taken together with art. 15.10 of the Dutch telecommunications law.
Civil law
Former New York State Attorney General and former Governor of New York Eliot Spitzer has pursued spyware companies for fraudulent installation of software. In a suit brought in 2005 by Spitzer, the California firm Intermix Media, Inc. ended up settling, by agreeing to pay US$7.5 million and to stop distributing spyware.
The hijacking of Web advertisements has also led to litigation. In June 2002, a number of large Web publishers sued Claria for replacing advertisements, but settled out of court.
Courts have not yet had to decide whether advertisers can be held liable for spyware that displays their ads. In many cases, the companies whose advertisements appear in spyware pop-ups do not directly do business with the spyware firm. Rather, they have contracted with an advertising agency, which in turn contracts with an online subcontractor who gets paid by the number of "impressions" or appearances of the advertisement. Some major firms such as Dell Computer and Mercedes-Benz have sacked advertising agencies that have run their ads in spyware.
Libel suits by spyware developers
Litigation has gone both ways. Since "spyware" has become a common pejorative, some makers have filed libel and defamation actions when their products have been so described. In 2003, Gator (now known as Claria) filed suit against the website PC Pitstop for describing its program as "spyware". PC Pitstop settled, agreeing not to use the word "spyware", but continues to describe harm caused by the Gator/Claria software. As a result, other anti-spyware and anti-virus companies have also used other terms such as "potentially unwanted programs" or greyware to denote these products.
WebcamGate
In the 2010 WebcamGate case, plaintiffs charged that two suburban Philadelphia high schools secretly spied on students by surreptitiously and remotely activating webcams embedded in school-issued laptops the students were using at home, and therefore infringed on their privacy rights. The schools loaded each student's computer with LANrev's remote activation tracking software. This included the now-discontinued "TheftTrack". While TheftTrack was not enabled by default on the software, the program allowed the school district to elect to activate it, and to choose which of the TheftTrack surveillance options the school wanted to enable.
TheftTrack allowed school district employees to secretly remotely activate the webcam embedded in the student's laptop, above the laptop's screen. That allowed school officials to secretly take photos through the webcam, of whatever was in front of it and in its line of sight, and send the photos to the school's server. The LANrev software disabled the webcams for all other uses (e.g., students were unable to use Photo Booth or video chat), so most students mistakenly believed their webcams did not work at all. On top of the webcam surveillance, TheftTrack allowed school officials to take screenshots and send them to the school's server. School officials were also granted the ability to take snapshots of instant messages, web browsing, music playlists, and written compositions. The schools admitted to secretly snapping over 66,000 webshots and screenshots, including webcam shots of students in their bedrooms.
| Technology | Computer security | null |
4520673 | https://en.wikipedia.org/wiki/Rapidity | Rapidity | In special relativity, the classical concept of velocity is converted to rapidity to accommodate the limit determined by the speed of light. Velocities must be combined by Einstein's velocity-addition formula. For low speeds, rapidity and velocity are almost exactly proportional but, for higher velocities, rapidity takes a larger value, with the rapidity of light being infinite.
Mathematically, rapidity can be defined as the hyperbolic angle that differentiates two frames of reference in relative motion, each frame being associated with distance and time coordinates.
Using the inverse hyperbolic function $\operatorname{artanh}$, the rapidity $w$ corresponding to velocity $v$ is $w = \operatorname{artanh}(v/c)$, where $c$ is the speed of light. For low speeds, by the small-angle approximation, $w$ is approximately $v/c$. Since in relativity any velocity $v$ is constrained to the interval $-c < v < c$, the ratio $v/c$ satisfies $-1 < v/c < 1$. The inverse hyperbolic tangent has the unit interval $(-1, 1)$ for its domain and the whole real line for its image; that is, the interval $-c < v < c$ maps onto $-\infty < w < \infty$.
History
In 1908 Hermann Minkowski explained how the Lorentz transformation could be seen as simply a hyperbolic rotation of the spacetime coordinates, i.e., a rotation through an imaginary angle. This angle therefore represents (in one spatial dimension) a simple additive measure of the velocity between frames. The rapidity parameter replacing velocity was introduced in 1910 by Vladimir Varićak and by E. T. Whittaker. The parameter was named rapidity by Alfred Robb (1911) and this term was adopted by many subsequent authors, such as Ludwik Silberstein (1914), Frank Morley (1936) and Wolfgang Rindler (2001).
Area of a hyperbolic sector
The quadrature of the hyperbola $xy = 1$ by Grégoire de Saint-Vincent established the natural logarithm as the area of a hyperbolic sector, or an equivalent area against an asymptote. In spacetime theory, the connection of events by light divides the universe into Past, Future, or Elsewhere based on a Here and Now. On any line in space, a light beam may be directed left or right. Take the x-axis as the events passed by the right beam and the y-axis as the events of the left beam. Then a resting frame has time along the diagonal $x = y$. The rectangular hyperbola $xy = 1$ can be used to gauge velocities (in the first quadrant). Zero velocity corresponds to $(1, 1)$. Any point on the hyperbola has light-cone coordinates of the form $(e^{w}, e^{-w})$, where $w$ is the rapidity, which equals the area of the hyperbolic sector from $(1, 1)$ to these coordinates. Many authors refer instead to the unit hyperbola $t^2 - x^2 = 1$, using rapidity for the parameter, as in the standard spacetime diagram. There the axes are measured by clock and meter-stick, more familiar benchmarks, and the basis of spacetime theory. So the delineation of rapidity as a hyperbolic parameter of beam-space is a reference to the seventeenth-century origin of our precious transcendental functions, and a supplement to spacetime diagramming.
Lorentz boost
The rapidity $w$ arises in the linear representation of a Lorentz boost as a vector-matrix product
$$\begin{pmatrix} ct' \\ x' \end{pmatrix} = \begin{pmatrix} \cosh w & -\sinh w \\ -\sinh w & \cosh w \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix} = \mathbf{\Lambda}(w) \begin{pmatrix} ct \\ x \end{pmatrix}.$$
The matrix $\mathbf{\Lambda}(w)$ is of the type $\begin{pmatrix} p & q \\ q & p \end{pmatrix}$ with $p$ and $q$ satisfying $p^2 - q^2 = 1$, so that $(p, q)$ lies on the unit hyperbola. Such matrices form the indefinite orthogonal group O(1,1) with one-dimensional Lie algebra spanned by the anti-diagonal unit matrix, showing that the rapidity is the coordinate on this Lie algebra. This action may be depicted in a spacetime diagram. In matrix exponential notation, $\mathbf{\Lambda}(w)$ can be expressed as $\mathbf{\Lambda}(w) = e^{w \mathbf{Z}}$, where $\mathbf{Z}$ is the negative of the anti-diagonal unit matrix:
$$\mathbf{Z} = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}.$$
A key property of the matrix exponential is $e^{(w_1 + w_2)\mathbf{Z}} = e^{w_1 \mathbf{Z}}\, e^{w_2 \mathbf{Z}}$, from which it immediately follows that $\mathbf{\Lambda}(w_1 + w_2) = \mathbf{\Lambda}(w_1)\,\mathbf{\Lambda}(w_2)$.
This establishes the useful additive property of rapidity: if $A$, $B$ and $C$ are frames of reference, then
$$w_{AC} = w_{AB} + w_{BC},$$
where $w_{PQ}$ denotes the rapidity of a frame of reference $Q$ relative to a frame of reference $P$. The simplicity of this formula contrasts with the complexity of the corresponding velocity-addition formula.
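This additive behavior can be checked numerically; the following minimal Python sketch (taking $c = 1$ and two arbitrarily chosen velocities) confirms that adding rapidities reproduces Einstein's velocity-addition formula.

```python
# Minimal numerical check (with c = 1): rapidities add, while velocities
# combine through Einstein's velocity-addition formula.
import math

def rapidity(beta: float) -> float:
    return math.atanh(beta)

beta_ab, beta_bc = 0.6, 0.7          # arbitrary example velocities (fractions of c)
w_ac = rapidity(beta_ab) + rapidity(beta_bc)
beta_ac = (beta_ab + beta_bc) / (1 + beta_ab * beta_bc)  # Einstein addition

print(math.tanh(w_ac))   # ~0.9155, velocity recovered from the summed rapidity
print(beta_ac)           # same value: (0.6 + 0.7) / (1 + 0.42)
```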
As we can see from the Lorentz transformation above, the Lorentz factor identifies with $\cosh w$:
$$\gamma = \frac{1}{\sqrt{1 - \beta^2}} \equiv \cosh w,$$
so the rapidity $w$ is implicitly used as a hyperbolic angle in the Lorentz transformation expressions using $\gamma$ and $\beta$. We relate rapidities to the velocity-addition formula
$$\beta = \frac{\beta_1 + \beta_2}{1 + \beta_1 \beta_2}$$
by recognizing
$$\beta_i = \frac{v_i}{c} = \tanh w_i,$$
and so
$$\tanh w = \frac{\tanh w_1 + \tanh w_2}{1 + \tanh w_1 \tanh w_2} = \tanh(w_1 + w_2).$$
Proper acceleration (the acceleration 'felt' by the object being accelerated) is the rate of change of rapidity with respect to proper time (time as measured by the object undergoing acceleration itself). Therefore, the rapidity of an object in a given frame can be viewed simply as the velocity of that object as would be calculated non-relativistically by an inertial guidance system on board the object itself if it accelerated from rest in that frame to its given speed.
The product of $\beta$ and $\gamma$ appears frequently, and is from the above arguments
$$\beta\gamma = \sinh w.$$
Exponential and logarithmic relations
From the above expressions we have
$$e^{w} = \gamma(1 + \beta) = \gamma\left(1 + \frac{v}{c}\right) = \sqrt{\frac{1 + \beta}{1 - \beta}},$$
and thus
$$w = \ln\left[\gamma(1 + \beta)\right] = -\ln\left[\gamma(1 - \beta)\right],$$
or explicitly
$$w = \frac{1}{2}\ln\frac{1 + \beta}{1 - \beta}.$$
The Doppler-shift factor associated with rapidity $w$ is $k = e^{w}$.
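As a worked numerical example (with the arbitrary choice $\beta = 0.6$):
$$\gamma = \frac{1}{\sqrt{1 - 0.36}} = 1.25, \qquad e^{w} = \gamma(1 + \beta) = 1.25 \times 1.6 = 2 = \sqrt{\frac{1.6}{0.4}}, \qquad w = \ln 2 \approx 0.693,$$
so the associated Doppler-shift factor is $k = e^{w} = 2$.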
In experimental particle physics
The energy $E$ and scalar momentum $|\mathbf{p}|$ of a particle of non-zero (rest) mass $m$ are given by:
$$E = \gamma m c^2, \qquad |\mathbf{p}| = \gamma m v.$$
With the definition of $w$,
$$w = \operatorname{artanh}\frac{v}{c},$$
and thus with
$$\cosh w = \frac{1}{\sqrt{1 - \left(\frac{v}{c}\right)^2}} = \gamma, \qquad \sinh w = \frac{\frac{v}{c}}{\sqrt{1 - \left(\frac{v}{c}\right)^2}} = \beta\gamma,$$
the energy and scalar momentum can be written as:
$$E = m c^2 \cosh w, \qquad |\mathbf{p}| = m c \sinh w.$$
So, rapidity can be calculated from measured energy and momentum by
$$w = \operatorname{artanh}\frac{|\mathbf{p}| c}{E} = \frac{1}{2}\ln\frac{E + |\mathbf{p}| c}{E - |\mathbf{p}| c}.$$
However, experimental particle physicists often use a modified definition of rapidity relative to a beam axis
where is the component of momentum along the beam axis. This is the rapidity of the boost along the beam axis which takes an observer from the lab frame to a frame in which the particle moves only perpendicular to the beam. Related to this is the concept of pseudorapidity.
Rapidity relative to a beam axis can also be expressed as
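As a rough illustration of the formulas above, the following Python sketch computes the particle's own rapidity and its beam-axis rapidity. It assumes units with c = 1 (energies and momenta both in GeV); the mass and momentum values are invented for the example and the function names are chosen here:

```python
import math

def rapidity_from_E_p(E, p):
    """Rapidity from total energy and momentum magnitude (units with c = 1)."""
    return 0.5 * math.log((E + p) / (E - p))

def beam_rapidity(E, pz):
    """Rapidity relative to the beam axis, using only the longitudinal momentum."""
    return 0.5 * math.log((E + pz) / (E - pz))

# Illustrative proton-like particle, values in GeV (chosen arbitrarily).
m, pz, pT = 0.938, 5.0, 1.0
p = math.hypot(pz, pT)           # momentum magnitude
E = math.sqrt(m**2 + p**2)       # on-shell energy

print(rapidity_from_E_p(E, p))   # rapidity of the particle itself
print(beam_rapidity(E, pz))      # ~2.0, rapidity along the beam axis
print(math.log((E + pz) / math.sqrt(m**2 + pT**2)))  # same ~2.0 via the transverse-mass form
```

The last two printed values agree, illustrating the equivalence of the two beam-axis expressions.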
| Physical sciences | Theory of relativity | Physics |
18160997 | https://en.wikipedia.org/wiki/Rain%20and%20snow%20mixed | Rain and snow mixed | Rain and snow mixed (American English) or sleet (Commonwealth English) is precipitation composed of a mixture of rain and partially melted snow. Unlike ice pellets, which are hard, and freezing rain, which is fluid until striking an object where it fully freezes, this precipitation is soft and translucent, but it contains some traces of ice crystals from partially fused snowflakes, also called slush. In any one location, it usually occurs briefly as a transition phase from rain to snow or vice-versa, but hits the surface before fully transforming. Its METAR code is RASN or SNRA.
Terminology
This precipitation type is commonly known as sleet in most Commonwealth countries. However, the United States National Weather Service uses the term sleet to refer to ice pellets instead.
Formation
This precipitation occurs when the temperature in the lowest part of the atmosphere is slightly above the freezing point of water. The depth of low-level warm air (below the freezing level) needed to melt snow falling from above into rain depends on the mass of the flakes and the lapse rate of the melting layer. Rain and snow typically mix when the melting layer is deep enough for melting to begin but too shallow for the snowflakes to melt completely before reaching the surface.
"Wintry showers" or "wintry mixes"
Wintry showers is a somewhat informal meteorological term, used primarily in the United Kingdom, to refer to various mixtures of rain, graupel and snow falling at once. Though there is no "official" definition of the term, in the United Kingdom it is not used when there is any significant accumulation of snow on the ground. It is often used when the temperature of the ground surface is above freezing, preventing snow from accumulating even if the air temperature near the surface is slightly below freezing; but even then, the falling precipitation must generally include something other than snow alone.
In the United States, wintry mix generally refers to a mixture of freezing rain, ice pellets, and snow. In contrast to the usage in the United Kingdom, in the United States it is usually used when both air and ground temperatures are below freezing. Additionally, it is generally used when some surface accumulation of ice and snow is expected to occur. During winter, a wide area can be affected by the multiple mixed precipitation types typical of a wintry mix during a single winter storm, as counterclockwise winds around a storm system bring warm air northwards ahead of the system, and then bring cold air back southwards behind it. Most often, it is the region ahead of the approaching storm system which sees the wintry mix, as warm air moves northward and above retreating cold air in a warm front, causing snow to change into ice pellets, freezing rain and finally rain. The reverse transition can occur behind the departing low-pressure system, though it is more common for precipitation to change directly from rain to snow, or for it to stop before a transition back.
| Physical sciences | Precipitation | Earth science |
7827879 | https://en.wikipedia.org/wiki/Tectonite | Tectonite | Tectonites are metamorphic or tectonically deformed rocks whose fabric reflects the history of their deformation, or rocks with fabric that clearly displays coordinated geometric features that indicate continuous solid (ductile) flow during formation. Planar foliation results from a parallel orientation of platey mineral phases such as the phyllosilicates or graphite. Slender prismatic crystals such as amphibole produce a lineation in which these prisms or columnar crystals become aligned.
Tectonites are rocks whose minerals have been affected by natural forces of the Earth, allowing their orientations to change. This usually involves recrystallization of minerals and the formation of foliation. Tectonites are studied through structural analysis, which allows two things to be determined:
The orientation of shearing and compressive stresses during (dynamic) metamorphism
The later (or final) stages of metamorphism
According to the nature of mineral orientation, there are three main groups of tectonites: L-Tectonites, S-Tectonites, and LS-Tectonites. The different types reflect the different ways that material moves during deformation.
L-Tectonites have a dominantly linear fabric, which allows the rock to split into rod-like shapes along two intersecting planes. The foliation of this type is not strong.
S-Tectonites have a dominantly foliated fabric, which allows the rock to split into plate-like sheets parallel to the foliation. Little to no linear fabric occurs within the foliation.
LS-Tectonites combine the two, with both strong lineations and strong foliations. This is caused by the rotation of grains around axes oriented along the trends of the folds.
Classification
S-tectonites (from the German, Schiefer for schist) have a dominant planar fabric and may indicate a flattening type of strain. This may also be due to a lack of minerals capable of giving a lineation e.g. in a phyllonite.
L-tectonites have a dominant linear fabric and generally indicate a constrictional type of strain. This may be due to a lack of platey phases.
L-S tectonites have equally developed linear and planar fabric elements and may indicate a plane strain deformation. Many mylonites are L-S tectonites consistent with a simple shear deformation.
L-Tectonites (Lineations) indicate Constrictional Strain
S-Tectonites (Foliations) indicate Flattening Strain
L-S Tectonites (plane strain) indicate both elongation and flattening strain
| Physical sciences | Metamorphic rocks | Earth science |
3333827 | https://en.wikipedia.org/wiki/Koolasuchus | Koolasuchus | Koolasuchus is an extinct genus of brachyopoid temnospondyl in the family Chigutisauridae. Fossils have been found from Victoria, Australia and date back 125-120 million years ago to Barremian-Aptian stages of the Early Cretaceous. Koolasuchus is the youngest known temnospondyl. It is known from several fragments of the skull and other bones such as vertebrae, ribs, and pectoral elements. The type species Koolasuchus cleelandi was named in 1997. K. cleelandi was adopted as the fossil emblem for the state of Victoria, Australia on 13 January 2022.
History
The first fossil of temnospondyls found in the Strzelecki Group was NMV-PI56988, the posterior fragment of a jaw, collected around 1980. The jaw fragment was first mentioned in a 1986 publication by Anne Warren and R. Jupp, who did not definitively identify it as that of a temnospondyl due to the Cretaceous age of the specimen, much younger than any other known temnospondyl specimen at the time. In 1991, additional remains were reported including NMV-PI86040, an intercentrum (part of the vertebra) and NMV-PI86101, an isolated skull roof bone, likely representing either a frontal, a supratemporal or a parietal. The intercentrum unquestionably confirmed that temnospondyls were present in the Strzelecki Group. The morphology of the skull roof bone lead to the authors suggesting that the temnospondyl was either a member of Plagiosauridae or Brachyopoidea.
Koolasuchus was named in 1997 from the Aptian aged Wonthaggi Formation of Strzelecki Group in Victoria. It is known from four fragments of the lower jaw and several postcranial bones, including ribs, vertebrae, a fibula, and parts of the pectoral girdle. A jawbone was found in 1978 in a fossil site known as the Punch Bowl near the town of San Remo. Later specimens were found in 1989 on the nearby Rowell's Beach. A partial skull is also known but has not been fully prepared. Koolasuchus was named for the palaeontologist Lesley Kool. The name is also a pun on the word "cool" in reference to the cold climate of its environment. The type species K. cleelandi is named after geologist Mike Cleeland.
Description
Koolasuchus was a large, aquatic temnospondyl, measuring up to in length and weighing up to . Like other chigutisaurids, it had a wide, rounded head and tabular horns projecting from the back of the skull. Although represented by incomplete material, the skull was likely long.
Koolasuchus is distinguished from all temnospondyls other than Siderops and Hadrokkosaurus by a mandibular ramus in which the "articular is excluded from the dorsal surface of the postglenoid area by a suture between the surangular and the prearticular", and is distinguished from those two taxa by a lack of coronoid teeth.
Paleobiology
Koolasuchus inhabited rift valleys in southern Australia during the Early Cretaceous. During this time the area was below the Antarctic Circle, and temperatures were relatively cool for the Mesozoic. Based on the coarse-grained rocks in which remains were found, Koolasuchus likely lived in fast-moving streams. As a large aquatic predator, it was similar in lifestyle to crocodilians. Although eusuchians and kin were common during the Early Cretaceous, they were absent from southern Australia 120 million years ago, possibly because of the cold climate. By 110 Mya, represented by rocks in the Dinosaur Cove fossil locality, temperatures had warmed and crocodilians had returned to the area. These crocodilians likely displaced Koolasuchus, leading to its disappearance in younger rocks.
| Biology and health sciences | Prehistoric amphibians | Animals |
3335767 | https://en.wikipedia.org/wiki/Meander | Meander | A meander is one of a series of regular sinuous curves in the channel of a river or other watercourse. It is produced as a watercourse erodes the sediments of an outer, concave bank (cut bank or river cliff) and deposits sediments on an inner, convex bank which is typically a point bar. The result of this coupled erosion and sedimentation is the formation of a sinuous course as the channel migrates back and forth across the axis of a floodplain.
The zone within which a meandering stream periodically shifts its channel is known as a meander belt. It typically ranges from 15 to 18 times the width of the channel. Over time, meanders migrate downstream, sometimes in such a short time as to create civil engineering challenges for local municipalities attempting to maintain stable roads and bridges.
The degree of meandering of the channel of a river, stream, or other watercourse is measured by its sinuosity. The sinuosity of a watercourse is the ratio of the length of the channel to the straight line down-valley distance. Streams or rivers with a single channel and sinuosities of 1.5 or more are defined as meandering streams or rivers.
Origin of term
The term derives from the winding river Menderes located in Asia-Minor and known to the Ancient Greeks as Μαίανδρος Maiandros (Latin: Maeander), characterised by a very convoluted path along the lower reach. As a result, even in Classical Greece (and in later Greek thought) the name of the river had become a common noun meaning anything convoluted and winding, such as decorative patterns or speech and ideas, as well as the geomorphological feature. Strabo said: ‘...its course is so exceedingly winding that everything winding is called meandering.’
The Meander River is south of Izmir, east of the ancient Greek town of Miletus, now Milet, Turkey. It flows through series of three graben in the Menderes Massif, but has a flood plain much wider than the meander zone in its lower reach. Its modern Turkish name is the Büyük Menderes River.
Governing physics
Meanders are a result of the interaction of water flowing through a curved channel with the underlying river bed. This produces helicoidal flow, in which water moves from the outer to the inner bank along the river bed, then flows back to the outer bank near the surface of the river. This in turn increases carrying capacity for sediments on the outer bank and reduces it on the inner bank, so that sediments are eroded from the outer bank and redeposited on the inner bank of the next downstream meander.
When a fluid is introduced to an initially straight channel which then bends, the sidewalls induce a pressure gradient that causes the fluid to alter course and follow the bend. From here, two opposing processes occur: (1) irrotational flow and (2) secondary flow. For a river to meander, secondary flow must dominate.
Irrotational flow: From Bernoulli's equations, high pressure results in low velocity. Therefore, in the absence of secondary flow we would expect low fluid velocity at the outside bend and high fluid velocity at the inside bend. This classic fluid mechanics result is irrotational vortex flow. In the context of meandering rivers, its effects are dominated by those of secondary flow.
Secondary flow: A force balance exists between pressure forces pointing to the inside bend of the river and centrifugal forces pointing to the outside bend of the river. In the context of meandering rivers, a boundary layer exists within the thin layer of fluid that interacts with the river bed. Inside that layer and following standard boundary-layer theory, the velocity of the fluid is effectively zero. Centrifugal force, which depends on velocity, is also therefore effectively zero. Pressure force, however, remains unaffected by the boundary layer. Therefore, within the boundary layer, pressure force dominates and fluid moves along the bottom of the river from the outside bend to the inside bend. This initiates helicoidal flow: Along the river bed, fluid roughly follows the curve of the channel but is also forced toward the inside bend; away from the river bed, fluid also roughly follows the curve of the channel but is forced, to some extent, from the inside to the outside bend.
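The balance described above can be written compactly. The sketch below uses standard fluid-mechanics notation chosen here (flow speed $u$, bend radius $r$, pressure $p$, density $\rho$), none of which appears in the original text:

```latex
% Away from the bed, the bank-to-bank pressure gradient supplies the
% centripetal acceleration of water moving at speed u around a bend of radius r:
\frac{1}{\rho}\frac{\partial p}{\partial r} \;=\; \frac{u^{2}}{r}
% Within the boundary layer the velocity u falls toward zero, so the right-hand
% side collapses while the pressure gradient imposed by the faster flow above is
% nearly unchanged; the unbalanced pressure force pushes near-bed water from the
% outer bank toward the inner bank, producing the secondary (helicoidal) flow.
```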
The higher velocities at the outside bend lead to higher shear stresses and therefore result in erosion. Similarly, lower velocities at the inside bend cause lower shear stresses and deposition occurs. Thus meander bends erode at the outside bend, causing the river to become increasingly sinuous (until cutoff events occur). Deposition at the inside bend occurs such that for most natural meandering rivers, the river width remains nearly constant, even as the river evolves.
In a speech before the Prussian Academy of Sciences in 1926, Albert Einstein suggested that because the Coriolis force of the earth can cause a small imbalance in velocity distribution, such that velocity on one bank is higher than on the other, it could trigger the erosion on one bank and deposition of sediment on the other that produces meanders. However, Coriolis forces are likely insignificant compared with other forces acting to produce river meanders.
Meander geometry
The technical description of a meandering watercourse is termed meander geometry or meander planform geometry. It is characterized as an irregular waveform. Ideal waveforms, such as a sine wave, are one line thick, but in the case of a stream the width must be taken into consideration. The bankfull width is the distance across the bed at an average cross-section at the full-stream level, typically estimated by the line of lowest vegetation.
As a waveform the meandering stream follows the down-valley axis, a straight line fitted to the curve such that the sum of all the amplitudes measured from it is zero. This axis represents the overall direction of the stream.
At any cross-section the flow is following the sinuous axis, the centerline of the bed. Two consecutive crossing points of sinuous and down-valley axes define a meander loop. The meander is two consecutive loops pointing in opposite transverse directions. The distance of one meander along the down-valley axis is the meander length or wavelength. The maximum distance from the down-valley axis to the sinuous axis of a loop is the meander width or amplitude. The course at that point is the apex.
In contrast to sine waves, the loops of a meandering stream are more nearly circular. The curvature varies from a maximum at the apex to zero at a crossing point (straight line), also called an inflection, because the curvature changes direction in that vicinity. The radius of the loop is the straight line perpendicular to the down-valley axis intersecting the sinuous axis at the apex. As the loop is not ideal, additional information is needed to characterize it. The orientation angle is the angle between sinuous axis and down-valley axis at any point on the sinuous axis.
A loop at the apex has an outer or concave bank and an inner or convex bank. The meander belt is defined by an average meander width measured from outer bank to outer bank instead of from centerline to centerline. If there is a flood plain, it extends beyond the meander belt. The meander is then said to be free—it can be found anywhere in the flood plain. If there is no flood plain, the meanders are fixed.
Various mathematical formulae relate the variables of the meander geometry. As it turns out some numerical parameters can be established, which appear in the formulae. The waveform depends ultimately on the characteristics of the flow but the parameters are independent of it and apparently are caused by geologic factors. In general the meander length is 10–14 times, with an average 11 times, the fullbank channel width and 3 to 5 times, with an average of 4.7 times, the radius of curvature at the apex. This radius is 2–3 times the channel width.
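These ratios lend themselves to quick back-of-the-envelope estimates. The following Python sketch is illustrative only: it applies the average factors quoted above, the 20 m bankfull width is an invented example, and real channels scatter widely around these values.

```python
def meander_estimates(bankfull_width_m):
    """Rule-of-thumb meander geometry from the empirical ratios quoted above.

    Uses the average factors (length ~ 11 channel widths, apex radius ~ length / 4.7).
    """
    length = 11.0 * bankfull_width_m   # meander wavelength
    radius = length / 4.7              # radius of curvature at the apex
    return length, radius

# Example: a stream with a 20 m bankfull width.
L, R = meander_estimates(20.0)
print(L)           # 220 m meander length
print(R)           # ~47 m apex radius
print(R / 20.0)    # ~2.3, consistent with the quoted radius of 2-3 channel widths
```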
A meander has a depth pattern as well. The cross-overs are marked by riffles, or shallow beds, while at the apices are pools. In a pool direction of flow is downward, scouring the bed material. The major volume, however, flows more slowly on the inside of the bend where, due to decreased velocity, it deposits sediment.
The line of maximum depth, or channel, is the thalweg or thalweg line. It is typically designated the borderline when rivers are used as political borders. The thalweg hugs the outer banks and returns to center over the riffles. The meander arc length is the distance along the thalweg over one meander. The river length is the length along the centerline.
Formation
Once a channel begins to follow a sinusoidal path, the amplitude and concavity of the loops increase dramatically. This is due to the effect of helical flow which sweeps dense eroded material towards the inside of the bend, and leaves the outside of the bend unprotected and vulnerable to accelerated erosion. This establishes a positive feedback loop. In the words of Elizabeth A. Wood: "...this process of making meanders seems to be a self-intensifying process...in which greater curvature results in more erosion of the bank, which results in greater curvature..."
The cross-current along the floor of the channel is part of the secondary flow and sweeps dense eroded material towards the inside of the bend. The cross-current then rises to the surface near the inside and flows towards the outside, forming the helical flow. The greater the curvature of the bend, and the faster the flow, the stronger is the cross-current and the sweeping.
Due to the conservation of angular momentum the speed on the inside of the bend is faster than on the outside.
Since the flow velocity is diminished, so is the centrifugal pressure. The pressure of the super-elevated column prevails, developing an unbalanced gradient that moves water back across the bottom from the outside to the inside. The flow is supplied by a counter-flow across the surface from the inside to the outside. This entire situation is very similar to the Tea leaf paradox. This secondary flow carries sediment from the outside of the bend to the inside making the river more meandering.
As to why streams of any size become sinuous in the first place, there are a number of theories, not necessarily mutually exclusive.
Stochastic theory
The stochastic theory can take many forms but one of the most general statements is that of Scheidegger: "The meander train is assumed to be the result of the stochastic fluctuations of the direction of flow due to the random presence of direction-changing obstacles in the river path." Given a flat, smooth, tilted artificial surface, rainfall runs off it in sheets, but even in that case adhesion of water to the surface and cohesion of drops produce rivulets at random. Natural surfaces are rough and erodible to different degrees. The result of all the physical factors acting at random is channels that are not straight, which then progressively become sinuous. Even channels that appear straight have a sinuous thalweg that leads eventually to a sinuous channel.
Equilibrium theory
In the equilibrium theory, meanders decrease the stream gradient until an equilibrium between the erodibility of the terrain and the transport capacity of the stream is reached. A mass of water descending must give up potential energy, which, given the same velocity at the end of the drop as at the beginning, is removed by interaction with the material of the stream bed. The shortest distance; that is, a straight channel, results in the highest energy per unit of length, disrupting the banks more, creating more sediment and aggrading the stream. The presence of meanders allows the stream to adjust the length to an equilibrium energy per unit length in which the stream carries away all the sediment that it produces.
Geomorphic and morphotectonic theory
Geomorphic refers to the surface structure of the terrain. Morphotectonic means having to do with the deeper, or tectonic (plate) structure of the rock. The features included under these categories are not random and guide streams into non-random paths. They are predictable obstacles that instigate meander formation by deflecting the stream. For example, the stream might be guided into a fault line (morphotectonic).
Associated landforms
Cut bank
A cut bank is an often vertical bank or cliff that forms where the outside, concave bank of a meander cuts into the floodplain or valley wall of a river or stream. A cutbank is also known either as a river-cut cliff, river cliff, or a bluff and spelled as cutbank. Erosion that forms a cut bank occurs at the outside bank of a meander because helicoidal flow of water keeps the bank washed clean of loose sand, silt, and sediment and subjects it to constant erosion. As a result, the meander erodes and migrates in the direction of the outside bend, forming the cut bank.
As the cut bank is undermined by erosion, it commonly collapses as slumps into the river channel. The slumped sediment, having been broken up by slumping, is readily eroded and carried toward the middle of the channel. The sediment eroded from a cut bank tends to be deposited on the point bar of the next downstream meander, and not on the point bar opposite it. This can be seen in areas where trees grow on the banks of rivers; on the inside of meanders, trees, such as willows, are often far from the bank, whilst on the outside of the bend, the tree roots are often exposed and undercut, eventually leading the trees to fall into the river.
Meander cutoff
A meander cutoff, also known as either a cutoff meander or abandoned meander, is a meander that has been abandoned by its stream after the formation of a neck cutoff. A lake that occupies a cutoff meander is known as an oxbow lake. Cutoff meanders that have cut downward into the underlying bedrock are known in general as incised cutoff meanders. As in the case of the Anderson Bottom Rincon, incised meanders that have either steep-sided, often vertical walls, are often, but not always, known as rincons in the southwest United States. Rincon in English is a nontechnical word in the southwest United States for either a small secluded valley, an alcove or angular recess in a cliff, or a bend in a river.
Incised meanders
The meanders of a stream or river that has cut its bed down into the bedrock are known as either incised, intrenched, entrenched, inclosed or ingrown meanders. Some Earth scientists recognize and use a finer subdivision of incised meanders. Thornbury argues that incised or inclosed meanders are synonyms that are appropriate to describe any meander incised downward into bedrock and defines enclosed or entrenched meanders as a subtype of incised meanders (inclosed meanders) characterized by symmetrical valley sides. He argues that the symmetrical valley sides are the direct result of rapid down-cutting of a watercourse into bedrock. In addition, as proposed by Rich, Thornbury argues that incised valleys with a pronounced asymmetry of cross section, which he called ingrown meanders, are the result of the lateral migration and incision of a meander during a period of slower channel downcutting. Regardless, the formation of both entrenched meanders and ingrown meanders is thought to require that base level falls as a result of either relative change in mean sea level, isostatic or tectonic uplift, the breach of an ice or landslide dam, or regional tilting. Classic examples of incised meanders are associated with rivers in the Colorado Plateau, the Kentucky River Palisades in central Kentucky, and streams in the Ozark Plateau.
As noted above, it was initially either argued or presumed that an incised meander is characteristic of an antecedent stream or river that had incised its channel into underlying strata. An antecedent stream or river is one that maintains its original course and pattern during incision despite the changes in underlying rock topography and rock types. However, later geologists argue that the shape of an incised meander is not always, if ever, "inherited", e.g., strictly from an antecedent meandering stream where its meander pattern could freely develop on a level floodplain. Instead, they argue that as fluvial incision of bedrock proceeds, the stream course is significantly modified by variations in rock type and fractures, faults, and other geological structures into either lithologically conditioned meanders or structurally controlled meanders.
Oxbow lakes
The oxbow lake, which is the most common type of fluvial lake, is a crescent-shaped lake that derives its name from its distinctive curved shape. Oxbow lakes are also known as cutoff lakes. Such lakes form regularly in undisturbed floodplains as a result of the normal process of fluvial meandering. Either a river or stream forms a sinuous channel as the outer side of its bends are eroded away and sediments accumulate on the inner side, which forms a meandering horseshoe-shaped bend. Eventually as the result of its meandering, the fluvial channel cuts through the narrow neck of the meander and forms a cutoff meander. The final break-through of the neck, which is called a neck cutoff, often occurs during a major flood because that is when the watercourse is out of its banks and can flow directly across the neck and erode it with the full force of the flood.
After a cutoff meander is formed, river water flowing into its ends during floods builds small delta-like features into either end of it. These delta-like features block either end of the cutoff meander to form a stagnant oxbow lake that is separated from the flow of the fluvial channel and independent of the river. During floods, the flood waters deposit fine-grained sediment into the oxbow lake. As a result, oxbow lakes tend to become filled in with fine-grained, organic-rich sediments over time.
Point bar
A point bar, which is also known as a meander bar, is a fluvial bar that is formed by the slow, often episodic, addition of individual accretions of noncohesive sediment on the inside bank of a meander by the accompanying migration of the channel toward its outer bank. This process is called lateral accretion. Lateral accretion occurs mostly during high water or floods when the point bar is submerged. Typically, the sediment consists of either sand, gravel, or a combination of both. The sediment comprising some point bars might grade downstream into silty sediments. Because of the decreasing velocity and strength of current from the thalweg of the channel to the upper surface of the point bar when the sediment is deposited, the vertical sequence of sediments comprising a point bar becomes finer upward within an individual point bar. For example, it is typical for point bars to fine upward from gravel at the base to fine sands at the top. The source of the sediment is typically upstream cut banks from which sand, rocks and debris have been eroded, swept, and rolled across the bed of the river and downstream to the inside bank of a river bend. On the inside bend, this sediment and debris is eventually deposited on the slip-off slope of a point bar.
Scroll-bars
Scroll-bars are a result of continuous lateral migration of a meander loop that creates an asymmetrical ridge and swale topography on the inside of the bends. The topography is generally parallel to the meander, and is related to migrating bar forms and back bar chutes, which carve sediment from the outside of the curve and deposit sediment in the slower flowing water on the inside of the loop, in a process called lateral accretion. Scroll-bar sediments are characterized by cross-bedding and a pattern of fining upward. These characteristics are a result of the dynamic river system, where larger grains are transported during high-energy flood events that then gradually die down, depositing smaller material with time (Batty 2006). Deposits of meandering rivers are generally homogeneous and laterally extensive, unlike the more heterogeneous braided river deposits. There are two distinct patterns of scroll-bar deposition: the eddy accretion scroll bar pattern and the point-bar scroll pattern. When looking down the river valley they can be distinguished because the point-bar scroll patterns are convex and the eddy accretion scroll bar patterns are concave.
Scroll bars often look lighter at the tops of the ridges and darker in the swales. This is because the tops can be shaped by wind, either adding fine grains or by keeping the area unvegetated, while the darkness in the swales can be attributed to silts and clays washing in during high water periods. This added sediment, along with the water that catches in the swales, in turn provides a favorable environment for vegetation, which also accumulates in the swales.
Slip-off slope
Depending upon whether a meander is part of an entrenched river or part of a freely meandering river within a floodplain, the term slip-off slope can refer to two different fluvial landforms that comprise the inner, convex, bank of a meander loop. In case of a freely meandering river on a floodplain, a slip-off slope is the inside, gently sloping bank of a meander on which sediments episodically accumulate to form a point bar as a river meanders. This type of slip-off slope is located opposite the cutbank. This term can also be applied to the inside, sloping bank of a meandering tidal channel.
In case of an entrenched river, a slip-off slope is a gently sloping bedrock surface that rises from the inside, concave bank of an asymmetrically entrenched river. This type of slip-off slope is often covered by a thin, discontinuous layer of alluvium. It is produced by the gradual outward migration of the meander as a river cuts downward into bedrock. A terrace on the slip-off slope of a meander spur, known as slip-off slope terrace, can be formed by a brief halt during the irregular incision by an actively meandering river.
Derived quantities
The meander ratio or sinuosity index is a means of quantifying how much a river or stream meanders (how much its course deviates from the shortest possible path). It is calculated as the length of the stream divided by the length of the valley. A perfectly straight river would have a meander ratio of 1 (it would be the same length as its valley), while the higher this ratio is above 1, the more the river meanders.
Sinuosity indices are calculated from the map or from an aerial photograph measured over a distance called the reach, which should be at least 20 times the average fullbank channel width. The length of the stream is measured by channel, or thalweg, length over the reach, while the bottom value of the ratio is the downvalley length or air distance of the stream between two points on it defining the reach.
The sinuosity index plays a part in mathematical descriptions of streams. The index may require elaboration, because the valley may meander as well—i.e., the downvalley length is not identical to the reach. In that case the valley index is the meander ratio of the valley while the channel index is the meander ratio of the channel. The channel sinuosity index is the channel length divided by the valley length and the standard sinuosity index is the channel index divided by the valley index. Distinctions may become even more subtle.
Sinuosity Index has a non-mathematical utility as well. Streams can be placed in categories arranged by it; for example, when the index is between 1 and 1.5 the river is sinuous, but if between 1.5 and 4, then meandering. The index is a measure also of stream velocity and sediment load, those quantities being maximized at an index of 1 (straight).
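A minimal sketch of the calculation and the classification just described, in Python; the reach lengths used in the example are invented for illustration:

```python
def sinuosity_index(channel_length, downvalley_length):
    """Sinuosity = thalweg (channel) length divided by straight down-valley length."""
    return channel_length / downvalley_length

def classify(si):
    """Coarse classification using the thresholds quoted in the text (1-1.5 sinuous, 1.5-4 meandering)."""
    if si <= 1.5:
        return "sinuous"
    elif si <= 4:
        return "meandering"
    return "above the quoted range"

# Example reach: 4.8 km of channel covering 3.0 km of straight down-valley distance.
si = sinuosity_index(4.8, 3.0)
print(si, classify(si))   # 1.6 meandering
```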
| Physical sciences | Fluvial landforms | null |
3338527 | https://en.wikipedia.org/wiki/Cyclostomi | Cyclostomi | Cyclostomi, often referred to as Cyclostomata , is a group of vertebrates that comprises the living jawless fishes: the lampreys and hagfishes. Both groups have jawless mouths with horny epidermal structures that function as teeth called ceratodontes, and branchial arches that are internally positioned instead of external as in the related jawed fishes. The name Cyclostomi means "round mouths". It was named by Joan Crockford-Beattie.
Possible external relationships
This taxon is often included in the paraphyletic superclass Agnatha, which also includes several groups of extinct armored fishes called ostracoderms. Most fossil agnathans, such as galeaspids, thelodonts, and osteostracans, are more closely related to vertebrates with jaws (called gnathostomes) than to cyclostomes.
Biologists historically disagreed on whether cyclostomes are a clade. The "vertebrate hypothesis" held that lampreys are more closely related to gnathostomes than they are to the hagfish. The "cyclostome hypothesis", on the other hand, holds that lampreys and hagfishes are more closely related, making cyclostomi monophyletic.
Most studies based on anatomy have supported the vertebrate hypothesis, while most molecular phylogenies have supported the cyclostome hypothesis.
There are exceptions in both cases, however. Similarities in the cartilage and muscles of the tongue apparatus also provide evidence of sister-group relationship between lampreys and hagfishes. At least one molecular phylogeny has supported the vertebrate hypothesis. The embryonic development of hagfishes was once held to be drastically different from that of lampreys and gnathostomes, but recent evidence suggests that it is more similar than previously thought, which may remove an obstacle to the cyclostome hypothesis.
Several groups of Palaeozoic jawless fish have been suggested to be more closely related to cyclostomes than to jawed fish, including conodonts and anaspids. The presence of mineralised elements in these jawless fish, like the oral conodont elements and the armoured body covering of anaspids and scutes on other species like Lasanius suggests that mineralised tissues were present in the last common ancestor of all vertebrates, but were secondarily lost in hagfish and lampreys.
Internal differences and similarities
Both hagfishes and lampreys have a single gonad, but for different reasons. In hagfishes the left gonad degenerates during their ontogeny and only the right gonad develops, whereas in lampreys the left and right gonads fuse into one. There are no gonoducts present.
Hagfishes have direct development, but lamprey go through a larval stage followed by metamorphosis into a juvenile form (or adult form in the non-parasitic species). Lamprey larvae live in freshwater and are called ammocoetes, and are the only vertebrates with an endostyle, an organ used for filter feeding that is otherwise found only in tunicates and lancelets. During metamorphosis the lamprey endostyle develops into the thyroid gland.
The Cyclostomi evolved oxygen transport hemoglobins independently from the jawed vertebrates.
Hagfishes and lampreys lack a thymus, spleen, myelin and sympathetic chain ganglia. Neither species has internal eye muscles and hagfishes also lack external eye muscles. Both groups have only a single olfactory organ with a single nostril. The nasal duct ends blindly in a pouch in lampreys but opens into the pharynx in hagfishes. The branchial basket (reduced in hagfishes) is attached to the cranium.
The common ancestor of both cyclostomes and gnathostomes went through a genome duplication before their split, and while a second genome duplication occurred in the stem-gnathostomes, the stem-cyclostomes experienced an independent genome triplication.
The mouth apparatus in hagfishes and adult lampreys shows some similarities, but the two differ from one another. Lampreys have tooth plates on the top of a tongue-like piston cartilage, while hagfishes have a fixed cartilaginous plate on the floor of the mouth with grooves that allow tooth plates to slide backwards and forwards over it like a conveyor belt; the tooth plates are everted as they move over the edge of the plate. Hagfishes also have a keratinous palatine tooth hanging from the roof of the mouth.
Unlike jawed vertebrates, which have three semicircular canals in each inner ear, lampreys have only two and hagfishes just one. The semicircular canal of hagfishes contains both stereocilia and a second class of hair cells, apparently a derived trait, whereas lampreys and other vertebrates have stereocilia only. Because the inner ear of hagfishes has two forms of sensory ampullae, their single semicircular canal is assumed to be a result of two semicircular canals that have merged into just one.
Hagfish blood is isotonic with seawater, while lampreys appear to use the same gill-based mechanisms of osmoregulation as marine teleosts. Yet the same mechanisms are apparent in the mitochondria-rich cells of the hagfish gill epithelia, although hagfishes never develop the ability to regulate the blood's salinity, even though they are capable of regulating the concentrations of Ca and Mg ions. It has been suggested that hagfish ancestors evolved from an anadromous or freshwater species that has since adapted to saltwater over a very long time, resulting in higher electrolyte levels in its blood.
The lamprey intestine has a typhlosole that increases the inner surface like the spiral valve does in some jawed vertebrates. The spiral valve in the latter develops by twisting the whole gut, while the lamprey typhlosole is confined to the mucous membrane of the intestines. The mucous membranes of hagfishes have a primitive typhlosole in the form of permanent zigzag ridges. This trait could be a primitive one, since it is also found in some sea squirts such as Ciona. The intestinal epithelia of lampreys also have ciliated cells, which have not been detected in hagfishes. Because ciliated intestines are also found in Chondrostei, lungfishes and the early stages of some teleosts, it is considered a primitive condition that has been lost in hagfishes.
Phylogeny
After Miyashita et al. 2019.
| Biology and health sciences | Agnatha | Animals |
1152886 | https://en.wikipedia.org/wiki/Urine%20test | Urine test | A urine test is any medical test performed on a urine specimen. The analysis of urine is a valuable diagnostic tool because its composition reflects the functioning of many body systems, particularly the kidneys and urinary system, and specimens are easy to obtain. Common urine tests include the routine urinalysis, which examines the physical, chemical, and microscopic properties of the urine; urine drug screening; and urine pregnancy testing.
Background
The value of urine for diagnostic purposes has been recognized since ancient times. Urine examination was practiced in Sumer and Babylonia as early as 4000 BC, and is described in ancient Greek and Sanskrit texts. Contemporary urine testing uses a range of methods to investigate the physical and biochemical properties of the urine. For instance, the results of the routine urinalysis can provide information about the functioning of the kidneys and urinary system; suggest the presence of a urinary tract infection (UTI); and screen for possible diabetes or liver disease, among other conditions. A urine culture can be performed to identify the bacterial species involved in a UTI. Simple point-of-care tests can detect pregnancy by identifying the presence of beta-hCG in the urine and indicate the use of recreational drugs by detecting excreted drugs or their metabolites. Analysis of abnormal cells in urine (urine cytology) can help to diagnose some cancers, and testing for organic acids or amino acids in urine can be used to screen for some genetic disorders.
Specimen collection
The techniques used to collect urine specimens vary based on the desired test. A random urine, meaning a specimen that is collected at any time, can be used for many tests. However, a sample collected during the first urination of the morning (first morning specimen) is preferred for tests like urinalysis and pregnancy screening because it is typically more concentrated, making the test more sensitive. Because the concentration of many substances in the urine varies throughout the day, some tests require timed urine collections, in which the patient collects all of their urine into a container for a given period of time (commonly 24 hours). A small amount of the specimen is then removed for testing. Timed collections are commonly used to measure creatinine, urea, urine protein, hormones and electrolytes.
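As an example of why timed collections matter, creatinine clearance is computed from a timed urine specimen together with a blood sample. The sketch below uses the standard clearance formula; the function name and the numeric values are invented for illustration and are not from this article:

```python
def creatinine_clearance(urine_cr_mg_dl, urine_volume_ml, plasma_cr_mg_dl, minutes=24 * 60):
    """Creatinine clearance in mL/min from a timed (here 24-hour) urine collection.

    Standard formula: (urine creatinine x urine flow rate) / plasma creatinine,
    with both creatinine concentrations in the same units (mg/dL here).
    """
    urine_flow_ml_per_min = urine_volume_ml / minutes
    return urine_cr_mg_dl * urine_flow_ml_per_min / plasma_cr_mg_dl

# Illustrative values only: 1,440 mL of urine over 24 h,
# urine creatinine 100 mg/dL, plasma creatinine 1.0 mg/dL.
print(creatinine_clearance(100.0, 1440.0, 1.0))   # 100.0 mL/min
```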
If urine is needed for microbiological culture, it is important that the sample is not contaminated. In this case, the proper collection procedure involves cleaning the genital area, beginning to urinate into the toilet, and then filling the specimen container before completing the urination into the toilet. This is called a "midstream clean catch" collection. Research has shown many women are unsure of how to take a midstream sample or why it is needed.
If the subject is not able to urinate voluntarily, samples can be obtained using a urinary catheter or by inserting a needle through the abdomen and into the bladder (suprapubic aspiration). In infants and young children, urine can be collected into a bag attached to the genital region, but this is associated with a high risk of contamination.
Types
Some examples of urine tests include:
Chemistry
Urinalysis — assessment of the visual properties of the urine, chemical evaluation using urine test strips, and microscopic examination
Urine creatinine, creatinine clearance — used to assess kidney function
Albumin/creatinine ratio — used to diagnose microalbuminuria
Urine osmolality — measure of the solute concentration of urine
Urine specific gravity ― another measure of urine concentration
Urine electrolyte levels — measurement of electrolytes such as sodium and potassium in urine
Urine anion gap — used to distinguish between some causes of metabolic acidosis (a worked example follows this list)
Hormones
Urine pregnancy test ― detects human chorionic gonadotropin in urine
Urine cortisol ― used to investigate disorders of the adrenal glands
Urine metanephrines ― used to help diagnose some rare tumours
Microbiology
Urine culture — microbiological culture of urine samples, used to identify bacteria causing urinary tract infections
Miscellaneous
Urine drug screen — screen for usage of recreational drugs
Urine cytology — cytopathological examination of cells in the urine, used to screen for cancer
Urine protein electrophoresis — classification and measurement of different proteins in the urine; used to help diagnose monoclonal gammopathies
Urine organic acids, urine amino acids — used to test for some inborn errors of metabolism
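One of the chemistry tests listed above, the urine anion gap, is a simple calculation from measured urinary electrolytes. This is a minimal illustrative sketch using the standard definition; the values and function name are invented and interpretation always requires clinical context:

```python
def urine_anion_gap(na_mmol_l, k_mmol_l, cl_mmol_l):
    """Urine anion gap = Na+ + K+ - Cl-, all in mmol/L.

    A negative gap suggests intact renal ammonium excretion (e.g. acidosis from
    gastrointestinal losses); a positive gap points toward impaired renal acid
    excretion such as distal renal tubular acidosis.
    """
    return na_mmol_l + k_mmol_l - cl_mmol_l

# Invented example values, mmol/L.
print(urine_anion_gap(40, 30, 90))   # -20: consistent with extra-renal acid loss
print(urine_anion_gap(60, 30, 50))   # +40: consistent with impaired renal acid excretion
```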
| Biology and health sciences | Diagnostics | Health |
1152896 | https://en.wikipedia.org/wiki/Side%20chain | Side chain | In organic chemistry and biochemistry, a side chain is a chemical group that is attached to a core part of the molecule called the "main chain" or backbone. The side chain is a hydrocarbon branching element of a molecule that is attached to a larger hydrocarbon backbone. It is one factor in determining a molecule's properties and reactivity. A side chain is also known as a pendant chain, but a pendant group (side group) has a different definition.
Conventions
The placeholder R is often used as a generic placeholder for alkyl (saturated hydrocarbon) group side chains in chemical structure diagrams. To indicate other non-carbon groups in structure diagrams, X, Y, or Z are often used.
History
The R symbol was introduced by 19th-century French chemist Charles Frédéric Gerhardt, who advocated its adoption on the grounds that it would be widely recognizable and intelligible given its correspondence in multiple European languages to the initial letter of "root" or "residue": French ("root") and ("residue"), these terms' respective English translations along with radical (itself derived from Latin below), Latin ("root") and ("residue"), and German ("remnant" and, in the context of chemistry, both "residue" and "radical").
Usage
Organic chemistry
In polymer science, the side chain of an oligomeric or polymeric offshoot extends from the backbone chain of a polymer. Side chains have noteworthy influence on a polymer's properties, mainly its crystallinity and density. An oligomeric branch may be termed a short-chain branch, and a polymeric branch may be termed a long-chain branch. Side groups are different from side chains; they are neither oligomeric nor polymeric.
Biochemistry
In proteins, which are composed of amino acid residues, the side chains are attached to the alpha-carbon atoms of the amide backbone. The side chain connected to the alpha-carbon is specific for each amino acid and is responsible for determining charge and polarity of the amino acid. The amino acid side chains are also responsible for many of the interactions that lead to proper protein folding and function. Amino acids with similar polarity are usually attracted to each other, while nonpolar and polar side chains usually repel each other. Nonpolar/polar interactions can still play an important part in stabilizing the secondary structure due to the relatively large amount of them occurring throughout the protein. Spatial positions of side-chain atoms can be predicted based on protein backbone geometry using computational tools for side-chain reconstruction.
| Physical sciences | Concepts_2 | Chemistry |
1154940 | https://en.wikipedia.org/wiki/Sprat | Sprat | Sprat is the common name applied to a group of forage fish belonging to the genus Sprattus in the family Clupeidae. The term also is applied to a number of other small sprat-like forage fish (Clupeoides, Clupeonella, Corica, Ehirava, Hyperlophus, Microthrissa, Nannothrissa, Platanichthys, Ramnogaster, Rhinosardinia, and Stolothrissa). Like most forage fishes, sprats are highly active, small, oily fish. They travel in large schools with other fish and swim continuously throughout the day.
They are recognized for their nutritional value, as they contain high levels of polyunsaturated fats, considered beneficial to the human diet. They are eaten in many places around the world. Sprats are sometimes passed off as other fish: products sold as anchovies (since the 19th century) and others sold as sardines have sometimes been prepared from sprats, as the authentic fish were once less accessible. They are known for their smooth flavour and are easy to mistake for baby sardines.
Species
True sprats
True sprats belong to the genus Sprattus in the family Clupeidae. The five species are:
* Type species
Other sprats
The term also is commonly applied to a number of other small sprat-like forage fish that share characteristics of the true sprat. Apart from the true sprats, FishBase lists another 48 species whose common names ends with "sprat". Some examples are:
Characteristics
The average length of time from fertilization to hatching is about 15 days, with environmental factors playing a major role in the size and overall success of the sprat. The development of young larval sprat and reproductive success of the sprat have been largely influenced by environmental factors. Some of these factors affecting the sprat can be seen in the Baltic Sea, where specific gravity, water temperature, depth, and other such factors play a role in their success.
In recent decades the number of sprat has fluctuated, due primarily to the availability of zooplankton, a common food source, and also to overall changes in total Clupeidae abundance. Although the overall survival rates of the sprat decreased in the late 1980s and early 1990s, there has been a subsequent increase. Recent studies suggesting a progression in the reproductive success of the sprat acknowledge that a significant increase in spawning stock biomass occurred. One of the main concerns for the reproductive success of the sprat is exceedingly cold winters, as cold temperatures, especially in the Baltic Sea, have been known to affect the development of sprat eggs and larvae.
The metabolic rate of the sprat is highly influenced by environmental factors such as water temperature. Several related fish, such as the Atlantic herring (C. harengus), have much lower metabolic rates than that of the sprat. Some of the difference may be due to size differences among the related species, but the most important reason for high levels of metabolism for the sprat is their exceedingly high level of activity throughout the day.
Distribution
Fish of the different species of sprat are found in various parts of the world including New Zealand, Australia, and parts of Europe. By far, the most highly studied location where sprat, most commonly Sprattus sprattus, reside is the Baltic Sea in Northern Europe. The Baltic Sea provides the sprat with a highly diverse environment, with spatial and temporal potential allowing for successful reproduction.
One of the most well-known locations in the Baltic Sea where they forage for their food is the Bornholm Basin, in the southern portion of the Baltic Sea. Although the Baltic Sea has undergone several ecological changes during the last two decades, the sprat has dramatically increased in population. One of the environmental changes that has occurred in the Baltic Sea since the 1980s is a decrease in water salinity, due to a lack of inflow of water from the North Sea, which has a high salinity and oxygen content.
Ecology
In the Baltic Sea, cod, herring, and sprat are considered the most important species. Cod is the top predator, while herring and sprat are primarily prey. This has been demonstrated by many studies that analyze the stomach contents of these fish, often finding contents that clearly signify predation among the species. Although cod primarily feed on adult sprat, sprat tend to feed on cod before the cod have fully developed: the sprat prey on cod eggs and larvae. Furthermore, sprat and herring compete strongly for the same resources available to them. This is most apparent in the vertical migration of the two species in the Baltic Sea, where they compete for the limited zooplankton that is available and necessary for their survival.
Sprats are highly selective in their diet and are strict zooplanktivores that do not change their diet as their size increases, like some herring, but include only zooplankton in their diet. They eat various species of zooplankton in accordance to changes in the environment, as temperature and other such factors affect the availability of their food.
During autumn, sprats tend to have a diet high in Temora longicornis and Bosmina maritima. During the winter, their diet includes Pseudocalanus elongatus. Pseudocalanus is a genus of the order Calanoida and subclass Copepoda that is important to the predation and diet of fish in the Baltic Sea.
In both autumn and winter, a tendency exists for sprats to avoid eating Acartia spp., because they tend to be very small in size and have a high escape response to predators such as the herring and sprat. Although Acartia spp. may be present in large numbers, they also tend to dwell more toward the surface of the water, whereas the sprats, especially during the day, tend to dwell in deeper waters.
Fisheries
As food
In Northern Europe, European sprats are commonly smoked and preserved in oil, which retains a strong, smoky flavor.
Sprat, if smoked, is considered to be one of the foods highest in purine content.
Sprats contain long-chain polyunsaturated fatty acids, including eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). They are present in amounts comparable to Atlantic salmon, and up to seven times higher in EPA and DHA than common fresh fillets of gilt-head bream. The sprats contain about 1.43 g/100 g of these polyunsaturated fatty acids that have been found to help prevent mental, neural, and cardiovascular diseases.
| Biology and health sciences | Clupeiformes | null |
1156703 | https://en.wikipedia.org/wiki/Respirator | Respirator | A respirator is a device designed to protect the wearer from inhaling hazardous atmospheres including lead fumes, vapors, gases and particulate matter such as dusts and airborne pathogens such as viruses. There are two main categories of respirators: the air-purifying respirator, in which respirable air is obtained by filtering a contaminated atmosphere, and the air-supplied respirator, in which an alternate supply of breathable air is delivered. Within each category, different techniques are employed to reduce or eliminate noxious airborne contaminants.
Air-purifying respirators range from relatively inexpensive, single-use, disposable face masks, known as filtering facepiece respirators, reusable models with replaceable cartridges called elastomeric respirators, to powered air-purifying respirators (PAPR), which use a pump or fan to constantly move air through a filter and supply purified air into a mask, helmet or hood.
History
Earliest records to 19th century
The history of protective respiratory equipment can be traced back as far as the first century, when Pliny the Elder (AD 23–79) described using animal bladder skins to protect workers in Roman mines from red lead oxide dust. In the 16th century, Leonardo da Vinci suggested that a finely woven cloth dipped in water could protect sailors from a toxic weapon made of powder that he had designed.
Alexander von Humboldt introduced a primitive respirator in 1799 when he worked as a mining engineer in Prussia.
Julius Jeffreys first used the word "respirator" for a mask in 1836.
In 1848, the first US patent for an air-purifying respirator was granted to Lewis P. Haslett for his 'Haslett's Lung Protector,' which filtered dust from the air using one-way clapper valves and a filter made of moistened wool or a similar porous substance. Hutson Hurd patented a cup-shaped mask in 1879 which became widespread in industrial use.
Inventors in Europe included John Stenhouse, a Scottish chemist, who investigated the power of charcoal in its various forms, to capture and hold large volumes of gas. He built one of the first respirators able to remove toxic gases from the air, paving the way for activated charcoal to become the most widely used filter for respirators. Irish physicist John Tyndall took Stenhouse's mask, added a filter of cotton wool saturated with lime, glycerin, and charcoal, and in 1871 invented a 'fireman's respirator', a hood that filtered smoke and gas from air, which he exhibited at a meeting of the Royal Society in London in 1874. Also in 1874, Samuel Barton patented a device that 'permitted respiration in places where the atmosphere is charged with noxious gases, or vapors, smoke, or other impurities.'
In the late 19th century, Miles Philips began using a "mundebinde" ("mouth bandage") of sterilized cloth which he refined by adapting a chloroform mask with two layers of cotton mull.
20th century
World War I
United States
In the 1970s, the successor to the United States Bureau of Mines and NIOSH developed standards for single-use respirators, and the first single-use respirator was developed by 3M and approved in 1972. 3M used a melt blowing process that it had developed decades prior and used in products such as ready-made ribbon bows and bra cups; its use in a wide array of products had been pioneered by designer Sara Little Turnbull.
1990s
21st century
Continuing mesothelioma litigation
NIOSH certifies B Readers, people qualified to testify or provide evidence in mesothelioma personal injury lawsuits, in addition to regulating respirators. Since 2000, however, the growing scope of mesothelioma claims has come to include respirator manufacturers, amounting to some 325,000 cases, despite respirators being intended primarily to prevent asbestos- and silica-related diseases. Most of these cases were not successful, or reached settlements of around $1,000 per litigant, well below the cost of mesothelioma treatment.
One reason is that respirator manufacturers are not allowed to modify a respirator once it has been certified by NIOSH. In one case, a jury ruled against 3M over a respirator that was initially approved for asbestos but was disapproved soon afterward when OSHA permissible exposure limits for asbestos changed. Combined with testimony that the plaintiff rarely wore a respirator around asbestos, the lack of evidence, and the limitation of liability from the static NIOSH approval, the verdict was overturned.
Nonetheless, the costs of litigation reduced the margins on respirators, which was blamed for shortages of N95 respirators ahead of anticipated pandemics, such as avian influenza, during the 2000s.
2020
China normally makes 10 million masks per day, about half of the world production. During the COVID-19 pandemic, 2,500 factories were converted to produce 116 million daily.
During the COVID-19 pandemic, people in the United States and in many other countries were urged to make their own cloth masks because of the widespread shortage of commercial masks.
2024
Summary of modern respirators
All respirators have some type of facepiece held to the wearer's head with straps, a cloth harness, or some other method. Facepieces come in many different styles and sizes to accommodate all types of face shapes.
A full facepiece covers the mouth, nose and eyes and, if sealed, is sealed around the perimeter of the face. Unsealed versions may be used when air is supplied at a rate that prevents ambient gas from reaching the nose or mouth during inhalation.
Respirators can have half-face forms that cover the bottom half of the face including the nose and mouth, and full-face forms that cover the entire face. Half-face respirators are only effective in environments where the contaminants are not toxic to the eyes or facial area.
An escape respirator may have no component that would normally be described as a mask, and may use a bite-grip mouthpiece and nose clip instead. Alternatively, an escape respirator could be a time-limited self-contained breathing apparatus.
For hazardous environments, like confined spaces, atmosphere-supplying respirators, like SCBAs, should be used.
A wide range of industries use respirators including healthcare & pharmaceuticals, defense & public safety services (defense, firefighting & law enforcement), oil and gas industries, manufacturing (automotive, chemical, metal fabrication, food and beverage, wood working, paper and pulp), mining, construction, agriculture and forestry, cement production, power generation, painting, shipbuilding, and the textile industry.
Respirators require user training in order to provide proper protection.
Use
User seal check
Each time a wearer dons a respirator, they must perform a seal check to be sure that they have an airtight seal to the face so that air does not leak around the edges of the respirator. (PAPR respirators may not require this because they don't necessarily seal to the face.) This check is different than the periodic fit test that is performed using testing equipment. Filtering facepiece respirators are typically checked by cupping the hands over the facepiece while exhaling (positive pressure check) or inhaling (negative pressure check) and observing any air leakage around the facepiece. Elastomeric respirators are checked in a similar manner, except the wearer blocks the airways through the inlet valves (negative pressure check) or exhalation valves (positive pressure check) while observing the flexing of the respirator or air leakage. Manufacturers have different methods for performing seal checks and wearers should consult the specific instructions for the model of respirator they are wearing. Some models of respirators or filter cartridges have special buttons or other mechanisms built into them to facilitate seal checks.
Fit testing
Contrast with surgical mask
Surgical N95
Respirator selection
Air-purifying respirators are respirators that draw in the surrounding air and purify it before it is breathed (unlike air-supplying respirators, which are sealed systems, with no air intake, like those used underwater). Air-purifying respirators filter particulates, gases, and vapors from the air, and may be negative-pressure respirators driven by the wearer's inhalation and exhalation, or positive-pressure units such as powered air-purifying respirators (PAPRs).
According to the NIOSH Respirator Selection Logic, air-purifying respirators are recommended for concentrations of hazardous particulates or gases that are greater than the relevant occupational exposure limit but less than both the immediately dangerous to life or health (IDLH) level and the manufacturer's maximum use concentration, subject to the respirator having a sufficient assigned protection factor. For substances hazardous to the eyes, a respirator equipped with a full facepiece, helmet, or hood is recommended. Air-purifying respirators are not effective during firefighting, in oxygen-deficient atmospheres, or in unknown atmospheres; in these situations a self-contained breathing apparatus is recommended instead.
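The selection criteria above can be expressed as a simple hazard-ratio check. The Python sketch below is illustrative only: the Respirator record, the helper name, and all numeric values are assumptions, not part of the NIOSH Respirator Selection Logic itself; the maximum use concentration is taken as the assigned protection factor times the exposure limit, as described above.

from dataclasses import dataclass

@dataclass
class Respirator:
    name: str
    apf: float  # assigned protection factor

def air_purifying_ok(concentration, oel, idlh, respirator):
    """Rough screen based on the selection criteria described above.
    concentration, oel, and idlh must share the same units (e.g. mg/m^3)."""
    if concentration <= oel:
        return False            # exposure below the limit; respirator not indicated
    if concentration >= idlh:
        return False            # IDLH conditions call for a self-contained breathing apparatus
    muc = respirator.apf * oel  # maximum use concentration
    return concentration < muc

# Hypothetical example: a half-facepiece air-purifying respirator with APF 10
print(air_purifying_ok(concentration=3.0, oel=1.0, idlh=50.0,
                       respirator=Respirator("half-facepiece APR", apf=10)))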
Types of filtration
Mechanical filter
Main article: Mechanical filter (respirator)
Mechanical filters remove contaminants from the air in several ways. In interception, particles following a line of flow in the airstream come within one radius of a fiber and adhere to it. In impaction, larger particles that cannot follow the curving contours of the airstream embed directly in a fiber; this effect increases with diminishing fiber separation and higher airflow velocity. In diffusion, gas molecules collide with the smallest particles, especially those below 100 nm in diameter, impeding and delaying their path through the filter and increasing the probability that they will be stopped by either of the previous two mechanisms. Finally, an electrostatic charge on the fibers attracts and holds particles on the filter surface.
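If the four capture mechanisms are treated as acting independently, their efficiencies are commonly combined by multiplying the corresponding penetrations. The snippet below is a minimal sketch of that combination rule with invented per-mechanism values; real filtration models are considerably more involved and depend on particle size, fiber geometry, and flow conditions.

def combined_efficiency(mechanism_efficiencies):
    """Combine per-mechanism capture efficiencies, assuming they act
    independently: total penetration is the product of the individual
    penetrations (1 - efficiency)."""
    penetration = 1.0
    for e in mechanism_efficiencies:
        penetration *= (1.0 - e)
    return 1.0 - penetration

# Hypothetical single-pass efficiencies for one particle size
mechanisms = {
    "interception": 0.30,
    "impaction": 0.10,
    "diffusion": 0.40,
    "electrostatic": 0.50,
}
print(f"combined capture efficiency: {combined_efficiency(mechanisms.values()):.2%}")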
There are many different filtration standards that vary by jurisdiction. In the United States, the National Institute for Occupational Safety and Health defines the categories of particulate filters according to their NIOSH air filtration rating. The most common of these is the N95, which filters at least 95% of airborne particles but is not resistant to oil.
Other categories filter 99% or 99.97% of particles, or have varying degrees of resistance to oil.
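The NIOSH designation can be read as an oil-resistance letter plus a minimum filtration efficiency. The lookup below restates the figures given above (95%, 99%, 99.97%) together with the standard N/R/P letters; it is a convenience sketch, not an official table.

# Minimum filtration efficiency by numeric grade (99.97% is written "100")
EFFICIENCY = {"95": 0.95, "99": 0.99, "100": 0.9997}
# Oil resistance by letter prefix
OIL = {"N": "not resistant to oil", "R": "somewhat resistant to oil", "P": "oil-proof"}

def describe(rating: str) -> str:
    """Describe a NIOSH particulate filter rating such as 'N95' or 'P100'."""
    letter, grade = rating[0].upper(), rating[1:]
    return f"{rating}: filters at least {EFFICIENCY[grade]:.2%} of particles, {OIL[letter]}"

print(describe("N95"))
print(describe("P100"))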
In the European Union, European standard EN 143 defines the 'P' classes of particle filters that can be attached to a face mask, while European standard EN 149 defines classes of "filtering half masks" or "filtering facepieces", usually called FFP masks.
According to 3M, the filtering media in respirators made to the following standards are similar to those in U.S. N95 or European FFP2 respirators; however, the construction of the respirators themselves, such as whether they provide a proper seal to the face, varies considerably. (For example, US NIOSH-approved respirators never use earloops, because earloops do not provide enough support to establish a reliable, airtight seal.) These standards include the Chinese KN95, the Australian/New Zealand P2, the Korean 1st Class (commonly referred to as KF94), and the Japanese DS.
Canister or chemical cartridge
Chemical cartridges and gas mask canisters remove gases, volatile organic compounds (VOCs), and other vapors from breathing air by adsorption, absorption, or chemisorption. A typical organic vapor respirator cartridge is a metal or plastic case containing from 25 to 40 grams of sorption media such as activated charcoal or certain resins. The service life of the cartridge varies based, among other variables, on the carbon weight and molecular weight of the vapor and the cartridge media, the concentration of vapor in the atmosphere, the relative humidity of the atmosphere, and the breathing rate of the respirator wearer. When filter cartridges become saturated or particulate accumulation within them begins to restrict air flow, they must be changed.
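A first-order way to reason about the variables listed above is a simple mass-balance estimate: the cartridge lasts roughly as long as its usable sorbent capacity divided by the rate at which vapor is drawn in. The sketch below is a deliberately crude illustration with invented numbers; it ignores humidity, adsorption kinetics, and safety margins, and must not be used to set real change-out schedules.

def rough_service_life_hours(capacity_g, concentration_mg_m3, breathing_rate_m3_h):
    """Crude mass-balance estimate of cartridge service life.

    capacity_g           -- mass of vapor the sorbent bed can hold (grams)
    concentration_mg_m3  -- vapor concentration in the workplace air (mg/m^3)
    breathing_rate_m3_h  -- air volume drawn through the cartridge per hour (m^3/h)
    """
    uptake_mg_per_hour = concentration_mg_m3 * breathing_rate_m3_h
    return (capacity_g * 1000.0) / uptake_mg_per_hour

# Hypothetical: 5 g usable capacity, 200 mg/m^3 vapor, moderate work rate of 1.5 m^3/h
print(f"{rough_service_life_hours(5, 200, 1.5):.1f} hours")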
If the concentration of harmful gases is immediately dangerous to life or health, then in workplaces covered by the Occupational Safety and Health Act the US Occupational Safety and Health Administration requires the use of air-supplied respirators, except for respirators intended solely for escape during emergencies. NIOSH likewise discourages the use of air-purifying respirators under such conditions.
Air-purifying respirators
Filtering facepiece
Elastomeric
Powered air-purifying respirators
Atmosphere-supplying respirators
These respirators do not purify the ambient air but supply breathing gas from another source. The three types are the self-contained breathing apparatus, in which the wearer carries a compressed-air cylinder; the supplied-air respirator, in which a hose supplies air from a stationary source; and the combination supplied-air respirator, which adds an emergency backup tank.
Self-contained breathing apparatus
Supplied air respirator
Escape respirators
Smoke hood
Self-contained breathing apparatus
Continuous-flow
Self-rescue device
Issues
Under 30 CFR 11
In 1992, NIOSH published a draft report on the effectiveness of respirator regulations under the then-current 30 CFR 11. At the time, particulate respirators were mainly classified as DM, DFM, or HEPA.
Respirator risk modelling
Assigned protection factors (APF) are predicated on the assumption that users are trained in the use of their respirators and that 100% of users exceed the APF. This "simulated workplace protection factor" (SWPF) was said to be problematic.
NIOSH terms the ideal assumption that all respirator users exceed the APF the "zero control failure rate". The control failure rate here refers to the number of respirator users, per 100 users, who fail to reach the APF. According to NIOSH, the risk of user error affecting the failure rate, and the studies quantifying it, are akin to studies of contraceptive failure rates.
This is despite there being a "reasonable expectation, of both purchasers and users, [that] none of the users will receive less protection than the class APF (when the masks are properly selected, fit tested by the employer, and properly worn by the users)". NIOSH expands on the methods for measuring this error in Chapter 7 of the draft report.
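The control failure rate defined above can be estimated directly from measured workplace protection factors: count the users whose measured protection falls below the class APF and express that count per 100 users. The sketch below assumes such per-user measurements are available; the function name and the sample values are invented for illustration.

def control_failure_rate(protection_factors, apf):
    """Failures per 100 users: users whose measured workplace protection
    factor falls below the assigned protection factor (APF)."""
    failures = sum(1 for pf in protection_factors if pf < apf)
    return 100.0 * failures / len(protection_factors)

# Hypothetical measured protection factors for a half-mask class with APF 10
measured = [8, 12, 25, 40, 9, 15, 60, 7, 30, 22]
print(f"{control_failure_rate(measured, apf=10):.0f} failures per 100 users")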
Qualitative fit testing
Qualitative fit testing with isoamyl acetate, irritant smoke, and saccharin was proposed in the 1980s as an alternative to quantitative fit testing, but doubts were raised as to its efficacy.
The effectiveness of fit testing in general has also been questioned.
Exercise protocols
With regard to fit test protocols, NIOSH noted that "time pressures" resulted in the exclusion of intense exercises meant to simulate workplace use.
Neither of these exercises was included in the OSHA fit test protocols.
Noncompliance with regulation
Despite OSHA's fit-testing requirement, NIOSH and OSHA made the following observations of noncompliance with respirator regulations:
Almost 80% of negative-pressure respirator wearers were not receiving fit testing.
Over 70% of 123,000 manufacturing plants did not perform exposure-level monitoring, when selecting respirators to use in the plants.
Noncompliance increased to almost 90% for the smallest plants.
75% of manufacturing plants did not have a written program.
56% of manufacturing plants did not have a professional respirator-program administrator (i.e., qualified individual supervising the program).
Almost 50% of wearers in manufacturing plants did not receive an annual examination by a physician.
Almost 50% of wearers in manufacturing plants did not receive respirator-use training.
80% of wearers in manufacturing plants did not have access to more than one facial-size mask, even though nearly all reusable masks were available in at least three sizes.
Hierarchy of Controls point of view under 42 CFR 84
The Hierarchy of Controls, promoted as part of the Prevention Through Design initiative started by NIOSH together with other standards bodies, is a set of guidelines that emphasizes building safety in during design rather than relying on ad-hoc measures such as PPE; multiple entities provide guidance on implementing safety during development outside of NIOSH-approved respirators. US government entities currently and formerly involved in regulating respirators, including OSHA and MSHA, follow the Hierarchy of Controls.
However, some Hierarchy of Controls implementations, notably MSHA's, have been criticized for allowing mine operators to skirt engineering-control noncompliance by requiring miners to wear respirators when the permissible exposure limit (PEL) is exceeded, without work stoppages, breaking the hierarchy of engineering controls. Another concern was fraud related to the difficulty of scrutinizing engineering controls, unlike NIOSH-approved respirators such as the N95, which can be fit tested by anyone, are subject to NIOSH scrutiny, and are trademarked and protected under US federal law. NIOSH also noted, in a 2002 video about TB respirator use, that "engineering controls, like negative pressure isolation rooms may not control the TB hazard completely. The use of respirators is necessary".
Respirator non-compliance
With regard to people complying with requirements to wear respirators, various papers report high respirator non-compliance across industries. One survey found that non-compliance was due in large part to discomfort from temperature increases along the face, and a large proportion of respondents also cited the social unacceptability of the provided N95 respirators. Because of mishandling, ill-fitting respirators, and lack of training, the Hierarchy of Controls dictates that respirators be evaluated last while other controls exist and are working. Alternative controls such as hazard elimination, administrative controls, and engineering controls like ventilation are less likely to fail due to user discomfort or error.
A U.S. Department of Labor study showed that in almost 40 thousand American enterprises, the requirements for the correct use of respirators are not always met. Experts note that in practice it is difficult to eliminate occupational morbidity with the help of respirators alone.
Beards
Certain types of facial hair can reduce fit to a significant degree. For this reason, there are facial hair guidelines for respirator users. This is another example of potential respirator non-compliance.
Counterfeiting, modification, and revocation of regulated respirators
Another disadvantage of respirators is that the onus is on the respirator user to determine if their respirator is counterfeit or has had its certification revoked. Customers and employers can inadvertently purchase non-OEM parts for a NIOSH-approved respirator which void the NIOSH approval and violate OSHA laws, in addition to potentially compromising the fit of the respirator. This is another example of respirator mishandling under the Hierarchy of Controls.
Issues with fit testing
If respirators must be used, OSHA requires, under 29 CFR 1910.134, that respirator users undergo a fit test, with a safety factor of 10 to offset the lower fit achieved during real-world use. However, NIOSH notes that the large amount of time required for fit testing has been a point of contention for employers.
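In quantitative fit testing, an overall fit factor is commonly computed as the harmonic mean of the fit factors measured during the individual test exercises and compared against a pass level well above the APF, reflecting the safety factor of 10 mentioned above. The sketch below illustrates that arithmetic; the exercise values and the pass level of 100 for a half mask are assumptions reflecting common practice and should be checked against the applicable protocol.

def overall_fit_factor(exercise_fit_factors):
    """Harmonic mean of per-exercise fit factors, as commonly used in
    quantitative fit testing."""
    n = len(exercise_fit_factors)
    return n / sum(1.0 / ff for ff in exercise_fit_factors)

# Hypothetical per-exercise results for a half mask (APF 10, assumed pass level 100)
exercises = [250, 180, 90, 300, 220, 150, 130, 200]
ff = overall_fit_factor(exercises)
print(f"overall fit factor {ff:.0f} -> {'pass' if ff >= 100 else 'fail'}")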
Other criticisms concern the change in respirator performance during actual use compared with fit testing, and compared with engineering control alternatives.
Issues with respirator design
Extended or off-label use of certain negative-pressure respirators, such as a filtering facepiece respirator paired with a surgical mask, can result in higher carbon dioxide levels from dead space and increased breathing resistance (pressure drop), which can impair functioning and sometimes exceed the PEL. This effect was significantly reduced with powered air-purifying respirators. In various surveys among healthcare workers, headaches, dermatitis and acne have been reported.
Complaints have been leveled at the early LANL NIOSH fit test panels (which drew primarily on military personnel) as being unrepresentative of the broader American populace. Later fit test panels, based on a NIOSH facial survey conducted in 2003, reached 95% representation of the working US population surveyed. Despite these developments, 42 CFR 84, the US regulation NIOSH follows for respirator approval, allows respirators that do not follow the NIOSH fit test panel, provided that more than one facepiece size is offered and no chemical cartridges are made available.
Issues with lack of regulation
Respirators designed to non-US standards may not be subject to as much or any scrutiny:
In China, under GB2626-2019, which includes standards like KN95, there is no procedure for fit testing.
Some jurisdictions allow respirator filtration ratings lower than 95%; such respirators are not rated to protect against respiratory infection, asbestos, or other dangerous occupational hazards. They are sometimes known as dust masks because they are approved almost exclusively against nuisance dusts:
In Europe, regulation allows for dust masks under FFP1, where 20% inward leakage is allowed, with a minimum filtration efficiency of 80%.
South Korea allows 20% filter leakage under KF80.
In the US, NIOSH noted that under standards predating the N95, 'Dust/Mist' rated respirators could not prevent the spread of TB.
Regulation
The choice and use of respirators in developed countries is regulated by national legislation. To help employers choose respirators correctly and run high-quality respiratory protection programs, various guides and textbooks have been developed.
For standard filter classes used in respirators, see Mechanical filter (respirator)#Filtration standards.
Voluntary respirator use
United States
| Technology | Food, water and health | null |